00:00:00.001  Started by upstream project "autotest-spdk-v24.01-LTS-vs-dpdk-v22.11" build number 979
00:00:00.001  originally caused by:
00:00:00.001   Started by upstream project "nightly-trigger" build number 3646
00:00:00.001   originally caused by:
00:00:00.001    Started by timer
00:00:00.141  Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/ubuntu22-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.142  The recommended git tool is: git
00:00:00.142  using credential 00000000-0000-0000-0000-000000000002
00:00:00.144   > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/ubuntu22-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.182  Fetching changes from the remote Git repository
00:00:00.184   > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.217  Using shallow fetch with depth 1
00:00:00.217  Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.217   > git --version # timeout=10
00:00:00.247   > git --version # 'git version 2.39.2'
00:00:00.247  using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.264  Setting http proxy: proxy-dmz.intel.com:911
00:00:00.264   > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:12.887   > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:12.900   > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:12.911  Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:12.911   > git config core.sparsecheckout # timeout=10
00:00:12.922   > git read-tree -mu HEAD # timeout=10
00:00:12.935   > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:12.954  Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:12.954   > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:13.030  [Pipeline] Start of Pipeline
00:00:13.047  [Pipeline] library
00:00:13.050  Loading library shm_lib@master
00:00:13.050  Library shm_lib@master is cached. Copying from home.
00:00:13.069  [Pipeline] node
00:00:13.080  Running on VM-host-SM4 in /var/jenkins/workspace/ubuntu22-vg-autotest
00:00:13.082  [Pipeline] {
00:00:13.093  [Pipeline] catchError
00:00:13.096  [Pipeline] {
00:00:13.114  [Pipeline] wrap
00:00:13.125  [Pipeline] {
00:00:13.135  [Pipeline] stage
00:00:13.138  [Pipeline] { (Prologue)
00:00:13.162  [Pipeline] echo
00:00:13.164  Node: VM-host-SM4
00:00:13.172  [Pipeline] cleanWs
00:00:13.185  [WS-CLEANUP] Deleting project workspace...
00:00:13.185  [WS-CLEANUP] Deferred wipeout is used...
00:00:13.193  [WS-CLEANUP] done
00:00:13.442  [Pipeline] setCustomBuildProperty
00:00:13.507  [Pipeline] httpRequest
00:00:13.888  [Pipeline] echo
00:00:13.889  Sorcerer 10.211.164.20 is alive
00:00:13.895  [Pipeline] retry
00:00:13.897  [Pipeline] {
00:00:13.905  [Pipeline] httpRequest
00:00:13.909  HttpMethod: GET
00:00:13.910  URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:13.910  Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:13.933  Response Code: HTTP/1.1 200 OK
00:00:13.934  Success: Status code 200 is in the accepted range: 200,404
00:00:13.935  Saving response body to /var/jenkins/workspace/ubuntu22-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:32.632  [Pipeline] }
00:00:32.651  [Pipeline] // retry
00:00:32.660  [Pipeline] sh
00:00:32.944  + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:32.961  [Pipeline] httpRequest
00:00:33.341  [Pipeline] echo
00:00:33.343  Sorcerer 10.211.164.20 is alive
00:00:33.349  [Pipeline] retry
00:00:33.351  [Pipeline] {
00:00:33.360  [Pipeline] httpRequest
00:00:33.363  HttpMethod: GET
00:00:33.364  URL: http://10.211.164.20/packages/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz
00:00:33.364  Sending request to url: http://10.211.164.20/packages/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz
00:00:33.366  Response Code: HTTP/1.1 200 OK
00:00:33.366  Success: Status code 200 is in the accepted range: 200,404
00:00:33.366  Saving response body to /var/jenkins/workspace/ubuntu22-vg-autotest/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz
00:00:50.237  [Pipeline] }
00:00:50.253  [Pipeline] // retry
00:00:50.260  [Pipeline] sh
00:00:50.538  + tar --no-same-owner -xf spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz
00:00:53.082  [Pipeline] sh
00:00:53.362  + git -C spdk log --oneline -n5
00:00:53.362  c13c99a5e test: Various fixes for Fedora40
00:00:53.362  726a04d70 test/nvmf: adjust timeout for bigger nvmes
00:00:53.362  61c96acfb dpdk: Point dpdk submodule at a latest fix from spdk-23.11
00:00:53.362  7db6dcdb8 nvme/fio_plugin: update the way ruhs descriptors are fetched
00:00:53.362  ff6f5c41e nvme/fio_plugin: trim add support for multiple ranges
00:00:53.381  [Pipeline] withCredentials
00:00:53.393   > git --version # timeout=10
00:00:53.406   > git --version # 'git version 2.39.2'
00:00:53.473  Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS
00:00:53.475  [Pipeline] {
00:00:53.485  [Pipeline] retry
00:00:53.487  [Pipeline] {
00:00:53.503  [Pipeline] sh
00:00:53.791  + git ls-remote http://dpdk.org/git/dpdk-stable v22.11.4
00:00:53.802  [Pipeline] }
00:00:53.822  [Pipeline] // retry
00:00:53.828  [Pipeline] }
00:00:53.848  [Pipeline] // withCredentials
00:00:53.859  [Pipeline] httpRequest
00:00:54.224  [Pipeline] echo
00:00:54.227  Sorcerer 10.211.164.20 is alive
00:00:54.238  [Pipeline] retry
00:00:54.240  [Pipeline] {
00:00:54.256  [Pipeline] httpRequest
00:00:54.261  HttpMethod: GET
00:00:54.262  URL: http://10.211.164.20/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz
00:00:54.263  Sending request to url: http://10.211.164.20/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz
00:00:54.263  Response Code: HTTP/1.1 200 OK
00:00:54.264  Success: Status code 200 is in the accepted range: 200,404
00:00:54.264  Saving response body to /var/jenkins/workspace/ubuntu22-vg-autotest/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz
00:01:01.110  [Pipeline] }
00:01:01.127  [Pipeline] // retry
00:01:01.135  [Pipeline] sh
00:01:01.416  + tar --no-same-owner -xf dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz
00:01:02.804  [Pipeline] sh
00:01:03.085  + git -C dpdk log --oneline -n5
00:01:03.085  caf0f5d395 version: 22.11.4
00:01:03.085  7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt"
00:01:03.085  dc9c799c7d vhost: fix missing spinlock unlock
00:01:03.085  4307659a90 net/mlx5: fix LACP redirection in Rx domain
00:01:03.085  6ef77f2a5e net/gve: fix RX buffer size alignment
00:01:03.102  [Pipeline] writeFile
00:01:03.117  [Pipeline] sh
00:01:03.399  + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:01:03.412  [Pipeline] sh
00:01:03.698  + cat autorun-spdk.conf
00:01:03.698  SPDK_TEST_UNITTEST=1
00:01:03.698  SPDK_RUN_FUNCTIONAL_TEST=1
00:01:03.698  SPDK_TEST_NVME=1
00:01:03.698  SPDK_TEST_BLOCKDEV=1
00:01:03.698  SPDK_RUN_ASAN=1
00:01:03.698  SPDK_RUN_UBSAN=1
00:01:03.698  SPDK_TEST_RAID5=1
00:01:03.698  SPDK_TEST_NATIVE_DPDK=v22.11.4
00:01:03.698  SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build
00:01:03.698  SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:03.705  RUN_NIGHTLY=1
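The autorun-spdk.conf dump above is a flat KEY=VALUE flag file; later steps in this log source it and gate work on the flags (see the `+ source ... autorun-spdk.conf` traces and the `((  SPDK_TEST_NVME_CMB == 1 ... ))` checks further down). A minimal sketch of that consumption pattern, with the file name taken from this log and the gating stage purely illustrative:

    #!/usr/bin/env bash
    # Sketch: load the flag file, then branch on the 0/1 flags it defines.
    conf=autorun-spdk.conf
    [[ -e $conf ]] || { echo "missing $conf" >&2; exit 1; }
    source "$conf"
    if (( SPDK_RUN_ASAN == 1 && SPDK_RUN_UBSAN == 1 )); then
        echo "sanitizer build requested; external DPDK at ${SPDK_RUN_EXTERNAL_DPDK:-none}"
    fi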
00:01:03.707  [Pipeline] }
00:01:03.721  [Pipeline] // stage
00:01:03.737  [Pipeline] stage
00:01:03.740  [Pipeline] { (Run VM)
00:01:03.754  [Pipeline] sh
00:01:04.036  + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:01:04.036  + echo 'Start stage prepare_nvme.sh'
00:01:04.036  Start stage prepare_nvme.sh
00:01:04.036  + [[ -n 5 ]]
00:01:04.036  + disk_prefix=ex5
00:01:04.036  + [[ -n /var/jenkins/workspace/ubuntu22-vg-autotest ]]
00:01:04.036  + [[ -e /var/jenkins/workspace/ubuntu22-vg-autotest/autorun-spdk.conf ]]
00:01:04.036  + source /var/jenkins/workspace/ubuntu22-vg-autotest/autorun-spdk.conf
00:01:04.036  ++ SPDK_TEST_UNITTEST=1
00:01:04.036  ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:04.036  ++ SPDK_TEST_NVME=1
00:01:04.036  ++ SPDK_TEST_BLOCKDEV=1
00:01:04.036  ++ SPDK_RUN_ASAN=1
00:01:04.036  ++ SPDK_RUN_UBSAN=1
00:01:04.036  ++ SPDK_TEST_RAID5=1
00:01:04.036  ++ SPDK_TEST_NATIVE_DPDK=v22.11.4
00:01:04.036  ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build
00:01:04.036  ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:04.036  ++ RUN_NIGHTLY=1
00:01:04.036  + cd /var/jenkins/workspace/ubuntu22-vg-autotest
00:01:04.036  + nvme_files=()
00:01:04.036  + declare -A nvme_files
00:01:04.036  + backend_dir=/var/lib/libvirt/images/backends
00:01:04.036  + nvme_files['nvme.img']=5G
00:01:04.036  + nvme_files['nvme-cmb.img']=5G
00:01:04.036  + nvme_files['nvme-multi0.img']=4G
00:01:04.036  + nvme_files['nvme-multi1.img']=4G
00:01:04.036  + nvme_files['nvme-multi2.img']=4G
00:01:04.036  + nvme_files['nvme-openstack.img']=8G
00:01:04.036  + nvme_files['nvme-zns.img']=5G
00:01:04.036  + ((  SPDK_TEST_NVME_PMR == 1  ))
00:01:04.036  + ((  SPDK_TEST_FTL == 1  ))
00:01:04.036  + ((  SPDK_TEST_NVME_FDP == 1  ))
00:01:04.036  + [[ ! -d /var/lib/libvirt/images/backends ]]
00:01:04.036  + for nvme in "${!nvme_files[@]}"
00:01:04.036  + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi2.img -s 4G
00:01:04.036  Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:01:04.036  + for nvme in "${!nvme_files[@]}"
00:01:04.036  + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-cmb.img -s 5G
00:01:04.036  Formatting '/var/lib/libvirt/images/backends/ex5-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:01:04.036  + for nvme in "${!nvme_files[@]}"
00:01:04.036  + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-openstack.img -s 8G
00:01:04.036  Formatting '/var/lib/libvirt/images/backends/ex5-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:01:04.036  + for nvme in "${!nvme_files[@]}"
00:01:04.036  + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-zns.img -s 5G
00:01:04.036  Formatting '/var/lib/libvirt/images/backends/ex5-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:01:04.036  + for nvme in "${!nvme_files[@]}"
00:01:04.036  + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi1.img -s 4G
00:01:04.036  Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:01:04.036  + for nvme in "${!nvme_files[@]}"
00:01:04.036  + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi0.img -s 4G
00:01:04.296  Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:01:04.296  + for nvme in "${!nvme_files[@]}"
00:01:04.296  + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme.img -s 5G
00:01:04.296  Formatting '/var/lib/libvirt/images/backends/ex5-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:01:04.296  ++ sudo grep -rl ex5-nvme.img /etc/libvirt/qemu
00:01:04.296  + echo 'End stage prepare_nvme.sh'
00:01:04.296  End stage prepare_nvme.sh
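The trace above is the core of prepare_nvme.sh: a bash associative array maps backing-image names to sizes, and a single loop feeds every pair to create_nvme_img.sh (associative arrays iterate in arbitrary order, which is why the images above are not created in declaration order). A condensed, self-contained sketch of that pattern, with paths, prefix, and helper name copied from the log and the loop body as the assumed equivalent of the traced commands:

    #!/usr/bin/env bash
    # Map image name -> size, then create each raw backing file.
    declare -A nvme_files=(
        [nvme.img]=5G        [nvme-cmb.img]=5G    [nvme-zns.img]=5G
        [nvme-multi0.img]=4G [nvme-multi1.img]=4G [nvme-multi2.img]=4G
        [nvme-openstack.img]=8G
    )
    disk_prefix=ex5
    backend_dir=/var/lib/libvirt/images/backends
    for nvme in "${!nvme_files[@]}"; do
        sudo -E spdk/scripts/vagrant/create_nvme_img.sh \
            -n "$backend_dir/$disk_prefix-$nvme" -s "${nvme_files[$nvme]}"
    done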
00:01:04.308  [Pipeline] sh
00:01:04.590  + DISTRO=ubuntu2204 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:01:04.590  Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex5-nvme.img -H -a -v -f ubuntu2204
00:01:04.590  
00:01:04.590  DIR=/var/jenkins/workspace/ubuntu22-vg-autotest/spdk/scripts/vagrant
00:01:04.590  SPDK_DIR=/var/jenkins/workspace/ubuntu22-vg-autotest/spdk
00:01:04.590  VAGRANT_TARGET=/var/jenkins/workspace/ubuntu22-vg-autotest
00:01:04.590  HELP=0
00:01:04.590  DRY_RUN=0
00:01:04.590  NVME_FILE=/var/lib/libvirt/images/backends/ex5-nvme.img,
00:01:04.590  NVME_DISKS_TYPE=nvme,
00:01:04.590  NVME_AUTO_CREATE=0
00:01:04.590  NVME_DISKS_NAMESPACES=,
00:01:04.590  NVME_CMB=,
00:01:04.590  NVME_PMR=,
00:01:04.590  NVME_ZNS=,
00:01:04.590  NVME_MS=,
00:01:04.590  NVME_FDP=,
00:01:04.590  SPDK_VAGRANT_DISTRO=ubuntu2204
00:01:04.590  SPDK_VAGRANT_VMCPU=10
00:01:04.590  SPDK_VAGRANT_VMRAM=12288
00:01:04.590  SPDK_VAGRANT_PROVIDER=libvirt
00:01:04.590  SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911
00:01:04.590  SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:01:04.590  SPDK_OPENSTACK_NETWORK=0
00:01:04.590  VAGRANT_PACKAGE_BOX=0
00:01:04.590  VAGRANTFILE=/var/jenkins/workspace/ubuntu22-vg-autotest/spdk/scripts/vagrant/Vagrantfile
00:01:04.590  FORCE_DISTRO=true
00:01:04.590  VAGRANT_BOX_VERSION=
00:01:04.590  EXTRA_VAGRANTFILES=
00:01:04.590  NIC_MODEL=e1000
00:01:04.590  
00:01:04.590  mkdir: created directory '/var/jenkins/workspace/ubuntu22-vg-autotest/ubuntu2204-libvirt'
00:01:04.590  /var/jenkins/workspace/ubuntu22-vg-autotest/ubuntu2204-libvirt /var/jenkins/workspace/ubuntu22-vg-autotest
00:01:07.884  Bringing machine 'default' up with 'libvirt' provider...
00:01:07.884  ==> default: Creating image (snapshot of base box volume).
00:01:08.143  ==> default: Creating domain with the following settings...
00:01:08.143  ==> default:  -- Name:              ubuntu2204-22.04-1711172311-2200_default_1732034700_ce0fb9b5b1d2b7c15d80
00:01:08.143  ==> default:  -- Domain type:       kvm
00:01:08.143  ==> default:  -- Cpus:              10
00:01:08.143  ==> default:  -- Feature:           acpi
00:01:08.143  ==> default:  -- Feature:           apic
00:01:08.143  ==> default:  -- Feature:           pae
00:01:08.143  ==> default:  -- Memory:            12288M
00:01:08.143  ==> default:  -- Memory Backing:    hugepages: 
00:01:08.143  ==> default:  -- Management MAC:    
00:01:08.143  ==> default:  -- Loader:            
00:01:08.143  ==> default:  -- Nvram:             
00:01:08.143  ==> default:  -- Base box:          spdk/ubuntu2204
00:01:08.143  ==> default:  -- Storage pool:      default
00:01:08.143  ==> default:  -- Image:             /var/lib/libvirt/images/ubuntu2204-22.04-1711172311-2200_default_1732034700_ce0fb9b5b1d2b7c15d80.img (20G)
00:01:08.143  ==> default:  -- Volume Cache:      default
00:01:08.143  ==> default:  -- Kernel:            
00:01:08.143  ==> default:  -- Initrd:            
00:01:08.143  ==> default:  -- Graphics Type:     vnc
00:01:08.143  ==> default:  -- Graphics Port:     -1
00:01:08.143  ==> default:  -- Graphics IP:       127.0.0.1
00:01:08.143  ==> default:  -- Graphics Password: Not defined
00:01:08.143  ==> default:  -- Video Type:        cirrus
00:01:08.143  ==> default:  -- Video VRAM:        9216
00:01:08.143  ==> default:  -- Sound Type:
00:01:08.143  ==> default:  -- Keymap:            en-us
00:01:08.143  ==> default:  -- TPM Path:          
00:01:08.143  ==> default:  -- INPUT:             type=mouse, bus=ps2
00:01:08.143  ==> default:  -- Command line args: 
00:01:08.143  ==> default:     -> value=-device, 
00:01:08.143  ==> default:     -> value=nvme,id=nvme-0,serial=12340, 
00:01:08.143  ==> default:     -> value=-drive, 
00:01:08.143  ==> default:     -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme.img,if=none,id=nvme-0-drive0, 
00:01:08.143  ==> default:     -> value=-device, 
00:01:08.143  ==> default:     -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 
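Read as one qemu invocation, the `-> value=` pairs above attach the raw backing file as an NVMe namespace: an nvme controller device, a detached drive, and an nvme-ns device binding that drive to the controller with 4 KiB logical and physical blocks. Assembled on a command line they would look roughly like this (emulator path from the Setup block above; all other VM arguments omitted):

    /usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 ... \
        -device nvme,id=nvme-0,serial=12340 \
        -drive format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme.img,if=none,id=nvme-0-drive0 \
        -device nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096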
00:01:08.143  ==> default: Creating shared folders metadata...
00:01:08.143  ==> default: Starting domain.
00:01:10.127  ==> default: Waiting for domain to get an IP address...
00:01:20.105  ==> default: Waiting for SSH to become available...
00:01:22.650  ==> default: Configuring and enabling network interfaces...
00:01:27.921  ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/ubuntu22-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk
00:01:34.486  ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/ubuntu22-vg-autotest/dpdk/ => /home/vagrant/spdk_repo/dpdk
00:01:38.678  ==> default: Mounting SSHFS shared folder...
00:01:40.056  ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/ubuntu22-vg-autotest/ubuntu2204-libvirt/output => /home/vagrant/spdk_repo/output
00:01:40.056  ==> default: Checking Mount..
00:01:40.623  ==> default: Folder Successfully Mounted!
00:01:40.623  ==> default: Running provisioner: file...
00:01:41.191      default: ~/.gitconfig => .gitconfig
00:01:41.449  
00:01:41.449    SUCCESS!
00:01:41.449  
00:01:41.449    cd to /var/jenkins/workspace/ubuntu22-vg-autotest/ubuntu2204-libvirt and type "vagrant ssh" to use.
00:01:41.449    Use vagrant "suspend" and vagrant "resume" to stop and start.
00:01:41.449    Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/ubuntu22-vg-autotest/ubuntu2204-libvirt" to destroy all trace of vm.
00:01:41.449  
00:01:41.458  [Pipeline] }
00:01:41.474  [Pipeline] // stage
00:01:41.482  [Pipeline] dir
00:01:41.483  Running in /var/jenkins/workspace/ubuntu22-vg-autotest/ubuntu2204-libvirt
00:01:41.485  [Pipeline] {
00:01:41.497  [Pipeline] catchError
00:01:41.499  [Pipeline] {
00:01:41.511  [Pipeline] sh
00:01:41.789  + vagrant ssh-config --host vagrant
00:01:41.789  + sed -ne /^Host/,$p
00:01:41.789  + tee ssh_conf
00:01:45.091  Host vagrant
00:01:45.091    HostName 192.168.121.94
00:01:45.091    User vagrant
00:01:45.091    Port 22
00:01:45.091    UserKnownHostsFile /dev/null
00:01:45.091    StrictHostKeyChecking no
00:01:45.091    PasswordAuthentication no
00:01:45.091    IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-ubuntu2204/22.04-1711172311-2200/libvirt/ubuntu2204
00:01:45.091    IdentitiesOnly yes
00:01:45.091    LogLevel FATAL
00:01:45.091    ForwardAgent yes
00:01:45.091    ForwardX11 yes
00:01:45.091  
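The block above is what `vagrant ssh-config` emitted; `sed -ne '/^Host/,$p'` keeps everything from the first Host line onward, and `tee` writes it to ssh_conf so later steps can reach the VM non-interactively without vagrant in the loop. The reuse pattern, matching the traced commands below (the command and file names here are illustrative):

    ssh -t -F ssh_conf vagrant@vagrant 'uname -a'
    scp -F ssh_conf -r ./some_script.sh vagrant@vagrant:./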
00:01:45.123  [Pipeline] withEnv
00:01:45.125  [Pipeline] {
00:01:45.141  [Pipeline] sh
00:01:45.422  + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash
00:01:45.422  		source /etc/os-release
00:01:45.422  		[[ -e /image.version ]] && img=$(< /image.version)
00:01:45.422  		# Minimal, systemd-like check.
00:01:45.422  		if [[ -e /.dockerenv ]]; then
00:01:45.422  			# Clear garbage from the node's name:
00:01:45.422  			#  agt-er_autotest_547-896 -> autotest_547-896
00:01:45.422  			#  $HOSTNAME is the actual container id
00:01:45.422  			agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:01:45.422  			if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:01:45.422  				# We can assume this is a mount from a host where container is running,
00:01:45.422  				# so fetch its hostname to easily identify the target swarm worker.
00:01:45.422  				container="$(< /etc/hostname) ($agent)"
00:01:45.422  			else
00:01:45.422  				# Fallback
00:01:45.422  				container=$agent
00:01:45.422  			fi
00:01:45.422  		fi
00:01:45.422  		echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:01:45.422  
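The inlined script above assembles one pipe-separated status line per node: distro name and version from /etc/os-release, the kernel from uname, then an image version and a container id that both fall back to N/A. With the values this VM reports later in the log (Ubuntu 22.04, kernel 5.15.0-101-generic), and assuming /image.version is absent, the final echo would print something like:

    Ubuntu 22.04|5.15.0-101-generic|N/A|N/A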
00:01:45.693  [Pipeline] }
00:01:45.709  [Pipeline] // withEnv
00:01:45.718  [Pipeline] setCustomBuildProperty
00:01:45.735  [Pipeline] stage
00:01:45.737  [Pipeline] { (Tests)
00:01:45.756  [Pipeline] sh
00:01:46.038  + scp -F ssh_conf -r /var/jenkins/workspace/ubuntu22-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:01:46.312  [Pipeline] sh
00:01:46.593  + scp -F ssh_conf -r /var/jenkins/workspace/ubuntu22-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:01:46.865  [Pipeline] timeout
00:01:46.866  Timeout set to expire in 1 hr 30 min
00:01:46.867  [Pipeline] {
00:01:46.881  [Pipeline] sh
00:01:47.161  + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard
00:01:47.727  HEAD is now at c13c99a5e test: Various fixes for Fedora40
00:01:47.739  [Pipeline] sh
00:01:48.020  + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo
00:01:48.293  [Pipeline] sh
00:01:48.572  + scp -F ssh_conf -r /var/jenkins/workspace/ubuntu22-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:01:48.849  [Pipeline] sh
00:01:49.127  + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=ubuntu22-vg-autotest ./autoruner.sh spdk_repo
00:01:49.385  ++ readlink -f spdk_repo
00:01:49.385  + DIR_ROOT=/home/vagrant/spdk_repo
00:01:49.385  + [[ -n /home/vagrant/spdk_repo ]]
00:01:49.385  + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:01:49.385  + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:01:49.385  + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:01:49.385  + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:01:49.385  + [[ -d /home/vagrant/spdk_repo/output ]]
00:01:49.385  + [[ ubuntu22-vg-autotest == pkgdep-* ]]
00:01:49.385  + cd /home/vagrant/spdk_repo
00:01:49.385  + source /etc/os-release
00:01:49.385  ++ PRETTY_NAME='Ubuntu 22.04.4 LTS'
00:01:49.385  ++ NAME=Ubuntu
00:01:49.385  ++ VERSION_ID=22.04
00:01:49.385  ++ VERSION='22.04.4 LTS (Jammy Jellyfish)'
00:01:49.385  ++ VERSION_CODENAME=jammy
00:01:49.385  ++ ID=ubuntu
00:01:49.385  ++ ID_LIKE=debian
00:01:49.385  ++ HOME_URL=https://www.ubuntu.com/
00:01:49.385  ++ SUPPORT_URL=https://help.ubuntu.com/
00:01:49.385  ++ BUG_REPORT_URL=https://bugs.launchpad.net/ubuntu/
00:01:49.385  ++ PRIVACY_POLICY_URL=https://www.ubuntu.com/legal/terms-and-policies/privacy-policy
00:01:49.385  ++ UBUNTU_CODENAME=jammy
00:01:49.385  + uname -a
00:01:49.385  Linux ubuntu2204-cloud-1711172311-2200 5.15.0-101-generic #111-Ubuntu SMP Tue Mar 5 20:16:58 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
00:01:49.385  + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:01:49.645  Hugepages
00:01:49.645  node     hugesize     free /  total
00:01:49.645  node0   1048576kB        0 /      0
00:01:49.645  node0      2048kB        0 /      0
00:01:49.645  
00:01:49.645  Type                      BDF             Vendor Device NUMA    Driver           Device     Block devices
00:01:49.645  virtio                    0000:00:03.0    1af4   1001   unknown virtio-pci       -          vda
00:01:49.645  NVMe                      0000:00:06.0    1b36   0010   unknown nvme             nvme0      nvme0n1
00:01:49.645  + rm -f /tmp/spdk-ld-path
00:01:49.645  + source autorun-spdk.conf
00:01:49.645  ++ SPDK_TEST_UNITTEST=1
00:01:49.645  ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:49.645  ++ SPDK_TEST_NVME=1
00:01:49.645  ++ SPDK_TEST_BLOCKDEV=1
00:01:49.645  ++ SPDK_RUN_ASAN=1
00:01:49.645  ++ SPDK_RUN_UBSAN=1
00:01:49.645  ++ SPDK_TEST_RAID5=1
00:01:49.645  ++ SPDK_TEST_NATIVE_DPDK=v22.11.4
00:01:49.645  ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build
00:01:49.645  ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:49.645  ++ RUN_NIGHTLY=1
00:01:49.645  + ((  SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1  ))
00:01:49.645  + [[ -n '' ]]
00:01:49.645  + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:01:49.645  + for M in /var/spdk/build-*-manifest.txt
00:01:49.645  + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:01:49.645  + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:01:49.645  + for M in /var/spdk/build-*-manifest.txt
00:01:49.645  + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:01:49.645  + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:01:49.645  ++ uname
00:01:49.645  + [[ Linux == \L\i\n\u\x ]]
00:01:49.645  + sudo dmesg -T
00:01:49.645  + sudo dmesg --clear
00:01:49.645  + dmesg_pid=2284
00:01:49.645  + sudo dmesg -Tw
00:01:49.645  + [[ Ubuntu == FreeBSD ]]
00:01:49.645  + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:49.645  + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:49.645  + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:01:49.645  + [[ -x /usr/src/fio-static/fio ]]
00:01:49.645  + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:01:49.645  + [[ ! -v VFIO_QEMU_BIN ]]
00:01:49.645  + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:01:49.645  + vfios=(/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64)
00:01:49.645  + export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64'
00:01:49.645  + VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64'
00:01:49.645  + [[ -e /usr/local/qemu/vanilla-latest ]]
00:01:49.645  + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:01:49.645  Test configuration:
00:01:49.645  SPDK_TEST_UNITTEST=1
00:01:49.645  SPDK_RUN_FUNCTIONAL_TEST=1
00:01:49.645  SPDK_TEST_NVME=1
00:01:49.645  SPDK_TEST_BLOCKDEV=1
00:01:49.645  SPDK_RUN_ASAN=1
00:01:49.645  SPDK_RUN_UBSAN=1
00:01:49.645  SPDK_TEST_RAID5=1
00:01:49.645  SPDK_TEST_NATIVE_DPDK=v22.11.4
00:01:49.645  SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build
00:01:49.645  SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:49.905  RUN_NIGHTLY=1   16:45:42	-- common/autotest_common.sh@1689 -- $ [[ n == y ]]
00:01:49.905    16:45:42	-- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:01:49.905     16:45:42	-- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]]
00:01:49.905     16:45:42	-- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:01:49.905     16:45:42	-- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:01:49.905      16:45:42	-- paths/export.sh@2 -- $ PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:01:49.905      16:45:42	-- paths/export.sh@3 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:01:49.905      16:45:42	-- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:01:49.905      16:45:42	-- paths/export.sh@5 -- $ export PATH
00:01:49.905      16:45:42	-- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:01:49.905    16:45:42	-- common/autobuild_common.sh@439 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:01:49.905      16:45:42	-- common/autobuild_common.sh@440 -- $ date +%s
00:01:49.905     16:45:42	-- common/autobuild_common.sh@440 -- $ mktemp -dt spdk_1732034742.XXXXXX
00:01:49.905    16:45:42	-- common/autobuild_common.sh@440 -- $ SPDK_WORKSPACE=/tmp/spdk_1732034742.gUjVP3
00:01:49.905    16:45:42	-- common/autobuild_common.sh@442 -- $ [[ -n '' ]]
00:01:49.905    16:45:42	-- common/autobuild_common.sh@446 -- $ '[' -n v22.11.4 ']'
00:01:49.905     16:45:42	-- common/autobuild_common.sh@447 -- $ dirname /home/vagrant/spdk_repo/dpdk/build
00:01:49.905    16:45:42	-- common/autobuild_common.sh@447 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk'
00:01:49.905    16:45:42	-- common/autobuild_common.sh@453 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:01:49.905    16:45:42	-- common/autobuild_common.sh@455 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp  --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:01:49.905     16:45:42	-- common/autobuild_common.sh@456 -- $ get_config_params
00:01:49.905     16:45:42	-- common/autotest_common.sh@397 -- $ xtrace_disable
00:01:49.905     16:45:42	-- common/autotest_common.sh@10 -- $ set +x
00:01:49.905    16:45:42	-- common/autobuild_common.sh@456 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --enable-ubsan --enable-asan --enable-coverage --with-raid5f --with-dpdk=/home/vagrant/spdk_repo/dpdk/build'
00:01:49.905   16:45:42	-- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:01:49.905   16:45:42	-- spdk/autobuild.sh@12 -- $ umask 022
00:01:49.905   16:45:42	-- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
00:01:49.905   16:45:42	-- spdk/autobuild.sh@16 -- $ date -u
00:01:49.905  Tue Nov 19 16:45:42 UTC 2024
00:01:49.905   16:45:42	-- spdk/autobuild.sh@17 -- $ git describe --tags
00:01:49.905  LTS-67-gc13c99a5e
00:01:49.905   16:45:42	-- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
00:01:49.905   16:45:42	-- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
00:01:49.905   16:45:42	-- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']'
00:01:49.905   16:45:42	-- common/autotest_common.sh@1093 -- $ xtrace_disable
00:01:49.905   16:45:42	-- common/autotest_common.sh@10 -- $ set +x
00:01:49.905  ************************************
00:01:49.905  START TEST asan
00:01:49.905  ************************************
00:01:49.905  using asan
00:01:49.905   16:45:42	-- common/autotest_common.sh@1114 -- $ echo 'using asan'
00:01:49.905  
00:01:49.905  real	0m0.000s
00:01:49.905  user	0m0.000s
00:01:49.905  sys	0m0.000s
00:01:49.905   16:45:42	-- common/autotest_common.sh@1115 -- $ xtrace_disable
00:01:49.905   16:45:42	-- common/autotest_common.sh@10 -- $ set +x
00:01:49.905  ************************************
00:01:49.905  END TEST asan
00:01:49.905  ************************************
00:01:49.905   16:45:42	-- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:01:49.905   16:45:42	-- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:01:49.905   16:45:42	-- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']'
00:01:49.905   16:45:42	-- common/autotest_common.sh@1093 -- $ xtrace_disable
00:01:49.905   16:45:42	-- common/autotest_common.sh@10 -- $ set +x
00:01:49.905  ************************************
00:01:49.905  START TEST ubsan
00:01:49.905  ************************************
00:01:49.905  using ubsan
00:01:49.905   16:45:42	-- common/autotest_common.sh@1114 -- $ echo 'using ubsan'
00:01:49.905  
00:01:49.905  real	0m0.000s
00:01:49.905  user	0m0.000s
00:01:49.905  sys	0m0.000s
00:01:49.905  ************************************
00:01:49.905  END TEST ubsan
00:01:49.905   16:45:42	-- common/autotest_common.sh@1115 -- $ xtrace_disable
00:01:49.905   16:45:42	-- common/autotest_common.sh@10 -- $ set +x
00:01:49.905  ************************************
00:01:50.165   16:45:42	-- spdk/autobuild.sh@27 -- $ '[' -n v22.11.4 ']'
00:01:50.165   16:45:42	-- spdk/autobuild.sh@28 -- $ build_native_dpdk
00:01:50.165   16:45:42	-- common/autobuild_common.sh@432 -- $ run_test build_native_dpdk _build_native_dpdk
00:01:50.165   16:45:42	-- common/autotest_common.sh@1087 -- $ '[' 2 -le 1 ']'
00:01:50.165   16:45:42	-- common/autotest_common.sh@1093 -- $ xtrace_disable
00:01:50.165   16:45:42	-- common/autotest_common.sh@10 -- $ set +x
00:01:50.165  ************************************
00:01:50.165  START TEST build_native_dpdk
00:01:50.165  ************************************
00:01:50.165   16:45:42	-- common/autotest_common.sh@1114 -- $ _build_native_dpdk
00:01:50.165   16:45:42	-- common/autobuild_common.sh@48 -- $ local external_dpdk_dir
00:01:50.165   16:45:42	-- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir
00:01:50.165   16:45:42	-- common/autobuild_common.sh@50 -- $ local compiler_version
00:01:50.165   16:45:42	-- common/autobuild_common.sh@51 -- $ local compiler
00:01:50.165   16:45:42	-- common/autobuild_common.sh@52 -- $ local dpdk_kmods
00:01:50.165   16:45:42	-- common/autobuild_common.sh@53 -- $ local repo=dpdk
00:01:50.165   16:45:42	-- common/autobuild_common.sh@55 -- $ compiler=gcc
00:01:50.165   16:45:42	-- common/autobuild_common.sh@61 -- $ export CC=gcc
00:01:50.165   16:45:42	-- common/autobuild_common.sh@61 -- $ CC=gcc
00:01:50.165   16:45:42	-- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]]
00:01:50.165   16:45:42	-- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]]
00:01:50.165    16:45:42	-- common/autobuild_common.sh@68 -- $ gcc -dumpversion
00:01:50.165   16:45:42	-- common/autobuild_common.sh@68 -- $ compiler_version=11
00:01:50.165   16:45:42	-- common/autobuild_common.sh@69 -- $ compiler_version=11
00:01:50.165   16:45:42	-- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/home/vagrant/spdk_repo/dpdk/build
00:01:50.165    16:45:42	-- common/autobuild_common.sh@71 -- $ dirname /home/vagrant/spdk_repo/dpdk/build
00:01:50.165   16:45:42	-- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/home/vagrant/spdk_repo/dpdk
00:01:50.165   16:45:42	-- common/autobuild_common.sh@73 -- $ [[ ! -d /home/vagrant/spdk_repo/dpdk ]]
00:01:50.165   16:45:42	-- common/autobuild_common.sh@82 -- $ orgdir=/home/vagrant/spdk_repo/spdk
00:01:50.165   16:45:42	-- common/autobuild_common.sh@83 -- $ git -C /home/vagrant/spdk_repo/dpdk log --oneline -n 5
00:01:50.165  caf0f5d395 version: 22.11.4
00:01:50.165  7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt"
00:01:50.165  dc9c799c7d vhost: fix missing spinlock unlock
00:01:50.165  4307659a90 net/mlx5: fix LACP redirection in Rx domain
00:01:50.165  6ef77f2a5e net/gve: fix RX buffer size alignment
00:01:50.165   16:45:42	-- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon'
00:01:50.165   16:45:42	-- common/autobuild_common.sh@86 -- $ dpdk_ldflags=
00:01:50.165   16:45:42	-- common/autobuild_common.sh@87 -- $ dpdk_ver=22.11.4
00:01:50.165   16:45:42	-- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]]
00:01:50.165   16:45:42	-- common/autobuild_common.sh@89 -- $ [[ 11 -ge 5 ]]
00:01:50.165   16:45:42	-- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror'
00:01:50.165   16:45:42	-- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]]
00:01:50.165   16:45:42	-- common/autobuild_common.sh@93 -- $ [[ 11 -ge 10 ]]
00:01:50.165   16:45:42	-- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow'
00:01:50.165   16:45:42	-- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base")
00:01:50.165   16:45:42	-- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n
00:01:50.165   16:45:42	-- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]]
00:01:50.165   16:45:42	-- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]]
00:01:50.165   16:45:42	-- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]]
00:01:50.165   16:45:42	-- common/autobuild_common.sh@167 -- $ cd /home/vagrant/spdk_repo/dpdk
00:01:50.165    16:45:42	-- common/autobuild_common.sh@168 -- $ uname -s
00:01:50.165   16:45:42	-- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']'
00:01:50.165   16:45:42	-- common/autobuild_common.sh@169 -- $ lt 22.11.4 21.11.0
00:01:50.165   16:45:42	-- scripts/common.sh@372 -- $ cmp_versions 22.11.4 '<' 21.11.0
00:01:50.165   16:45:42	-- scripts/common.sh@332 -- $ local ver1 ver1_l
00:01:50.165   16:45:42	-- scripts/common.sh@333 -- $ local ver2 ver2_l
00:01:50.165   16:45:42	-- scripts/common.sh@335 -- $ IFS=.-:
00:01:50.165   16:45:42	-- scripts/common.sh@335 -- $ read -ra ver1
00:01:50.165   16:45:42	-- scripts/common.sh@336 -- $ IFS=.-:
00:01:50.165   16:45:42	-- scripts/common.sh@336 -- $ read -ra ver2
00:01:50.165   16:45:42	-- scripts/common.sh@337 -- $ local 'op=<'
00:01:50.165   16:45:42	-- scripts/common.sh@339 -- $ ver1_l=3
00:01:50.165   16:45:42	-- scripts/common.sh@340 -- $ ver2_l=3
00:01:50.165   16:45:42	-- scripts/common.sh@342 -- $ local lt=0 gt=0 eq=0 v
00:01:50.165   16:45:42	-- scripts/common.sh@343 -- $ case "$op" in
00:01:50.165   16:45:42	-- scripts/common.sh@344 -- $ : 1
00:01:50.165   16:45:42	-- scripts/common.sh@363 -- $ (( v = 0 ))
00:01:50.165   16:45:42	-- scripts/common.sh@363 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:01:50.165    16:45:42	-- scripts/common.sh@364 -- $ decimal 22
00:01:50.165    16:45:42	-- scripts/common.sh@352 -- $ local d=22
00:01:50.165    16:45:42	-- scripts/common.sh@353 -- $ [[ 22 =~ ^[0-9]+$ ]]
00:01:50.165    16:45:42	-- scripts/common.sh@354 -- $ echo 22
00:01:50.165   16:45:42	-- scripts/common.sh@364 -- $ ver1[v]=22
00:01:50.165    16:45:42	-- scripts/common.sh@365 -- $ decimal 21
00:01:50.165    16:45:42	-- scripts/common.sh@352 -- $ local d=21
00:01:50.165    16:45:42	-- scripts/common.sh@353 -- $ [[ 21 =~ ^[0-9]+$ ]]
00:01:50.165    16:45:42	-- scripts/common.sh@354 -- $ echo 21
00:01:50.165   16:45:42	-- scripts/common.sh@365 -- $ ver2[v]=21
00:01:50.165   16:45:42	-- scripts/common.sh@366 -- $ (( ver1[v] > ver2[v] ))
00:01:50.165   16:45:42	-- scripts/common.sh@366 -- $ return 1
00:01:50.165   16:45:42	-- common/autobuild_common.sh@173 -- $ patch -p1
00:01:50.165  patching file config/rte_config.h
00:01:50.165  Hunk #1 succeeded at 60 (offset 1 line).
00:01:50.165   16:45:42	-- common/autobuild_common.sh@176 -- $ lt 22.11.4 24.07.0
00:01:50.165   16:45:42	-- scripts/common.sh@372 -- $ cmp_versions 22.11.4 '<' 24.07.0
00:01:50.165   16:45:42	-- scripts/common.sh@332 -- $ local ver1 ver1_l
00:01:50.165   16:45:42	-- scripts/common.sh@333 -- $ local ver2 ver2_l
00:01:50.165   16:45:42	-- scripts/common.sh@335 -- $ IFS=.-:
00:01:50.165   16:45:42	-- scripts/common.sh@335 -- $ read -ra ver1
00:01:50.165   16:45:42	-- scripts/common.sh@336 -- $ IFS=.-:
00:01:50.165   16:45:42	-- scripts/common.sh@336 -- $ read -ra ver2
00:01:50.165   16:45:42	-- scripts/common.sh@337 -- $ local 'op=<'
00:01:50.165   16:45:42	-- scripts/common.sh@339 -- $ ver1_l=3
00:01:50.165   16:45:42	-- scripts/common.sh@340 -- $ ver2_l=3
00:01:50.165   16:45:42	-- scripts/common.sh@342 -- $ local lt=0 gt=0 eq=0 v
00:01:50.165   16:45:42	-- scripts/common.sh@343 -- $ case "$op" in
00:01:50.165   16:45:42	-- scripts/common.sh@344 -- $ : 1
00:01:50.165   16:45:42	-- scripts/common.sh@363 -- $ (( v = 0 ))
00:01:50.165   16:45:42	-- scripts/common.sh@363 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:01:50.165    16:45:42	-- scripts/common.sh@364 -- $ decimal 22
00:01:50.165    16:45:42	-- scripts/common.sh@352 -- $ local d=22
00:01:50.165    16:45:42	-- scripts/common.sh@353 -- $ [[ 22 =~ ^[0-9]+$ ]]
00:01:50.165    16:45:42	-- scripts/common.sh@354 -- $ echo 22
00:01:50.165   16:45:42	-- scripts/common.sh@364 -- $ ver1[v]=22
00:01:50.165    16:45:42	-- scripts/common.sh@365 -- $ decimal 24
00:01:50.165    16:45:42	-- scripts/common.sh@352 -- $ local d=24
00:01:50.165    16:45:42	-- scripts/common.sh@353 -- $ [[ 24 =~ ^[0-9]+$ ]]
00:01:50.165    16:45:42	-- scripts/common.sh@354 -- $ echo 24
00:01:50.166   16:45:42	-- scripts/common.sh@365 -- $ ver2[v]=24
00:01:50.166   16:45:42	-- scripts/common.sh@366 -- $ (( ver1[v] > ver2[v] ))
00:01:50.166   16:45:42	-- scripts/common.sh@367 -- $ (( ver1[v] < ver2[v] ))
00:01:50.166   16:45:42	-- scripts/common.sh@367 -- $ return 0
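The two traced blocks above are cmp_versions from scripts/common.sh doing a component-wise numeric comparison: split both versions on '.', '-', and ':', then walk the components until one differs (22 vs 21 decides the first check, 22 vs 24 the second). A compact sketch of the same idea, covering only the '<' case exercised here; an illustration, not the script itself:

    # cmp_versions_sketch VER1 '<' VER2 -> exit 0 iff VER1 is strictly older
    cmp_versions_sketch() {
        local -a v1 v2; local i
        IFS=.-: read -ra v1 <<< "$1"
        IFS=.-: read -ra v2 <<< "$3"
        for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
            (( 10#${v1[i]:-0} > 10#${v2[i]:-0} )) && return 1
            (( 10#${v1[i]:-0} < 10#${v2[i]:-0} )) && return 0
        done
        return 1   # equal is not strictly less-than
    }
    cmp_versions_sketch 22.11.4 '<' 24.07.0 && echo older   # prints: older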
00:01:50.166   16:45:42	-- common/autobuild_common.sh@177 -- $ patch -p1
00:01:50.166  patching file lib/pcapng/rte_pcapng.c
00:01:50.166  Hunk #1 succeeded at 110 (offset -18 lines).
00:01:50.166   16:45:42	-- common/autobuild_common.sh@180 -- $ dpdk_kmods=false
00:01:50.166    16:45:42	-- common/autobuild_common.sh@181 -- $ uname -s
00:01:50.166   16:45:42	-- common/autobuild_common.sh@181 -- $ '[' Linux = FreeBSD ']'
00:01:50.166    16:45:42	-- common/autobuild_common.sh@185 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base
00:01:50.166   16:45:42	-- common/autobuild_common.sh@185 -- $ meson build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,
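The meson line above (its output follows) is the standard external-DPDK configure pattern: an out-of-tree build dir, an install prefix matching SPDK_RUN_EXTERNAL_DPDK from the conf file, the accumulated dpdk_cflags passed through -Dc_args, and the DPDK_DRIVERS array joined with commas by the printf into -Denable_drivers. A stripped-down restatement with shortened paths; the deprecated -Dmachine flag and the empty -Dc_link_args are omitted here:

    meson build-tmp --prefix="$PWD/build" --libdir lib \
        -Denable_docs=false -Denable_kmods=false -Dtests=false \
        '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' \
        -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base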
00:01:55.441  The Meson build system
00:01:55.441  Version: 1.4.0
00:01:55.441  Source dir: /home/vagrant/spdk_repo/dpdk
00:01:55.441  Build dir: /home/vagrant/spdk_repo/dpdk/build-tmp
00:01:55.441  Build type: native build
00:01:55.441  Program cat found: YES (/usr/bin/cat)
00:01:55.441  Project name: DPDK
00:01:55.441  Project version: 22.11.4
00:01:55.441  C compiler for the host machine: gcc (gcc 11.4.0 "gcc (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0")
00:01:55.441  C linker for the host machine: gcc ld.bfd 2.38
00:01:55.441  Host machine cpu family: x86_64
00:01:55.441  Host machine cpu: x86_64
00:01:55.441  Message: ## Building in Developer Mode ##
00:01:55.441  Program pkg-config found: YES (/usr/bin/pkg-config)
00:01:55.441  Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/check-symbols.sh)
00:01:55.441  Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/options-ibverbs-static.sh)
00:01:55.441  Program objdump found: YES (/usr/bin/objdump)
00:01:55.441  Program python3 found: YES (/usr/bin/python3)
00:01:55.441  Program cat found: YES (/usr/bin/cat)
00:01:55.441  config/meson.build:83: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead.
00:01:55.441  Checking for size of "void *" : 8 
00:01:55.441  Checking for size of "void *" : 8 (cached)
00:01:55.441  Library m found: YES
00:01:55.441  Library numa found: YES
00:01:55.441  Has header "numaif.h" : YES 
00:01:55.441  Library fdt found: NO
00:01:55.441  Library execinfo found: NO
00:01:55.441  Has header "execinfo.h" : YES 
00:01:55.441  Found pkg-config: YES (/usr/bin/pkg-config) 0.29.2
00:01:55.441  Run-time dependency libarchive found: NO (tried pkgconfig)
00:01:55.441  Run-time dependency libbsd found: NO (tried pkgconfig)
00:01:55.441  Run-time dependency jansson found: NO (tried pkgconfig)
00:01:55.441  Run-time dependency openssl found: YES 3.0.2
00:01:55.441  Run-time dependency libpcap found: NO (tried pkgconfig)
00:01:55.441  Library pcap found: NO
00:01:55.441  Compiler for C supports arguments -Wcast-qual: YES 
00:01:55.441  Compiler for C supports arguments -Wdeprecated: YES 
00:01:55.441  Compiler for C supports arguments -Wformat: YES 
00:01:55.441  Compiler for C supports arguments -Wformat-nonliteral: YES 
00:01:55.441  Compiler for C supports arguments -Wformat-security: YES 
00:01:55.441  Compiler for C supports arguments -Wmissing-declarations: YES 
00:01:55.441  Compiler for C supports arguments -Wmissing-prototypes: YES 
00:01:55.441  Compiler for C supports arguments -Wnested-externs: YES 
00:01:55.441  Compiler for C supports arguments -Wold-style-definition: YES 
00:01:55.441  Compiler for C supports arguments -Wpointer-arith: YES 
00:01:55.441  Compiler for C supports arguments -Wsign-compare: YES 
00:01:55.441  Compiler for C supports arguments -Wstrict-prototypes: YES 
00:01:55.441  Compiler for C supports arguments -Wundef: YES 
00:01:55.441  Compiler for C supports arguments -Wwrite-strings: YES 
00:01:55.441  Compiler for C supports arguments -Wno-address-of-packed-member: YES 
00:01:55.441  Compiler for C supports arguments -Wno-packed-not-aligned: YES 
00:01:55.441  Compiler for C supports arguments -Wno-missing-field-initializers: YES 
00:01:55.441  Compiler for C supports arguments -Wno-zero-length-bounds: YES 
00:01:55.441  Compiler for C supports arguments -mavx512f: YES 
00:01:55.441  Checking if "AVX512 checking" compiles: YES 
00:01:55.441  Fetching value of define "__SSE4_2__" : 1 
00:01:55.441  Fetching value of define "__AES__" : 1 
00:01:55.441  Fetching value of define "__AVX__" : 1 
00:01:55.441  Fetching value of define "__AVX2__" : 1 
00:01:55.442  Fetching value of define "__AVX512BW__" : 1 
00:01:55.442  Fetching value of define "__AVX512CD__" : 1 
00:01:55.442  Fetching value of define "__AVX512DQ__" : 1 
00:01:55.442  Fetching value of define "__AVX512F__" : 1 
00:01:55.442  Fetching value of define "__AVX512VL__" : 1 
00:01:55.442  Fetching value of define "__PCLMUL__" : 1 
00:01:55.442  Fetching value of define "__RDRND__" : 1 
00:01:55.442  Fetching value of define "__RDSEED__" : 1 
00:01:55.442  Fetching value of define "__VPCLMULQDQ__" : (undefined) 
00:01:55.442  Compiler for C supports arguments -Wno-format-truncation: YES 
00:01:55.442  Message: lib/kvargs: Defining dependency "kvargs"
00:01:55.442  Message: lib/telemetry: Defining dependency "telemetry"
00:01:55.442  Checking for function "getentropy" : YES 
00:01:55.442  Message: lib/eal: Defining dependency "eal"
00:01:55.442  Message: lib/ring: Defining dependency "ring"
00:01:55.442  Message: lib/rcu: Defining dependency "rcu"
00:01:55.442  Message: lib/mempool: Defining dependency "mempool"
00:01:55.442  Message: lib/mbuf: Defining dependency "mbuf"
00:01:55.442  Fetching value of define "__PCLMUL__" : 1 (cached)
00:01:55.442  Fetching value of define "__AVX512F__" : 1 (cached)
00:01:55.442  Fetching value of define "__AVX512BW__" : 1 (cached)
00:01:55.442  Fetching value of define "__AVX512DQ__" : 1 (cached)
00:01:55.442  Fetching value of define "__AVX512VL__" : 1 (cached)
00:01:55.442  Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached)
00:01:55.442  Compiler for C supports arguments -mpclmul: YES 
00:01:55.442  Compiler for C supports arguments -maes: YES 
00:01:55.442  Compiler for C supports arguments -mavx512f: YES (cached)
00:01:55.442  Compiler for C supports arguments -mavx512bw: YES 
00:01:55.442  Compiler for C supports arguments -mavx512dq: YES 
00:01:55.442  Compiler for C supports arguments -mavx512vl: YES 
00:01:55.442  Compiler for C supports arguments -mvpclmulqdq: YES 
00:01:55.442  Compiler for C supports arguments -mavx2: YES 
00:01:55.442  Compiler for C supports arguments -mavx: YES 
00:01:55.442  Message: lib/net: Defining dependency "net"
00:01:55.442  Message: lib/meter: Defining dependency "meter"
00:01:55.442  Message: lib/ethdev: Defining dependency "ethdev"
00:01:55.442  Message: lib/pci: Defining dependency "pci"
00:01:55.442  Message: lib/cmdline: Defining dependency "cmdline"
00:01:55.442  Message: lib/metrics: Defining dependency "metrics"
00:01:55.442  Message: lib/hash: Defining dependency "hash"
00:01:55.442  Message: lib/timer: Defining dependency "timer"
00:01:55.442  Fetching value of define "__AVX2__" : 1 (cached)
00:01:55.442  Fetching value of define "__AVX512F__" : 1 (cached)
00:01:55.442  Fetching value of define "__AVX512VL__" : 1 (cached)
00:01:55.442  Fetching value of define "__AVX512CD__" : 1 (cached)
00:01:55.442  Fetching value of define "__AVX512BW__" : 1 (cached)
00:01:55.442  Message: lib/acl: Defining dependency "acl"
00:01:55.442  Message: lib/bbdev: Defining dependency "bbdev"
00:01:55.442  Message: lib/bitratestats: Defining dependency "bitratestats"
00:01:55.442  Run-time dependency libelf found: YES 0.186
00:01:55.442  lib/bpf/meson.build:43: WARNING: libpcap is missing, rte_bpf_convert API will be disabled
00:01:55.442  Message: lib/bpf: Defining dependency "bpf"
00:01:55.442  Message: lib/cfgfile: Defining dependency "cfgfile"
00:01:55.442  Message: lib/compressdev: Defining dependency "compressdev"
00:01:55.442  Message: lib/cryptodev: Defining dependency "cryptodev"
00:01:55.442  Message: lib/distributor: Defining dependency "distributor"
00:01:55.442  Message: lib/efd: Defining dependency "efd"
00:01:55.442  Message: lib/eventdev: Defining dependency "eventdev"
00:01:55.442  Message: lib/gpudev: Defining dependency "gpudev"
00:01:55.442  Message: lib/gro: Defining dependency "gro"
00:01:55.442  Message: lib/gso: Defining dependency "gso"
00:01:55.442  Message: lib/ip_frag: Defining dependency "ip_frag"
00:01:55.442  Message: lib/jobstats: Defining dependency "jobstats"
00:01:55.442  Message: lib/latencystats: Defining dependency "latencystats"
00:01:55.442  Message: lib/lpm: Defining dependency "lpm"
00:01:55.442  Fetching value of define "__AVX512F__" : 1 (cached)
00:01:55.442  Fetching value of define "__AVX512DQ__" : 1 (cached)
00:01:55.442  Fetching value of define "__AVX512IFMA__" : (undefined) 
00:01:55.442  Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 
00:01:55.442  Message: lib/member: Defining dependency "member"
00:01:55.442  Message: lib/pcapng: Defining dependency "pcapng"
00:01:55.442  Compiler for C supports arguments -Wno-cast-qual: YES 
00:01:55.442  Message: lib/power: Defining dependency "power"
00:01:55.442  Message: lib/rawdev: Defining dependency "rawdev"
00:01:55.442  Message: lib/regexdev: Defining dependency "regexdev"
00:01:55.442  Message: lib/dmadev: Defining dependency "dmadev"
00:01:55.442  Message: lib/rib: Defining dependency "rib"
00:01:55.442  Message: lib/reorder: Defining dependency "reorder"
00:01:55.442  Message: lib/sched: Defining dependency "sched"
00:01:55.442  Message: lib/security: Defining dependency "security"
00:01:55.442  Message: lib/stack: Defining dependency "stack"
00:01:55.442  Has header "linux/userfaultfd.h" : YES 
00:01:55.442  Message: lib/vhost: Defining dependency "vhost"
00:01:55.442  Message: lib/ipsec: Defining dependency "ipsec"
00:01:55.442  Fetching value of define "__AVX512F__" : 1 (cached)
00:01:55.442  Fetching value of define "__AVX512DQ__" : 1 (cached)
00:01:55.442  Fetching value of define "__AVX512BW__" : 1 (cached)
00:01:55.442  Message: lib/fib: Defining dependency "fib"
00:01:55.442  Message: lib/port: Defining dependency "port"
00:01:55.442  Message: lib/pdump: Defining dependency "pdump"
00:01:55.442  Message: lib/table: Defining dependency "table"
00:01:55.442  Message: lib/pipeline: Defining dependency "pipeline"
00:01:55.442  Message: lib/graph: Defining dependency "graph"
00:01:55.442  Message: lib/node: Defining dependency "node"
00:01:55.442  Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:01:55.442  Message: drivers/bus/pci: Defining dependency "bus_pci"
00:01:55.442  Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:01:55.442  Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:01:55.442  Compiler for C supports arguments -Wno-sign-compare: YES 
00:01:55.442  Compiler for C supports arguments -Wno-unused-value: YES 
00:01:55.442  Compiler for C supports arguments -Wno-format: YES 
00:01:55.442  Compiler for C supports arguments -Wno-format-security: YES 
00:01:55.442  Compiler for C supports arguments -Wno-format-nonliteral: YES 
00:01:56.427  Compiler for C supports arguments -Wno-strict-aliasing: YES 
00:01:56.427  Compiler for C supports arguments -Wno-unused-but-set-variable: YES 
00:01:56.427  Compiler for C supports arguments -Wno-unused-parameter: YES 
00:01:56.427  Fetching value of define "__AVX2__" : 1 (cached)
00:01:56.427  Fetching value of define "__AVX512F__" : 1 (cached)
00:01:56.427  Fetching value of define "__AVX512BW__" : 1 (cached)
00:01:56.427  Compiler for C supports arguments -mavx512f: YES (cached)
00:01:56.427  Compiler for C supports arguments -mavx512bw: YES (cached)
00:01:56.427  Compiler for C supports arguments -march=skylake-avx512: YES 
00:01:56.427  Message: drivers/net/i40e: Defining dependency "net_i40e"
00:01:56.427  Program doxygen found: YES (/usr/bin/doxygen)
00:01:56.427  Configuring doxy-api.conf using configuration
00:01:56.427  Program sphinx-build found: NO
00:01:56.427  Configuring rte_build_config.h using configuration
00:01:56.427  Message: 
00:01:56.427  =================
00:01:56.427  Applications Enabled
00:01:56.427  =================
00:01:56.427  
00:01:56.427  apps:
00:01:56.427  	pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, test-crypto-perf, test-eventdev, 
00:01:56.427  	test-fib, test-flow-perf, test-gpudev, test-pipeline, test-pmd, test-regex, test-sad, test-security-perf, 
00:01:56.427  	
00:01:56.427  
00:01:56.427  Message: 
00:01:56.427  =================
00:01:56.427  Libraries Enabled
00:01:56.427  =================
00:01:56.427  
00:01:56.427  libs:
00:01:56.427  	kvargs, telemetry, eal, ring, rcu, mempool, mbuf, net, 
00:01:56.427  	meter, ethdev, pci, cmdline, metrics, hash, timer, acl, 
00:01:56.427  	bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, efd, 
00:01:56.427  	eventdev, gpudev, gro, gso, ip_frag, jobstats, latencystats, lpm, 
00:01:56.427  	member, pcapng, power, rawdev, regexdev, dmadev, rib, reorder, 
00:01:56.427  	sched, security, stack, vhost, ipsec, fib, port, pdump, 
00:01:56.427  	table, pipeline, graph, node, 
00:01:56.427  
00:01:56.427  Message: 
00:01:56.427  ===============
00:01:56.427  Drivers Enabled
00:01:56.427  ===============
00:01:56.427  
00:01:56.427  common:
00:01:56.427  	
00:01:56.427  bus:
00:01:56.427  	pci, vdev, 
00:01:56.427  mempool:
00:01:56.427  	ring, 
00:01:56.427  dma:
00:01:56.427  	
00:01:56.427  net:
00:01:56.427  	i40e, 
00:01:56.427  raw:
00:01:56.427  	
00:01:56.427  crypto:
00:01:56.427  	
00:01:56.427  compress:
00:01:56.427  	
00:01:56.427  regex:
00:01:56.427  	
00:01:56.427  vdpa:
00:01:56.427  	
00:01:56.427  event:
00:01:56.427  	
00:01:56.427  baseband:
00:01:56.427  	
00:01:56.427  gpu:
00:01:56.427  	
00:01:56.427  
00:01:56.427  Message: 
00:01:56.427  =================
00:01:56.427  Content Skipped
00:01:56.427  =================
00:01:56.427  
00:01:56.427  apps:
00:01:56.427  	dumpcap:	missing dependency, "libpcap"
00:01:56.427  	
00:01:56.427  libs:
00:01:56.427  	kni:	explicitly disabled via build config (deprecated lib)
00:01:56.427  	flow_classify:	explicitly disabled via build config (deprecated lib)
00:01:56.427  	
00:01:56.427  drivers:
00:01:56.427  	common/cpt:	not in enabled drivers build config
00:01:56.427  	common/dpaax:	not in enabled drivers build config
00:01:56.427  	common/iavf:	not in enabled drivers build config
00:01:56.427  	common/idpf:	not in enabled drivers build config
00:01:56.427  	common/mvep:	not in enabled drivers build config
00:01:56.427  	common/octeontx:	not in enabled drivers build config
00:01:56.427  	bus/auxiliary:	not in enabled drivers build config
00:01:56.427  	bus/dpaa:	not in enabled drivers build config
00:01:56.427  	bus/fslmc:	not in enabled drivers build config
00:01:56.427  	bus/ifpga:	not in enabled drivers build config
00:01:56.427  	bus/vmbus:	not in enabled drivers build config
00:01:56.427  	common/cnxk:	not in enabled drivers build config
00:01:56.427  	common/mlx5:	not in enabled drivers build config
00:01:56.427  	common/qat:	not in enabled drivers build config
00:01:56.427  	common/sfc_efx:	not in enabled drivers build config
00:01:56.427  	mempool/bucket:	not in enabled drivers build config
00:01:56.427  	mempool/cnxk:	not in enabled drivers build config
00:01:56.427  	mempool/dpaa:	not in enabled drivers build config
00:01:56.427  	mempool/dpaa2:	not in enabled drivers build config
00:01:56.427  	mempool/octeontx:	not in enabled drivers build config
00:01:56.427  	mempool/stack:	not in enabled drivers build config
00:01:56.427  	dma/cnxk:	not in enabled drivers build config
00:01:56.427  	dma/dpaa:	not in enabled drivers build config
00:01:56.427  	dma/dpaa2:	not in enabled drivers build config
00:01:56.427  	dma/hisilicon:	not in enabled drivers build config
00:01:56.427  	dma/idxd:	not in enabled drivers build config
00:01:56.427  	dma/ioat:	not in enabled drivers build config
00:01:56.427  	dma/skeleton:	not in enabled drivers build config
00:01:56.427  	net/af_packet:	not in enabled drivers build config
00:01:56.427  	net/af_xdp:	not in enabled drivers build config
00:01:56.427  	net/ark:	not in enabled drivers build config
00:01:56.427  	net/atlantic:	not in enabled drivers build config
00:01:56.427  	net/avp:	not in enabled drivers build config
00:01:56.427  	net/axgbe:	not in enabled drivers build config
00:01:56.427  	net/bnx2x:	not in enabled drivers build config
00:01:56.427  	net/bnxt:	not in enabled drivers build config
00:01:56.427  	net/bonding:	not in enabled drivers build config
00:01:56.427  	net/cnxk:	not in enabled drivers build config
00:01:56.427  	net/cxgbe:	not in enabled drivers build config
00:01:56.427  	net/dpaa:	not in enabled drivers build config
00:01:56.427  	net/dpaa2:	not in enabled drivers build config
00:01:56.427  	net/e1000:	not in enabled drivers build config
00:01:56.427  	net/ena:	not in enabled drivers build config
00:01:56.427  	net/enetc:	not in enabled drivers build config
00:01:56.427  	net/enetfec:	not in enabled drivers build config
00:01:56.427  	net/enic:	not in enabled drivers build config
00:01:56.427  	net/failsafe:	not in enabled drivers build config
00:01:56.427  	net/fm10k:	not in enabled drivers build config
00:01:56.427  	net/gve:	not in enabled drivers build config
00:01:56.427  	net/hinic:	not in enabled drivers build config
00:01:56.428  	net/hns3:	not in enabled drivers build config
00:01:56.428  	net/iavf:	not in enabled drivers build config
00:01:56.428  	net/ice:	not in enabled drivers build config
00:01:56.428  	net/idpf:	not in enabled drivers build config
00:01:56.428  	net/igc:	not in enabled drivers build config
00:01:56.428  	net/ionic:	not in enabled drivers build config
00:01:56.428  	net/ipn3ke:	not in enabled drivers build config
00:01:56.428  	net/ixgbe:	not in enabled drivers build config
00:01:56.428  	net/kni:	not in enabled drivers build config
00:01:56.428  	net/liquidio:	not in enabled drivers build config
00:01:56.428  	net/mana:	not in enabled drivers build config
00:01:56.428  	net/memif:	not in enabled drivers build config
00:01:56.428  	net/mlx4:	not in enabled drivers build config
00:01:56.428  	net/mlx5:	not in enabled drivers build config
00:01:56.428  	net/mvneta:	not in enabled drivers build config
00:01:56.428  	net/mvpp2:	not in enabled drivers build config
00:01:56.428  	net/netvsc:	not in enabled drivers build config
00:01:56.428  	net/nfb:	not in enabled drivers build config
00:01:56.428  	net/nfp:	not in enabled drivers build config
00:01:56.428  	net/ngbe:	not in enabled drivers build config
00:01:56.428  	net/null:	not in enabled drivers build config
00:01:56.428  	net/octeontx:	not in enabled drivers build config
00:01:56.428  	net/octeon_ep:	not in enabled drivers build config
00:01:56.428  	net/pcap:	not in enabled drivers build config
00:01:56.428  	net/pfe:	not in enabled drivers build config
00:01:56.428  	net/qede:	not in enabled drivers build config
00:01:56.428  	net/ring:	not in enabled drivers build config
00:01:56.428  	net/sfc:	not in enabled drivers build config
00:01:56.428  	net/softnic:	not in enabled drivers build config
00:01:56.428  	net/tap:	not in enabled drivers build config
00:01:56.428  	net/thunderx:	not in enabled drivers build config
00:01:56.428  	net/txgbe:	not in enabled drivers build config
00:01:56.428  	net/vdev_netvsc:	not in enabled drivers build config
00:01:56.428  	net/vhost:	not in enabled drivers build config
00:01:56.428  	net/virtio:	not in enabled drivers build config
00:01:56.428  	net/vmxnet3:	not in enabled drivers build config
00:01:56.428  	raw/cnxk_bphy:	not in enabled drivers build config
00:01:56.428  	raw/cnxk_gpio:	not in enabled drivers build config
00:01:56.428  	raw/dpaa2_cmdif:	not in enabled drivers build config
00:01:56.428  	raw/ifpga:	not in enabled drivers build config
00:01:56.428  	raw/ntb:	not in enabled drivers build config
00:01:56.428  	raw/skeleton:	not in enabled drivers build config
00:01:56.428  	crypto/armv8:	not in enabled drivers build config
00:01:56.428  	crypto/bcmfs:	not in enabled drivers build config
00:01:56.428  	crypto/caam_jr:	not in enabled drivers build config
00:01:56.428  	crypto/ccp:	not in enabled drivers build config
00:01:56.428  	crypto/cnxk:	not in enabled drivers build config
00:01:56.428  	crypto/dpaa_sec:	not in enabled drivers build config
00:01:56.428  	crypto/dpaa2_sec:	not in enabled drivers build config
00:01:56.428  	crypto/ipsec_mb:	not in enabled drivers build config
00:01:56.428  	crypto/mlx5:	not in enabled drivers build config
00:01:56.428  	crypto/mvsam:	not in enabled drivers build config
00:01:56.428  	crypto/nitrox:	not in enabled drivers build config
00:01:56.428  	crypto/null:	not in enabled drivers build config
00:01:56.428  	crypto/octeontx:	not in enabled drivers build config
00:01:56.428  	crypto/openssl:	not in enabled drivers build config
00:01:56.428  	crypto/scheduler:	not in enabled drivers build config
00:01:56.428  	crypto/uadk:	not in enabled drivers build config
00:01:56.428  	crypto/virtio:	not in enabled drivers build config
00:01:56.428  	compress/isal:	not in enabled drivers build config
00:01:56.428  	compress/mlx5:	not in enabled drivers build config
00:01:56.428  	compress/octeontx:	not in enabled drivers build config
00:01:56.428  	compress/zlib:	not in enabled drivers build config
00:01:56.428  	regex/mlx5:	not in enabled drivers build config
00:01:56.428  	regex/cn9k:	not in enabled drivers build config
00:01:56.428  	vdpa/ifc:	not in enabled drivers build config
00:01:56.428  	vdpa/mlx5:	not in enabled drivers build config
00:01:56.428  	vdpa/sfc:	not in enabled drivers build config
00:01:56.428  	event/cnxk:	not in enabled drivers build config
00:01:56.428  	event/dlb2:	not in enabled drivers build config
00:01:56.428  	event/dpaa:	not in enabled drivers build config
00:01:56.428  	event/dpaa2:	not in enabled drivers build config
00:01:56.428  	event/dsw:	not in enabled drivers build config
00:01:56.428  	event/opdl:	not in enabled drivers build config
00:01:56.428  	event/skeleton:	not in enabled drivers build config
00:01:56.428  	event/sw:	not in enabled drivers build config
00:01:56.428  	event/octeontx:	not in enabled drivers build config
00:01:56.428  	baseband/acc:	not in enabled drivers build config
00:01:56.428  	baseband/fpga_5gnr_fec:	not in enabled drivers build config
00:01:56.428  	baseband/fpga_lte_fec:	not in enabled drivers build config
00:01:56.428  	baseband/la12xx:	not in enabled drivers build config
00:01:56.428  	baseband/null:	not in enabled drivers build config
00:01:56.428  	baseband/turbo_sw:	not in enabled drivers build config
00:01:56.428  	gpu/cuda:	not in enabled drivers build config
00:01:56.428  	
00:01:56.428  
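A note on the "Content Skipped" summary above: the long run of "not in enabled drivers build config" entries is expected, because this build passes an explicit driver allowlist (the enable_drivers option in the summary that follows), so every driver outside bus/pci, bus/vdev, mempool/ring, and net/i40e is skipped by design. The only item skipped for a missing host package is dumpcap, which needs libpcap. A minimal sketch of how that dependency could be satisfied on an Ubuntu 22 worker, assuming apt and the stock libpcap-dev package are available; the reconfigure path is the build-tmp directory used later in this log:

    $ sudo apt-get install -y libpcap-dev    # assumption: libpcap-dev supplies the headers and pkg-config file meson's libpcap check looks for
    $ meson setup --reconfigure /home/vagrant/spdk_repo/dpdk/build-tmp    # re-run configure so dumpcap is no longer skipped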
00:01:56.428  Build targets in project: 310
00:01:56.428  
00:01:56.428  DPDK 22.11.4
00:01:56.428  
00:01:56.428    User defined options
00:01:56.428      libdir        : lib
00:01:56.428      prefix        : /home/vagrant/spdk_repo/dpdk/build
00:01:56.428      c_args        : -fPIC -g -fcommon -Werror -Wno-stringop-overflow
00:01:56.428      c_link_args   : 
00:01:56.428      enable_docs   : false
00:01:56.428      enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,
00:01:56.428      enable_kmods  : false
00:01:56.428      machine       : native
00:01:56.428      tests         : false
00:01:56.428  
00:01:56.428  Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:01:56.428  WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated.
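For reference, the "User defined options" summary above corresponds to a configure invocation along the following lines. This is a reconstruction, not the literal command line (the log records only the resulting option summary), written in the `meson setup` form that the deprecation warning just above recommends; the source directory /home/vagrant/spdk_repo/dpdk is inferred from the build paths in the next lines:

    $ cd /home/vagrant/spdk_repo/dpdk
    $ meson setup build-tmp \
          --prefix=/home/vagrant/spdk_repo/dpdk/build \
          --libdir=lib \
          -Dc_args='-fPIC -g -fcommon -Werror -Wno-stringop-overflow' \
          -Denable_docs=false \
          -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base \
          -Denable_kmods=false \
          -Dmachine=native \
          -Dtests=false

The ninja invocation on the next line (-C .../build-tmp -j10) then drives the 737-step compile and link recorded below.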
00:01:56.687   16:45:49	-- common/autobuild_common.sh@189 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10
00:01:56.687  ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp'
00:01:56.687  [1/737] Generating lib/rte_kvargs_def with a custom command
00:01:56.687  [2/737] Generating lib/rte_telemetry_def with a custom command
00:01:56.687  [3/737] Generating lib/rte_kvargs_mingw with a custom command
00:01:56.687  [4/737] Generating lib/rte_telemetry_mingw with a custom command
00:01:56.687  [5/737] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:01:56.687  [6/737] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:01:56.945  [7/737] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:01:56.945  [8/737] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:01:56.945  [9/737] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:01:56.945  [10/737] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:01:56.945  [11/737] Linking static target lib/librte_kvargs.a
00:01:56.945  [12/737] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:01:56.945  [13/737] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:01:56.945  [14/737] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:01:56.945  [15/737] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:01:56.945  [16/737] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:01:56.945  [17/737] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:01:56.945  [18/737] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:01:57.203  [19/737] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:01:57.203  [20/737] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:01:57.203  [21/737] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:01:57.203  [22/737] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_log.c.o
00:01:57.203  [23/737] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:01:57.203  [24/737] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:01:57.203  [25/737] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:01:57.203  [26/737] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:01:57.203  [27/737] Linking target lib/librte_kvargs.so.23.0
00:01:57.203  [28/737] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:01:57.203  [29/737] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:01:57.203  [30/737] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:01:57.203  [31/737] Linking static target lib/librte_telemetry.a
00:01:57.203  [32/737] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:01:57.203  [33/737] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
00:01:57.462  [34/737] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
00:01:57.462  [35/737] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:01:57.462  [36/737] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:01:57.462  [37/737] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:01:57.462  [38/737] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:01:57.462  [39/737] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:01:57.462  [40/737] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:01:57.721  [41/737] Generating symbol file lib/librte_kvargs.so.23.0.p/librte_kvargs.so.23.0.symbols
00:01:57.721  [42/737] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:01:57.721  [43/737] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
00:01:57.721  [44/737] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:01:57.721  [45/737] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
00:01:57.721  [46/737] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
00:01:57.721  [47/737] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output)
00:01:57.721  [48/737] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:01:57.721  [49/737] Linking target lib/librte_telemetry.so.23.0
00:01:57.721  [50/737] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o
00:01:57.980  [51/737] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:01:57.980  [52/737] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
00:01:57.980  [53/737] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:01:57.980  [54/737] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:01:57.980  [55/737] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o
00:01:57.980  [56/737] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
00:01:57.980  [57/737] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
00:01:57.980  [58/737] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
00:01:57.980  [59/737] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
00:01:57.980  [60/737] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:01:57.980  [61/737] Generating symbol file lib/librte_telemetry.so.23.0.p/librte_telemetry.so.23.0.symbols
00:01:57.980  [62/737] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:01:57.980  [63/737] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
00:01:57.980  [64/737] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o
00:01:57.980  [65/737] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:01:57.980  [66/737] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:01:57.980  [67/737] Compiling C object lib/librte_eal.a.p/eal_linux_eal_log.c.o
00:01:57.980  [68/737] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
00:01:57.980  [69/737] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
00:01:58.238  [70/737] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o
00:01:58.238  [71/737] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
00:01:58.238  [72/737] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
00:01:58.238  [73/737] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o
00:01:58.238  [74/737] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:01:58.238  [75/737] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:01:58.238  [76/737] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o
00:01:58.238  [77/737] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:01:58.238  [78/737] Generating lib/rte_eal_def with a custom command
00:01:58.238  [79/737] Generating lib/rte_eal_mingw with a custom command
00:01:58.238  [80/737] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
00:01:58.238  [81/737] Generating lib/rte_ring_def with a custom command
00:01:58.238  [82/737] Generating lib/rte_ring_mingw with a custom command
00:01:58.238  [83/737] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o
00:01:58.238  [84/737] Generating lib/rte_rcu_def with a custom command
00:01:58.238  [85/737] Generating lib/rte_rcu_mingw with a custom command
00:01:58.238  [86/737] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
00:01:58.238  [87/737] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o
00:01:58.238  [88/737] Linking static target lib/librte_ring.a
00:01:58.498  [89/737] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o
00:01:58.498  [90/737] Generating lib/rte_mempool_def with a custom command
00:01:58.498  [91/737] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o
00:01:58.498  [92/737] Generating lib/rte_mempool_mingw with a custom command
00:01:58.498  [93/737] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o
00:01:58.498  [94/737] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o
00:01:58.757  [95/737] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o
00:01:58.757  [96/737] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o
00:01:58.757  [97/737] Generating lib/rte_mbuf_def with a custom command
00:01:58.757  [98/737] Generating lib/rte_mbuf_mingw with a custom command
00:01:58.757  [99/737] Linking static target lib/librte_eal.a
00:01:58.757  [100/737] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o
00:01:58.757  [101/737] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o
00:01:58.757  [102/737] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o
00:01:58.757  [103/737] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output)
00:01:59.015  [104/737] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o
00:01:59.015  [105/737] Linking static target lib/librte_rcu.a
00:01:59.015  [106/737] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o
00:01:59.015  [107/737] Linking static target lib/librte_mempool.a
00:01:59.015  [108/737] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o
00:01:59.015  [109/737] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o
00:01:59.015  [110/737] Linking static target lib/net/libnet_crc_avx512_lib.a
00:01:59.015  [111/737] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o
00:01:59.015  [112/737] Generating lib/rte_net_def with a custom command
00:01:59.015  [113/737] Generating lib/rte_net_mingw with a custom command
00:01:59.275  [114/737] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o
00:01:59.275  [115/737] Generating lib/rte_meter_def with a custom command
00:01:59.275  [116/737] Generating lib/rte_meter_mingw with a custom command
00:01:59.275  [117/737] Compiling C object lib/librte_net.a.p/net_rte_net.c.o
00:01:59.275  [118/737] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o
00:01:59.275  [119/737] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output)
00:01:59.275  [120/737] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o
00:01:59.275  [121/737] Linking static target lib/librte_meter.a
00:01:59.275  [122/737] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o
00:01:59.275  [123/737] Linking static target lib/librte_net.a
00:01:59.535  [124/737] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output)
00:01:59.535  [125/737] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o
00:01:59.535  [126/737] Linking static target lib/librte_mbuf.a
00:01:59.535  [127/737] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o
00:01:59.535  [128/737] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output)
00:01:59.535  [129/737] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o
00:01:59.535  [130/737] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o
00:01:59.794  [131/737] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o
00:01:59.794  [132/737] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o
00:01:59.794  [133/737] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o
00:02:00.052  [134/737] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output)
00:02:00.052  [135/737] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o
00:02:00.052  [136/737] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o
00:02:00.052  [137/737] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o
00:02:00.053  [138/737] Generating lib/rte_ethdev_def with a custom command
00:02:00.053  [139/737] Generating lib/rte_ethdev_mingw with a custom command
00:02:00.053  [140/737] Generating lib/rte_pci_def with a custom command
00:02:00.053  [141/737] Generating lib/rte_pci_mingw with a custom command
00:02:00.311  [142/737] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
00:02:00.311  [143/737] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output)
00:02:00.311  [144/737] Linking static target lib/librte_pci.a
00:02:00.311  [145/737] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o
00:02:00.311  [146/737] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
00:02:00.311  [147/737] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o
00:02:00.311  [148/737] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o
00:02:00.311  [149/737] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o
00:02:00.570  [150/737] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o
00:02:00.570  [151/737] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o
00:02:00.570  [152/737] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o
00:02:00.570  [153/737] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output)
00:02:00.571  [154/737] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o
00:02:00.571  [155/737] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o
00:02:00.571  [156/737] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o
00:02:00.571  [157/737] Generating lib/rte_cmdline_def with a custom command
00:02:00.571  [158/737] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o
00:02:00.571  [159/737] Generating lib/rte_cmdline_mingw with a custom command
00:02:00.571  [160/737] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o
00:02:00.571  [161/737] Generating lib/rte_metrics_def with a custom command
00:02:00.571  [162/737] Generating lib/rte_metrics_mingw with a custom command
00:02:00.571  [163/737] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o
00:02:00.571  [164/737] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o
00:02:00.830  [165/737] Generating lib/rte_hash_def with a custom command
00:02:00.830  [166/737] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o
00:02:00.830  [167/737] Generating lib/rte_hash_mingw with a custom command
00:02:00.830  [168/737] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o
00:02:00.830  [169/737] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o
00:02:00.830  [170/737] Linking static target lib/librte_cmdline.a
00:02:00.830  [171/737] Generating lib/rte_timer_def with a custom command
00:02:00.830  [172/737] Generating lib/rte_timer_mingw with a custom command
00:02:00.830  [173/737] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o
00:02:00.830  [174/737] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o
00:02:00.830  [175/737] Linking static target lib/librte_metrics.a
00:02:01.090  [176/737] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o
00:02:01.349  [177/737] Linking static target lib/librte_timer.a
00:02:01.349  [178/737] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o
00:02:01.349  [179/737] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o
00:02:01.349  [180/737] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o
00:02:01.349  [181/737] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output)
00:02:01.608  [182/737] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o
00:02:01.608  [183/737] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o
00:02:01.608  [184/737] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output)
00:02:01.608  [185/737] Generating lib/rte_acl_def with a custom command
00:02:01.608  [186/737] Generating lib/rte_acl_mingw with a custom command
00:02:01.608  [187/737] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o
00:02:01.608  [188/737] Generating lib/rte_bbdev_def with a custom command
00:02:01.608  [189/737] Generating lib/rte_bbdev_mingw with a custom command
00:02:01.867  [190/737] Generating lib/rte_bitratestats_def with a custom command
00:02:01.867  [191/737] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o
00:02:01.867  [192/737] Linking static target lib/librte_ethdev.a
00:02:01.867  [193/737] Generating lib/rte_bitratestats_mingw with a custom command
00:02:01.867  [194/737] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output)
00:02:02.127  [195/737] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o
00:02:02.127  [196/737] Linking static target lib/librte_bitratestats.a
00:02:02.127  [197/737] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o
00:02:02.127  [198/737] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o
00:02:02.127  [199/737] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output)
00:02:02.428  [200/737] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o
00:02:02.428  [201/737] Linking static target lib/librte_bbdev.a
00:02:02.428  [202/737] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o
00:02:02.687  [203/737] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o
00:02:02.946  [204/737] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o
00:02:02.946  [205/737] Linking static target lib/librte_hash.a
00:02:02.946  [206/737] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o
00:02:02.946  [207/737] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o
00:02:02.946  [208/737] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:02.946  [209/737] Generating lib/rte_bpf_def with a custom command
00:02:02.946  [210/737] Generating lib/rte_bpf_mingw with a custom command
00:02:02.946  [211/737] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o
00:02:02.946  [212/737] Generating lib/rte_cfgfile_def with a custom command
00:02:02.946  [213/737] Generating lib/rte_cfgfile_mingw with a custom command
00:02:03.206  [214/737] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o
00:02:03.206  [215/737] Linking static target lib/librte_cfgfile.a
00:02:03.206  [216/737] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o
00:02:03.464  [217/737] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o
00:02:03.464  [218/737] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o
00:02:03.464  [219/737] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o
00:02:03.723  [220/737] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output)
00:02:03.723  [221/737] Generating lib/rte_compressdev_def with a custom command
00:02:03.723  [222/737] Generating lib/rte_compressdev_mingw with a custom command
00:02:03.723  [223/737] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output)
00:02:03.723  [224/737] Generating lib/rte_cryptodev_def with a custom command
00:02:03.723  [225/737] Generating lib/rte_cryptodev_mingw with a custom command
00:02:03.723  [226/737] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx2.c.o
00:02:03.982  [227/737] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o
00:02:03.982  [228/737] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o
00:02:03.982  [229/737] Linking static target lib/librte_bpf.a
00:02:03.982  [230/737] Linking static target lib/librte_compressdev.a
00:02:03.982  [231/737] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o
00:02:03.982  [232/737] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o
00:02:04.241  [233/737] Generating lib/rte_distributor_def with a custom command
00:02:04.241  [234/737] Generating lib/rte_distributor_mingw with a custom command
00:02:04.241  [235/737] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output)
00:02:04.241  [236/737] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o
00:02:04.241  [237/737] Generating lib/rte_efd_mingw with a custom command
00:02:04.241  [238/737] Generating lib/rte_efd_def with a custom command
00:02:04.241  [239/737] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx512.c.o
00:02:04.241  [240/737] Linking static target lib/librte_acl.a
00:02:04.241  [241/737] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o
00:02:04.499  [242/737] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o
00:02:04.499  [243/737] Linking static target lib/librte_distributor.a
00:02:04.499  [244/737] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o
00:02:04.499  [245/737] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output)
00:02:04.757  [246/737] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o
00:02:04.757  [247/737] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output)
00:02:05.016  [248/737] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:05.016  [249/737] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o
00:02:05.016  [250/737] Generating lib/rte_eventdev_def with a custom command
00:02:05.016  [251/737] Generating lib/rte_eventdev_mingw with a custom command
00:02:05.016  [252/737] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o
00:02:05.016  [253/737] Linking static target lib/librte_efd.a
00:02:05.274  [254/737] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o
00:02:05.274  [255/737] Generating lib/rte_gpudev_def with a custom command
00:02:05.274  [256/737] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output)
00:02:05.274  [257/737] Generating lib/rte_gpudev_mingw with a custom command
00:02:05.532  [258/737] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o
00:02:05.532  [259/737] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o
00:02:05.532  [260/737] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o
00:02:05.532  [261/737] Linking static target lib/librte_gpudev.a
00:02:05.532  [262/737] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o
00:02:05.532  [263/737] Linking static target lib/librte_cryptodev.a
00:02:05.790  [264/737] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o
00:02:05.790  [265/737] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o
00:02:05.790  [266/737] Generating lib/rte_gro_def with a custom command
00:02:05.790  [267/737] Generating lib/rte_gro_mingw with a custom command
00:02:06.048  [268/737] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o
00:02:06.048  [269/737] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o
00:02:06.049  [270/737] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o
00:02:06.307  [271/737] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o
00:02:06.307  [272/737] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o
00:02:06.307  [273/737] Linking static target lib/librte_gro.a
00:02:06.307  [274/737] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o
00:02:06.307  [275/737] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o
00:02:06.564  [276/737] Generating lib/rte_gso_def with a custom command
00:02:06.564  [277/737] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o
00:02:06.564  [278/737] Generating lib/rte_gso_mingw with a custom command
00:02:06.564  [279/737] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output)
00:02:06.564  [280/737] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o
00:02:06.564  [281/737] Linking static target lib/librte_eventdev.a
00:02:06.564  [282/737] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o
00:02:06.564  [283/737] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:06.564  [284/737] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o
00:02:06.564  [285/737] Linking static target lib/librte_gso.a
00:02:06.822  [286/737] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output)
00:02:06.822  [287/737] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o
00:02:06.822  [288/737] Generating lib/rte_ip_frag_def with a custom command
00:02:06.822  [289/737] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o
00:02:06.822  [290/737] Generating lib/rte_ip_frag_mingw with a custom command
00:02:07.080  [291/737] Generating lib/rte_jobstats_def with a custom command
00:02:07.080  [292/737] Generating lib/rte_jobstats_mingw with a custom command
00:02:07.080  [293/737] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o
00:02:07.081  [294/737] Generating lib/rte_latencystats_def with a custom command
00:02:07.081  [295/737] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o
00:02:07.081  [296/737] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o
00:02:07.081  [297/737] Generating lib/rte_latencystats_mingw with a custom command
00:02:07.081  [298/737] Linking static target lib/librte_jobstats.a
00:02:07.081  [299/737] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o
00:02:07.081  [300/737] Generating lib/rte_lpm_def with a custom command
00:02:07.081  [301/737] Generating lib/rte_lpm_mingw with a custom command
00:02:07.339  [302/737] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o
00:02:07.339  [303/737] Linking static target lib/librte_ip_frag.a
00:02:07.339  [304/737] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output)
00:02:07.598  [305/737] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o
00:02:07.598  [306/737] Linking static target lib/librte_latencystats.a
00:02:07.598  [307/737] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o
00:02:07.598  [308/737] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output)
00:02:07.598  [309/737] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o
00:02:07.598  [310/737] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:07.598  [311/737] Linking static target lib/member/libsketch_avx512_tmp.a
00:02:07.598  [312/737] Generating lib/rte_member_def with a custom command
00:02:07.598  [313/737] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output)
00:02:07.859  [314/737] Generating lib/rte_member_mingw with a custom command
00:02:07.859  [315/737] Generating lib/rte_pcapng_def with a custom command
00:02:07.859  [316/737] Generating lib/rte_pcapng_mingw with a custom command
00:02:07.859  [317/737] Compiling C object lib/librte_member.a.p/member_rte_member.c.o
00:02:07.859  [318/737] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o
00:02:07.859  [319/737] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o
00:02:07.859  [320/737] Linking static target lib/librte_lpm.a
00:02:08.120  [321/737] Compiling C object lib/librte_power.a.p/power_power_common.c.o
00:02:08.120  [322/737] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o
00:02:08.120  [323/737] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o
00:02:08.120  [324/737] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:08.120  [325/737] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output)
00:02:08.377  [326/737] Compiling C object lib/librte_power.a.p/power_rte_power.c.o
00:02:08.377  [327/737] Linking target lib/librte_eal.so.23.0
00:02:08.377  [328/737] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o
00:02:08.377  [329/737] Linking static target lib/librte_pcapng.a
00:02:08.377  [330/737] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output)
00:02:08.377  [331/737] Generating symbol file lib/librte_eal.so.23.0.p/librte_eal.so.23.0.symbols
00:02:08.377  [332/737] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o
00:02:08.377  [333/737] Linking target lib/librte_ring.so.23.0
00:02:08.377  [334/737] Linking target lib/librte_meter.so.23.0
00:02:08.635  [335/737] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o
00:02:08.635  [336/737] Linking target lib/librte_pci.so.23.0
00:02:08.635  [337/737] Compiling C object lib/librte_power.a.p/power_rte_power_empty_poll.c.o
00:02:08.635  [338/737] Linking target lib/librte_timer.so.23.0
00:02:08.635  [339/737] Generating symbol file lib/librte_meter.so.23.0.p/librte_meter.so.23.0.symbols
00:02:08.635  [340/737] Generating symbol file lib/librte_pci.so.23.0.p/librte_pci.so.23.0.symbols
00:02:08.635  [341/737] Generating symbol file lib/librte_ring.so.23.0.p/librte_ring.so.23.0.symbols
00:02:08.635  [342/737] Linking target lib/librte_cfgfile.so.23.0
00:02:08.635  [343/737] Generating symbol file lib/librte_timer.so.23.0.p/librte_timer.so.23.0.symbols
00:02:08.635  [344/737] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o
00:02:08.635  [345/737] Linking target lib/librte_acl.so.23.0
00:02:08.635  [346/737] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output)
00:02:08.635  [347/737] Linking target lib/librte_rcu.so.23.0
00:02:08.635  [348/737] Linking target lib/librte_jobstats.so.23.0
00:02:08.635  [349/737] Generating lib/rte_power_def with a custom command
00:02:08.893  [350/737] Generating lib/rte_power_mingw with a custom command
00:02:08.893  [351/737] Linking target lib/librte_mempool.so.23.0
00:02:08.893  [352/737] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o
00:02:08.893  [353/737] Generating lib/rte_rawdev_def with a custom command
00:02:08.893  [354/737] Generating lib/rte_rawdev_mingw with a custom command
00:02:08.893  [355/737] Generating symbol file lib/librte_acl.so.23.0.p/librte_acl.so.23.0.symbols
00:02:08.893  [356/737] Generating lib/rte_regexdev_def with a custom command
00:02:08.893  [357/737] Generating symbol file lib/librte_rcu.so.23.0.p/librte_rcu.so.23.0.symbols
00:02:08.893  [358/737] Generating lib/rte_regexdev_mingw with a custom command
00:02:08.893  [359/737] Generating lib/rte_dmadev_def with a custom command
00:02:08.893  [360/737] Generating lib/rte_dmadev_mingw with a custom command
00:02:08.893  [361/737] Compiling C object lib/librte_power.a.p/power_rte_power_intel_uncore.c.o
00:02:08.893  [362/737] Generating symbol file lib/librte_mempool.so.23.0.p/librte_mempool.so.23.0.symbols
00:02:08.893  [363/737] Generating lib/rte_rib_def with a custom command
00:02:08.893  [364/737] Linking target lib/librte_mbuf.so.23.0
00:02:08.893  [365/737] Generating lib/rte_rib_mingw with a custom command
00:02:09.151  [366/737] Generating symbol file lib/librte_mbuf.so.23.0.p/librte_mbuf.so.23.0.symbols
00:02:09.151  [367/737] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o
00:02:09.151  [368/737] Linking target lib/librte_net.so.23.0
00:02:09.151  [369/737] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o
00:02:09.151  [370/737] Linking target lib/librte_bbdev.so.23.0
00:02:09.151  [371/737] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o
00:02:09.151  [372/737] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o
00:02:09.151  [373/737] Linking target lib/librte_compressdev.so.23.0
00:02:09.151  [374/737] Linking static target lib/librte_power.a
00:02:09.409  [375/737] Generating symbol file lib/librte_net.so.23.0.p/librte_net.so.23.0.symbols
00:02:09.409  [376/737] Linking target lib/librte_distributor.so.23.0
00:02:09.409  [377/737] Linking target lib/librte_gpudev.so.23.0
00:02:09.409  [378/737] Linking target lib/librte_cryptodev.so.23.0
00:02:09.409  [379/737] Linking target lib/librte_hash.so.23.0
00:02:09.409  [380/737] Linking target lib/librte_cmdline.so.23.0
00:02:09.409  [381/737] Linking static target lib/librte_rawdev.a
00:02:09.409  [382/737] Linking target lib/librte_ethdev.so.23.0
00:02:09.409  [383/737] Generating symbol file lib/librte_cryptodev.so.23.0.p/librte_cryptodev.so.23.0.symbols
00:02:09.409  [384/737] Linking static target lib/librte_regexdev.a
00:02:09.409  [385/737] Generating symbol file lib/librte_hash.so.23.0.p/librte_hash.so.23.0.symbols
00:02:09.409  [386/737] Generating symbol file lib/librte_ethdev.so.23.0.p/librte_ethdev.so.23.0.symbols
00:02:09.409  [387/737] Linking target lib/librte_efd.so.23.0
00:02:09.409  [388/737] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:09.667  [389/737] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o
00:02:09.667  [390/737] Linking target lib/librte_metrics.so.23.0
00:02:09.667  [391/737] Linking target lib/librte_bpf.so.23.0
00:02:09.667  [392/737] Linking target lib/librte_gro.so.23.0
00:02:09.667  [393/737] Linking target lib/librte_eventdev.so.23.0
00:02:09.667  [394/737] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o
00:02:09.667  [395/737] Linking target lib/librte_gso.so.23.0
00:02:09.667  [396/737] Generating symbol file lib/librte_metrics.so.23.0.p/librte_metrics.so.23.0.symbols
00:02:09.667  [397/737] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o
00:02:09.667  [398/737] Linking target lib/librte_ip_frag.so.23.0
00:02:09.667  [399/737] Linking target lib/librte_lpm.so.23.0
00:02:09.667  [400/737] Generating symbol file lib/librte_bpf.so.23.0.p/librte_bpf.so.23.0.symbols
00:02:09.667  [401/737] Linking static target lib/librte_member.a
00:02:09.667  [402/737] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o
00:02:09.667  [403/737] Linking target lib/librte_bitratestats.so.23.0
00:02:09.667  [404/737] Linking target lib/librte_latencystats.so.23.0
00:02:09.667  [405/737] Linking static target lib/librte_dmadev.a
00:02:09.667  [406/737] Generating symbol file lib/librte_eventdev.so.23.0.p/librte_eventdev.so.23.0.symbols
00:02:09.667  [407/737] Linking static target lib/librte_rib.a
00:02:09.667  [408/737] Linking static target lib/librte_reorder.a
00:02:09.667  [409/737] Linking target lib/librte_pcapng.so.23.0
00:02:09.667  [410/737] Generating lib/rte_reorder_def with a custom command
00:02:09.925  [411/737] Generating symbol file lib/librte_ip_frag.so.23.0.p/librte_ip_frag.so.23.0.symbols
00:02:09.925  [412/737] Generating lib/rte_reorder_mingw with a custom command
00:02:09.925  [413/737] Generating symbol file lib/librte_lpm.so.23.0.p/librte_lpm.so.23.0.symbols
00:02:09.925  [414/737] Generating symbol file lib/librte_pcapng.so.23.0.p/librte_pcapng.so.23.0.symbols
00:02:09.925  [415/737] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o
00:02:09.925  [416/737] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:09.925  [417/737] Linking target lib/librte_rawdev.so.23.0
00:02:09.925  [418/737] Generating lib/rte_sched_def with a custom command
00:02:09.925  [419/737] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o
00:02:09.925  [420/737] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o
00:02:09.925  [421/737] Generating lib/rte_sched_mingw with a custom command
00:02:10.184  [422/737] Generating lib/rte_security_def with a custom command
00:02:10.184  [423/737] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output)
00:02:10.184  [424/737] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output)
00:02:10.184  [425/737] Generating lib/rte_security_mingw with a custom command
00:02:10.184  [426/737] Linking target lib/librte_reorder.so.23.0
00:02:10.184  [427/737] Linking target lib/librte_member.so.23.0
00:02:10.184  [428/737] Generating lib/rte_stack_def with a custom command
00:02:10.184  [429/737] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o
00:02:10.184  [430/737] Generating lib/rte_stack_mingw with a custom command
00:02:10.184  [431/737] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o
00:02:10.184  [432/737] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o
00:02:10.184  [433/737] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output)
00:02:10.184  [434/737] Linking static target lib/librte_stack.a
00:02:10.184  [435/737] Linking target lib/librte_rib.so.23.0
00:02:10.184  [436/737] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o
00:02:10.184  [437/737] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:10.443  [438/737] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:10.443  [439/737] Linking target lib/librte_dmadev.so.23.0
00:02:10.443  [440/737] Generating symbol file lib/librte_rib.so.23.0.p/librte_rib.so.23.0.symbols
00:02:10.443  [441/737] Linking target lib/librte_regexdev.so.23.0
00:02:10.443  [442/737] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output)
00:02:10.443  [443/737] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output)
00:02:10.443  [444/737] Linking target lib/librte_power.so.23.0
00:02:10.443  [445/737] Generating symbol file lib/librte_dmadev.so.23.0.p/librte_dmadev.so.23.0.symbols
00:02:10.443  [446/737] Linking target lib/librte_stack.so.23.0
00:02:10.443  [447/737] Compiling C object lib/librte_security.a.p/security_rte_security.c.o
00:02:10.443  [448/737] Generating lib/rte_vhost_def with a custom command
00:02:10.443  [449/737] Linking static target lib/librte_security.a
00:02:10.702  [450/737] Generating lib/rte_vhost_mingw with a custom command
00:02:10.702  [451/737] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o
00:02:10.702  [452/737] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o
00:02:10.960  [453/737] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o
00:02:11.219  [454/737] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output)
00:02:11.219  [455/737] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o
00:02:11.219  [456/737] Linking static target lib/librte_sched.a
00:02:11.219  [457/737] Linking target lib/librte_security.so.23.0
00:02:11.219  [458/737] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o
00:02:11.219  [459/737] Generating symbol file lib/librte_security.so.23.0.p/librte_security.so.23.0.symbols
00:02:11.219  [460/737] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o
00:02:11.219  [461/737] Generating lib/rte_ipsec_def with a custom command
00:02:11.482  [462/737] Generating lib/rte_ipsec_mingw with a custom command
00:02:11.482  [463/737] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o
00:02:11.482  [464/737] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o
00:02:11.482  [465/737] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o
00:02:11.742  [466/737] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o
00:02:11.742  [467/737] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output)
00:02:11.742  [468/737] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o
00:02:11.742  [469/737] Linking target lib/librte_sched.so.23.0
00:02:12.001  [470/737] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o
00:02:12.001  [471/737] Generating symbol file lib/librte_sched.so.23.0.p/librte_sched.so.23.0.symbols
00:02:12.001  [472/737] Generating lib/rte_fib_def with a custom command
00:02:12.001  [473/737] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o
00:02:12.001  [474/737] Generating lib/rte_fib_mingw with a custom command
00:02:12.001  [475/737] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o
00:02:12.001  [476/737] Linking static target lib/librte_ipsec.a
00:02:12.001  [477/737] Compiling C object lib/librte_fib.a.p/fib_dir24_8_avx512.c.o
00:02:12.001  [478/737] Compiling C object lib/librte_fib.a.p/fib_trie_avx512.c.o
00:02:12.259  [479/737] Compiling C object lib/librte_fib.a.p/fib_trie.c.o
00:02:12.259  [480/737] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o
00:02:12.259  [481/737] Linking static target lib/librte_fib.a
00:02:12.517  [482/737] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o
00:02:12.517  [483/737] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o
00:02:12.518  [484/737] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output)
00:02:12.518  [485/737] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o
00:02:12.518  [486/737] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o
00:02:12.518  [487/737] Linking target lib/librte_ipsec.so.23.0
00:02:12.776  [488/737] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o
00:02:12.776  [489/737] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output)
00:02:12.776  [490/737] Linking target lib/librte_fib.so.23.0
00:02:13.034  [491/737] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o
00:02:13.034  [492/737] Generating lib/rte_port_def with a custom command
00:02:13.034  [493/737] Generating lib/rte_port_mingw with a custom command
00:02:13.034  [494/737] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o
00:02:13.034  [495/737] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o
00:02:13.034  [496/737] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o
00:02:13.034  [497/737] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o
00:02:13.034  [498/737] Generating lib/rte_pdump_def with a custom command
00:02:13.034  [499/737] Generating lib/rte_pdump_mingw with a custom command
00:02:13.293  [500/737] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o
00:02:13.293  [501/737] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o
00:02:13.293  [502/737] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o
00:02:13.293  [503/737] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o
00:02:13.293  [504/737] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o
00:02:13.293  [505/737] Linking static target lib/librte_port.a
00:02:13.551  [506/737] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o
00:02:13.551  [507/737] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o
00:02:13.551  [508/737] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o
00:02:13.551  [509/737] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o
00:02:13.809  [510/737] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o
00:02:13.809  [511/737] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o
00:02:13.809  [512/737] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o
00:02:13.809  [513/737] Linking static target lib/librte_pdump.a
00:02:14.068  [514/737] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o
00:02:14.068  [515/737] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output)
00:02:14.068  [516/737] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o
00:02:14.326  [517/737] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output)
00:02:14.326  [518/737] Linking target lib/librte_port.so.23.0
00:02:14.326  [519/737] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o
00:02:14.326  [520/737] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o
00:02:14.326  [521/737] Linking target lib/librte_pdump.so.23.0
00:02:14.326  [522/737] Generating lib/rte_table_def with a custom command
00:02:14.326  [523/737] Generating lib/rte_table_mingw with a custom command
00:02:14.326  [524/737] Generating symbol file lib/librte_port.so.23.0.p/librte_port.so.23.0.symbols
00:02:14.584  [525/737] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o
00:02:14.584  [526/737] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o
00:02:14.584  [527/737] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o
00:02:14.584  [528/737] Generating lib/rte_pipeline_def with a custom command
00:02:14.584  [529/737] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o
00:02:14.584  [530/737] Linking static target lib/librte_table.a
00:02:14.584  [531/737] Generating lib/rte_pipeline_mingw with a custom command
00:02:14.584  [532/737] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o
00:02:14.842  [533/737] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o
00:02:14.842  [534/737] Compiling C object lib/librte_graph.a.p/graph_node.c.o
00:02:15.100  [535/737] Compiling C object lib/librte_graph.a.p/graph_graph.c.o
00:02:15.100  [536/737] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o
00:02:15.359  [537/737] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o
00:02:15.359  [538/737] Generating lib/rte_graph_def with a custom command
00:02:15.359  [539/737] Generating lib/rte_graph_mingw with a custom command
00:02:15.359  [540/737] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o
00:02:15.617  [541/737] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output)
00:02:15.617  [542/737] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o
00:02:15.617  [543/737] Linking target lib/librte_table.so.23.0
00:02:15.617  [544/737] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o
00:02:15.617  [545/737] Linking static target lib/librte_graph.a
00:02:15.617  [546/737] Generating symbol file lib/librte_table.so.23.0.p/librte_table.so.23.0.symbols
00:02:15.617  [547/737] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o
00:02:15.876  [548/737] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o
00:02:15.876  [549/737] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o
00:02:15.876  [550/737] Compiling C object lib/librte_node.a.p/node_null.c.o
00:02:16.135  [551/737] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o
00:02:16.135  [552/737] Compiling C object lib/librte_node.a.p/node_log.c.o
00:02:16.135  [553/737] Generating lib/rte_node_def with a custom command
00:02:16.135  [554/737] Generating lib/rte_node_mingw with a custom command
00:02:16.394  [555/737] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o
00:02:16.394  [556/737] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o
00:02:16.394  [557/737] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o
00:02:16.394  [558/737] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o
00:02:16.394  [559/737] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o
00:02:16.394  [560/737] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o
00:02:16.394  [561/737] Generating drivers/rte_bus_pci_def with a custom command
00:02:16.394  [562/737] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o
00:02:16.651  [563/737] Generating drivers/rte_bus_pci_mingw with a custom command
00:02:16.651  [564/737] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o
00:02:16.651  [565/737] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o
00:02:16.651  [566/737] Linking static target lib/librte_node.a
00:02:16.651  [567/737] Generating drivers/rte_bus_vdev_def with a custom command
00:02:16.651  [568/737] Generating drivers/rte_bus_vdev_mingw with a custom command
00:02:16.651  [569/737] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output)
00:02:16.651  [570/737] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o
00:02:16.651  [571/737] Linking target lib/librte_graph.so.23.0
00:02:16.651  [572/737] Generating drivers/rte_mempool_ring_def with a custom command
00:02:16.651  [573/737] Generating drivers/rte_mempool_ring_mingw with a custom command
00:02:16.651  [574/737] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o
00:02:16.651  [575/737] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o
00:02:16.651  [576/737] Linking static target drivers/libtmp_rte_bus_vdev.a
00:02:16.910  [577/737] Generating symbol file lib/librte_graph.so.23.0.p/librte_graph.so.23.0.symbols
00:02:16.910  [578/737] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o
00:02:16.910  [579/737] Linking static target drivers/libtmp_rte_bus_pci.a
00:02:16.910  [580/737] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output)
00:02:16.910  [581/737] Linking target lib/librte_node.so.23.0
00:02:16.910  [582/737] Generating drivers/rte_bus_vdev.pmd.c with a custom command
00:02:16.910  [583/737] Generating drivers/rte_bus_pci.pmd.c with a custom command
00:02:16.910  [584/737] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:02:16.910  [585/737] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:02:16.910  [586/737] Linking static target drivers/librte_bus_pci.a
00:02:16.910  [587/737] Compiling C object drivers/librte_bus_pci.so.23.0.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:02:16.910  [588/737] Linking static target drivers/librte_bus_vdev.a
00:02:17.170  [589/737] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o
00:02:17.170  [590/737] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:17.170  [591/737] Compiling C object drivers/librte_bus_vdev.so.23.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:02:17.170  [592/737] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o
00:02:17.170  [593/737] Linking target drivers/librte_bus_vdev.so.23.0
00:02:17.429  [594/737] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o
00:02:17.429  [595/737] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o
00:02:17.429  [596/737] Linking static target drivers/libtmp_rte_mempool_ring.a
00:02:17.429  [597/737] Generating symbol file drivers/librte_bus_vdev.so.23.0.p/librte_bus_vdev.so.23.0.symbols
00:02:17.429  [598/737] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output)
00:02:17.429  [599/737] Linking target drivers/librte_bus_pci.so.23.0
00:02:17.429  [600/737] Generating drivers/rte_mempool_ring.pmd.c with a custom command
00:02:17.429  [601/737] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:02:17.429  [602/737] Linking static target drivers/librte_mempool_ring.a
00:02:17.429  [603/737] Compiling C object drivers/librte_mempool_ring.so.23.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:02:17.688  [604/737] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o
00:02:17.688  [605/737] Linking target drivers/librte_mempool_ring.so.23.0
00:02:17.688  [606/737] Generating symbol file drivers/librte_bus_pci.so.23.0.p/librte_bus_pci.so.23.0.symbols
00:02:17.982  [607/737] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o
00:02:17.982  [608/737] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o
00:02:18.268  [609/737] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o
00:02:18.268  [610/737] Linking static target drivers/net/i40e/base/libi40e_base.a
00:02:18.268  [611/737] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o
00:02:18.839  [612/737] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o
00:02:18.839  [613/737] Linking static target drivers/net/i40e/libi40e_avx512_lib.a
00:02:18.839  [614/737] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o
00:02:18.839  [615/737] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o
00:02:18.839  [616/737] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o
00:02:19.098  [617/737] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o
00:02:19.098  [618/737] Generating drivers/rte_net_i40e_def with a custom command
00:02:19.098  [619/737] Generating drivers/rte_net_i40e_mingw with a custom command
00:02:19.357  [620/737] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o
00:02:19.616  [621/737] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o
00:02:19.874  [622/737] Compiling C object app/dpdk-pdump.p/pdump_main.c.o
00:02:19.874  [623/737] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o
00:02:20.132  [624/737] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o
00:02:20.132  [625/737] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o
00:02:20.132  [626/737] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o
00:02:20.132  [627/737] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o
00:02:20.132  [628/737] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o
00:02:20.132  [629/737] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o
00:02:20.390  [630/737] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_avx2.c.o
00:02:20.390  [631/737] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o
00:02:20.648  [632/737] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o
00:02:20.906  [633/737] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o
00:02:20.906  [634/737] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o
00:02:20.906  [635/737] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o
00:02:20.906  [636/737] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o
00:02:21.164  [637/737] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o
00:02:21.164  [638/737] Linking static target drivers/libtmp_rte_net_i40e.a
00:02:21.164  [639/737] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o
00:02:21.164  [640/737] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o
00:02:21.164  [641/737] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o
00:02:21.422  [642/737] Generating drivers/rte_net_i40e.pmd.c with a custom command
00:02:21.422  [643/737] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o
00:02:21.422  [644/737] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o
00:02:21.422  [645/737] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o
00:02:21.422  [646/737] Compiling C object drivers/librte_net_i40e.so.23.0.p/meson-generated_.._rte_net_i40e.pmd.c.o
00:02:21.422  [647/737] Linking static target drivers/librte_net_i40e.a
00:02:21.681  [648/737] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o
00:02:21.681  [649/737] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o
00:02:21.681  [650/737] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o
00:02:21.681  [651/737] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o
00:02:21.939  [652/737] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o
00:02:21.939  [653/737] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o
00:02:21.939  [654/737] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o
00:02:21.939  [655/737] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o
00:02:22.197  [656/737] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o
00:02:22.197  [657/737] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o
00:02:22.197  [658/737] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o
00:02:22.197  [659/737] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o
00:02:22.197  [660/737] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output)
00:02:22.197  [661/737] Linking target drivers/librte_net_i40e.so.23.0
00:02:22.456  [662/737] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o
00:02:22.456  [663/737] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o
00:02:22.456  [664/737] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o
00:02:22.714  [665/737] Linking static target lib/librte_vhost.a
00:02:22.714  [666/737] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o
00:02:22.714  [667/737] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o
00:02:22.972  [668/737] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o
00:02:23.231  [669/737] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o
00:02:23.231  [670/737] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o
00:02:23.231  [671/737] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o
00:02:23.231  [672/737] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o
00:02:23.231  [673/737] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o
00:02:23.231  [674/737] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o
00:02:23.489  [675/737] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o
00:02:23.749  [676/737] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o
00:02:23.749  [677/737] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o
00:02:23.749  [678/737] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o
00:02:23.749  [679/737] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o
00:02:23.749  [680/737] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o
00:02:24.008  [681/737] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o
00:02:24.008  [682/737] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o
00:02:24.008  [683/737] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o
00:02:24.008  [684/737] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o
00:02:24.008  [685/737] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o
00:02:24.008  [686/737] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o
00:02:24.267  [687/737] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output)
00:02:24.267  [688/737] Linking target lib/librte_vhost.so.23.0
00:02:24.267  [689/737] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o
00:02:24.267  [690/737] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o
00:02:24.526  [691/737] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o
00:02:24.526  [692/737] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o
00:02:24.526  [693/737] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o
00:02:24.526  [694/737] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o
00:02:24.784  [695/737] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o
00:02:25.043  [696/737] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o
00:02:25.043  [697/737] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o
00:02:25.043  [698/737] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o
00:02:25.043  [699/737] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o
00:02:25.303  [700/737] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o
00:02:25.303  [701/737] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o
00:02:25.563  [702/737] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o
00:02:25.563  [703/737] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o
00:02:25.563  [704/737] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o
00:02:25.822  [705/737] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o
00:02:25.822  [706/737] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o
00:02:25.822  [707/737] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o
00:02:26.081  [708/737] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o
00:02:26.342  [709/737] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o
00:02:26.342  [710/737] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o
00:02:26.342  [711/737] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o
00:02:26.342  [712/737] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o
00:02:26.342  [713/737] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o
00:02:26.342  [714/737] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o
00:02:26.601  [715/737] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o
00:02:26.873  [716/737] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o
00:02:26.873  [717/737] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o
00:02:28.780  [718/737] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o
00:02:28.780  [719/737] Linking static target lib/librte_pipeline.a
00:02:29.039  [720/737] Linking target app/dpdk-test-bbdev
00:02:29.039  [721/737] Linking target app/dpdk-test-cmdline
00:02:29.039  [722/737] Linking target app/dpdk-proc-info
00:02:29.039  [723/737] Linking target app/dpdk-test-eventdev
00:02:29.039  [724/737] Linking target app/dpdk-test-crypto-perf
00:02:29.039  [725/737] Linking target app/dpdk-test-compress-perf
00:02:29.299  [726/737] Linking target app/dpdk-test-acl
00:02:29.299  [727/737] Linking target app/dpdk-pdump
00:02:29.299  [728/737] Linking target app/dpdk-test-fib
00:02:29.558  [729/737] Linking target app/dpdk-test-flow-perf
00:02:29.558  [730/737] Linking target app/dpdk-test-pipeline
00:02:29.558  [731/737] Linking target app/dpdk-test-gpudev
00:02:29.558  [732/737] Linking target app/dpdk-test-regex
00:02:29.558  [733/737] Linking target app/dpdk-testpmd
00:02:29.559  [734/737] Linking target app/dpdk-test-security-perf
00:02:29.559  [735/737] Linking target app/dpdk-test-sad
00:02:33.746  [736/737] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output)
00:02:33.746  [737/737] Linking target lib/librte_pipeline.so.23.0
00:02:33.746   16:46:26	-- common/autobuild_common.sh@190 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 install
00:02:33.746  ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp'
00:02:33.746  [0/1] Installing files.
00:02:34.008  Installing subdir /home/vagrant/spdk_repo/dpdk/examples to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples
00:02:34.008  Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:34.008  Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:34.008  Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:34.008  Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:34.008  Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:34.008  Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/kni.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:34.008  Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:34.008  Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:34.008  Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:34.008  Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:34.008  Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/kni.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:34.008  Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:34.008  Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:34.008  Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:34.008  Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:34.008  Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:34.008  Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:34.008  Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:34.008  Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:34.008  Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:34.008  Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:34.008  Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:34.008  Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:34.008  Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:34.008  Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:34.008  Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:34.008  Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:34.008  Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:34.008  Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:34.008  Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:02:34.008  Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/tap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:02:34.008  Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/kni.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:02:34.008  Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:02:34.008  Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:02:34.008  Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:02:34.008  Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:02:34.008  Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:02:34.008  Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/firewall.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:02:34.008  Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline
00:02:34.008  Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline
00:02:34.008  Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline
00:02:34.009  Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline
00:02:34.009  Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline
00:02:34.009  Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks
00:02:34.009  Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks
00:02:34.009  Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd
00:02:34.009  Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/node/node.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/node
00:02:34.009  Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/node/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/node
00:02:34.009  Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server
00:02:34.009  Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server
00:02:34.009  Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server
00:02:34.009  Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server
00:02:34.009  Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server
00:02:34.009  Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server
00:02:34.009  Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/shared
00:02:34.009  Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process
00:02:34.009  Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp
00:02:34.009  Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp
00:02:34.009  Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp
00:02:34.009  Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp
00:02:34.009  Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp
00:02:34.009  Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp
00:02:34.009  Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp
00:02:34.009  Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp
00:02:34.009  Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp
00:02:34.009  Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp
00:02:34.009  Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp
00:02:34.009  Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:02:34.009  Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:02:34.009  Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:02:34.009  Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:02:34.009  Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:02:34.009  Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:02:34.009  Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client
00:02:34.009  Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client
00:02:34.009  Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared
00:02:34.009  Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/app_thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:02:34.009  Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_red.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:02:34.009  Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_pie.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:02:34.009  Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:02:34.009  Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:02:34.009  Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:02:34.009  Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:02:34.009  Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cmdline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:02:34.009  Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:02:34.009  Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_ov.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:02:34.009  Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:02:34.009  Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:02:34.009  Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:02:34.009  Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/stats.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:02:34.009  Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:02:34.009  Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:02:34.009  Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:02:34.009  Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:02:34.009  Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:02:34.009  Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:02:34.009  Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:02:34.009  Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:02:34.009  Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:02:34.009  Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:02:34.009  Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool
00:02:34.009  Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app
00:02:34.009  Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app
00:02:34.009  Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app
00:02:34.009  Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app
00:02:34.009  Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib
00:02:34.009  Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib
00:02:34.009  Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib
00:02:34.009  Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa
00:02:34.009  Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/vdpa_blk_compact.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa
00:02:34.009  Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa
00:02:34.009  Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost
00:02:34.009  Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost
00:02:34.009  Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/virtio_net.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost
00:02:34.009  Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost
00:02:34.009  Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/ntb_fwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb
00:02:34.009  Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb
00:02:34.009  Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:34.009  Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:34.009  Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:34.009  Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:34.009  Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:34.009  Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:34.009  Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:34.010  Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:34.010  Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:34.010  Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:34.010  Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:34.010  Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:34.010  Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:34.010  Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:34.010  Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:34.010  Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:02:34.010  Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:02:34.010  Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:02:34.010  Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:02:34.010  Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:02:34.010  Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:02:34.010  Installing /home/vagrant/spdk_repo/dpdk/examples/timer/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer
00:02:34.010  Installing /home/vagrant/spdk_repo/dpdk/examples/timer/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer
00:02:34.010  Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd
00:02:34.010  Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd
00:02:34.010  Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb
00:02:34.010  Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb
00:02:34.010  Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter
00:02:34.010  Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter
00:02:34.010  Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter
00:02:34.010  Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter
00:02:34.010  Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter
00:02:34.010  Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:34.010  Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:34.010  Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:34.010  Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:34.010  Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:34.010  Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:34.010  Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:34.010  Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:34.010  Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:34.010  Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:34.010  Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_fib.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:34.010  Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:34.010  Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:34.010  Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:34.010  Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:34.010  Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:34.010  Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:34.010  Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:34.010  Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:34.010  Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:34.010  Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:34.010  Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:34.010  Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:34.010  Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:34.010  Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:34.010  Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:34.010  Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:34.010  Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_route.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:34.010  Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:34.010  Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:34.010  Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:34.010  Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:34.010  Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:34.010  Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor
00:02:34.010  Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor
00:02:34.010  Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/basicfwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton
00:02:34.010  Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton
00:02:34.010  Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond
00:02:34.010  Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond
00:02:34.010  Installing /home/vagrant/spdk_repo/dpdk/examples/bond/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond
00:02:34.010  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:34.010  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep1.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:34.010  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/rt.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:34.010  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:34.010  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:34.010  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:34.010  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:34.010  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:34.010  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:34.010  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:34.010  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_process.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:34.010  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:34.010  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipip.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:34.010  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:34.010  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:34.010  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:34.010  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:34.010  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:34.010  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp4.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:34.010  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp6.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:34.010  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:34.010  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:34.010  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:34.010  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:34.011  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep0.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:34.011  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:34.011  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:34.011  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:34.011  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:34.011  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:34.011  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:34.011  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:34.011  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:34.011  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:34.011  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:34.011  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:34.011  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:34.011  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:34.011  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:34.011  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:34.011  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:34.011  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:34.011  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:34.011  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:34.011  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/load_env.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:34.011  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:34.011  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/linux_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:34.011  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:34.011  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:34.011  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/run_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:34.011  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:34.011  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:34.011  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:34.011  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:34.011  Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk
00:02:34.011  Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk_compat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk
00:02:34.011  Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk_spec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk
00:02:34.011  Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk
00:02:34.011  Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk
00:02:34.011  Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk
00:02:34.011  Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto
00:02:34.011  Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto
00:02:34.011  Installing /home/vagrant/spdk_repo/dpdk/examples/dma/dmafwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma
00:02:34.011  Installing /home/vagrant/spdk_repo/dpdk/examples/dma/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma
00:02:34.011  Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt
00:02:34.011  Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt
00:02:34.011  Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld
00:02:34.011  Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld
00:02:34.011  Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto
00:02:34.011  Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto
00:02:34.011  Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive
00:02:34.011  Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive
00:02:34.011  Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive
00:02:34.011  Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive
00:02:34.011  Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent
00:02:34.011  Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent
00:02:34.011  Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t1.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf
00:02:34.011  Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t2.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf
00:02:34.011  Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t3.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf
00:02:34.011  Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/dummy.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf
00:02:34.011  Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/README to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf
00:02:34.011  Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_sha.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:02:34.011  Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:02:34.011  Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_aes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:02:34.011  Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ccm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:02:34.011  Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_gcm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:02:34.011  Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:02:34.011  Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:02:34.011  Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_cmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:02:34.011  Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_xts.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:02:34.011  Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:02:34.011  Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_rsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:02:34.011  Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:02:34.011  Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_hmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:02:34.011  Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_tdes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:02:34.011  Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:02:34.011  Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:02:34.011  Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast
00:02:34.011  Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast
00:02:34.011  Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly
00:02:34.011  Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly
00:02:34.011  Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats
00:02:34.011  Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats
00:02:34.011  Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app
00:02:34.011  Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app
00:02:34.011  Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation
00:02:34.011  Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation
00:02:34.012  Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering
00:02:34.012  Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering
00:02:34.012  Installing /home/vagrant/spdk_repo/dpdk/examples/common/pkt_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common
00:02:34.012  Installing /home/vagrant/spdk_repo/dpdk/examples/common/altivec/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/altivec
00:02:34.012  Installing /home/vagrant/spdk_repo/dpdk/examples/common/neon/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/neon
00:02:34.012  Installing /home/vagrant/spdk_repo/dpdk/examples/common/sse/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/sse
00:02:34.012  Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph
00:02:34.012  Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph
00:02:34.012  Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/ptpclient.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient
00:02:34.012  Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient
00:02:34.012  Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq
00:02:34.012  Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq
00:02:34.012  Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/flow_classify.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify
00:02:34.012  Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify
00:02:34.012  Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/ipv4_rules_file.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify
00:02:34.012  Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline
00:02:34.012  Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline
00:02:34.012  Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline
00:02:34.012  Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline
00:02:34.012  Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline
00:02:34.012  Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline
00:02:34.012  Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat
00:02:34.012  Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat
00:02:34.012  Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat
00:02:34.012  Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat
00:02:34.012  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:02:34.012  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:02:34.012  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:02:34.012  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:02:34.012  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:02:34.012  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:02:34.012  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:02:34.012  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:02:34.012  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:02:34.012  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:02:34.012  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:34.012  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:34.012  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:34.012  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:34.012  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:34.012  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:34.012  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:34.012  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:34.012  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:34.012  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:34.012  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:34.012  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:34.012  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:34.012  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:34.012  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:34.012  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:34.012  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:34.012  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:34.012  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:34.012  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:34.012  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:34.012  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:34.012  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:34.012  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/pcap.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:34.012  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:34.012  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_routing_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:34.012  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:34.012  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:34.012  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:34.012  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:34.012  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:34.012  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:34.012  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:34.012  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ethdev.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:34.012  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/packet.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:34.012  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:34.012  Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering
00:02:34.012  Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering
00:02:34.012  Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/flow_blocks.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering
00:02:34.012  Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores
00:02:34.012  Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores
00:02:34.012  Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power
00:02:34.012  Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power
00:02:34.012  Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power
00:02:34.013  Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power
00:02:34.013  Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power
00:02:34.013  Installing lib/librte_kvargs.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:34.013  Installing lib/librte_kvargs.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:34.013  Installing lib/librte_telemetry.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:34.013  Installing lib/librte_telemetry.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:34.013  Installing lib/librte_eal.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:34.013  Installing lib/librte_eal.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:34.013  Installing lib/librte_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:34.013  Installing lib/librte_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:34.013  Installing lib/librte_rcu.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:34.013  Installing lib/librte_rcu.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:34.013  Installing lib/librte_mempool.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:34.013  Installing lib/librte_mempool.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:34.013  Installing lib/librte_mbuf.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:34.013  Installing lib/librte_mbuf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:34.272  Installing lib/librte_net.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:34.272  Installing lib/librte_net.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:34.272  Installing lib/librte_meter.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:34.272  Installing lib/librte_meter.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:34.272  Installing lib/librte_ethdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:34.272  Installing lib/librte_ethdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:34.272  Installing lib/librte_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:34.272  Installing lib/librte_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:34.272  Installing lib/librte_cmdline.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:34.272  Installing lib/librte_cmdline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:34.272  Installing lib/librte_metrics.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:34.272  Installing lib/librte_metrics.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:34.272  Installing lib/librte_hash.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:34.272  Installing lib/librte_hash.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:34.272  Installing lib/librte_timer.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:34.272  Installing lib/librte_timer.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:34.272  Installing lib/librte_acl.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:34.272  Installing lib/librte_acl.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:34.272  Installing lib/librte_bbdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:34.272  Installing lib/librte_bbdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:34.272  Installing lib/librte_bitratestats.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:34.272  Installing lib/librte_bitratestats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:34.272  Installing lib/librte_bpf.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:34.272  Installing lib/librte_bpf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:34.272  Installing lib/librte_cfgfile.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:34.272  Installing lib/librte_cfgfile.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:34.272  Installing lib/librte_compressdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:34.272  Installing lib/librte_compressdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:34.272  Installing lib/librte_cryptodev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:34.272  Installing lib/librte_cryptodev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:34.272  Installing lib/librte_distributor.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:34.272  Installing lib/librte_distributor.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:34.272  Installing lib/librte_efd.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:34.272  Installing lib/librte_efd.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:34.272  Installing lib/librte_eventdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:34.272  Installing lib/librte_eventdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:34.272  Installing lib/librte_gpudev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:34.272  Installing lib/librte_gpudev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:34.272  Installing lib/librte_gro.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:34.272  Installing lib/librte_gro.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:34.272  Installing lib/librte_gso.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:34.272  Installing lib/librte_gso.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:34.272  Installing lib/librte_ip_frag.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:34.272  Installing lib/librte_ip_frag.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:34.272  Installing lib/librte_jobstats.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:34.272  Installing lib/librte_jobstats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:34.272  Installing lib/librte_latencystats.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:34.272  Installing lib/librte_latencystats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:34.272  Installing lib/librte_lpm.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:34.272  Installing lib/librte_lpm.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:34.272  Installing lib/librte_member.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:34.272  Installing lib/librte_member.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:34.272  Installing lib/librte_pcapng.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:34.272  Installing lib/librte_pcapng.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:34.272  Installing lib/librte_power.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:34.272  Installing lib/librte_power.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:34.273  Installing lib/librte_rawdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:34.273  Installing lib/librte_rawdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:34.273  Installing lib/librte_regexdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:34.273  Installing lib/librte_regexdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:34.273  Installing lib/librte_dmadev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:34.273  Installing lib/librte_dmadev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:34.273  Installing lib/librte_rib.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:34.273  Installing lib/librte_rib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:34.273  Installing lib/librte_reorder.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:34.273  Installing lib/librte_reorder.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:34.273  Installing lib/librte_sched.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:34.273  Installing lib/librte_sched.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:34.273  Installing lib/librte_security.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:34.273  Installing lib/librte_security.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:34.273  Installing lib/librte_stack.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:34.273  Installing lib/librte_stack.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:34.273  Installing lib/librte_vhost.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:34.273  Installing lib/librte_vhost.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:34.273  Installing lib/librte_ipsec.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:34.273  Installing lib/librte_ipsec.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:34.273  Installing lib/librte_fib.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:34.273  Installing lib/librte_fib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:34.273  Installing lib/librte_port.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:34.273  Installing lib/librte_port.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:34.273  Installing lib/librte_pdump.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:34.273  Installing lib/librte_pdump.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:34.273  Installing lib/librte_table.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:34.273  Installing lib/librte_table.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:34.273  Installing lib/librte_pipeline.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:34.273  Installing lib/librte_pipeline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:34.273  Installing lib/librte_graph.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:34.273  Installing lib/librte_graph.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:34.273  Installing lib/librte_node.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:34.273  Installing lib/librte_node.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:34.273  Installing drivers/librte_bus_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:34.273  Installing drivers/librte_bus_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0
00:02:34.273  Installing drivers/librte_bus_vdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:34.273  Installing drivers/librte_bus_vdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0
00:02:34.273  Installing drivers/librte_mempool_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:34.273  Installing drivers/librte_mempool_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0
00:02:34.273  Installing drivers/librte_net_i40e.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:34.273  Installing drivers/librte_net_i40e.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0
00:02:34.273  Installing app/dpdk-pdump to /home/vagrant/spdk_repo/dpdk/build/bin
00:02:34.273  Installing app/dpdk-proc-info to /home/vagrant/spdk_repo/dpdk/build/bin
00:02:34.273  Installing app/dpdk-test-acl to /home/vagrant/spdk_repo/dpdk/build/bin
00:02:34.273  Installing app/dpdk-test-bbdev to /home/vagrant/spdk_repo/dpdk/build/bin
00:02:34.273  Installing app/dpdk-test-cmdline to /home/vagrant/spdk_repo/dpdk/build/bin
00:02:34.273  Installing app/dpdk-test-compress-perf to /home/vagrant/spdk_repo/dpdk/build/bin
00:02:34.273  Installing app/dpdk-test-crypto-perf to /home/vagrant/spdk_repo/dpdk/build/bin
00:02:34.273  Installing app/dpdk-test-eventdev to /home/vagrant/spdk_repo/dpdk/build/bin
00:02:34.534  Installing app/dpdk-test-fib to /home/vagrant/spdk_repo/dpdk/build/bin
00:02:34.534  Installing app/dpdk-test-flow-perf to /home/vagrant/spdk_repo/dpdk/build/bin
00:02:34.534  Installing app/dpdk-test-gpudev to /home/vagrant/spdk_repo/dpdk/build/bin
00:02:34.534  Installing app/dpdk-test-pipeline to /home/vagrant/spdk_repo/dpdk/build/bin
00:02:34.534  Installing app/dpdk-testpmd to /home/vagrant/spdk_repo/dpdk/build/bin
00:02:34.534  Installing app/dpdk-test-regex to /home/vagrant/spdk_repo/dpdk/build/bin
00:02:34.534  Installing app/dpdk-test-sad to /home/vagrant/spdk_repo/dpdk/build/bin
00:02:34.534  Installing app/dpdk-test-security-perf to /home/vagrant/spdk_repo/dpdk/build/bin
00:02:34.534  Installing /home/vagrant/spdk_repo/dpdk/config/rte_config.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.534  Installing /home/vagrant/spdk_repo/dpdk/lib/kvargs/rte_kvargs.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.534  Installing /home/vagrant/spdk_repo/dpdk/lib/telemetry/rte_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.534  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:02:34.534  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:02:34.534  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:02:34.534  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:02:34.534  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:02:34.534  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:02:34.534  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:02:34.534  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:02:34.534  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:02:34.534  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:02:34.534  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:02:34.534  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:02:34.534  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.534  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.534  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.534  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.534  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.534  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.534  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.534  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.534  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.534  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rtm.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.534  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.534  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.534  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.534  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_32.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.534  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_64.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.534  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.534  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.534  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_alarm.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.534  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitmap.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.534  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitops.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.534  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_branch_prediction.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.534  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bus.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.534  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_class.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.534  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_common.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.534  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_compat.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.534  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_debug.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.534  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_dev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.534  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_devargs.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.534  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.534  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_memconfig.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.534  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_trace.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.534  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_errno.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.534  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_epoll.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.534  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_fbarray.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.534  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hexdump.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.534  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hypervisor.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.534  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_interrupts.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.534  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_keepalive.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.534  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_launch.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.534  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.534  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_log.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.534  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_malloc.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.534  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_mcslock.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.534  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memory.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.534  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memzone.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.534  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.534  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_features.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.534  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_per_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.535  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pflock.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.535  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_random.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.535  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_reciprocal.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.535  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqcount.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.535  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqlock.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.535  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.535  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service_component.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.535  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_string_fns.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.535  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_tailq.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.535  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_thread.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.535  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_ticketlock.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.535  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_time.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.535  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.535  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.535  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point_register.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.535  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_uuid.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.535  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_version.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.535  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_vfio.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.535  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/linux/include/rte_os.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.535  Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.535  Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_core.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.535  Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.535  Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.535  Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_c11_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.535  Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_generic_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.535  Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.535  Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.535  Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.535  Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.535  Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_zc.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.535  Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.535  Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.535  Installing /home/vagrant/spdk_repo/dpdk/lib/rcu/rte_rcu_qsbr.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.535  Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.535  Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.535  Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.535  Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.535  Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_core.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.535  Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_ptype.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.535  Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.535  Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_dyn.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.535  Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.535  Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tcp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.535  Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_udp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.535  Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_esp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.535  Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_sctp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.535  Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_icmp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.535  Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_arp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.535  Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ether.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.535  Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_macsec.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.535  Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_vxlan.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.535  Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gre.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.535  Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gtp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.535  Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.535  Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net_crc.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.535  Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_mpls.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.535  Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_higig.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.535  Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ecpri.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.535  Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_geneve.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.535  Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_l2tpv2.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.535  Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ppp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.535  Installing /home/vagrant/spdk_repo/dpdk/lib/meter/rte_meter.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.535  Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_cman.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.535  Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.535  Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.535  Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.535  Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_dev_info.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.535  Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.535  Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow_driver.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.535  Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.535  Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr_driver.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.535  Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.535  Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm_driver.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.535  Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.535  Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_eth_ctrl.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.535  Installing /home/vagrant/spdk_repo/dpdk/lib/pci/rte_pci.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.535  Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.535  Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.535  Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_num.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.535  Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.535  Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.535  Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_string.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.535  Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_rdline.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.535  Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_vt100.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.535  Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_socket.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.535  Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_cirbuf.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.535  Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_portlist.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.535  Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.535  Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.535  Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_fbk_hash.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.535  Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash_crc.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.535  Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.535  Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_jhash.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.535  Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.535  Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.535  Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.535  Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_generic.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.535  Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_sw.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.535  Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_x86.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.535  Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_x86_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.535  Installing /home/vagrant/spdk_repo/dpdk/lib/timer/rte_timer.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.535  Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.535  Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl_osdep.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.535  Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.536  Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.536  Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_op.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.536  Installing /home/vagrant/spdk_repo/dpdk/lib/bitratestats/rte_bitrate.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.536  Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/bpf_def.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.536  Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.536  Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.536  Installing /home/vagrant/spdk_repo/dpdk/lib/cfgfile/rte_cfgfile.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.536  Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_compressdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.536  Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_comp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.536  Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.536  Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.536  Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.536  Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.536  Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_sym.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.536  Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_asym.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.536  Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_core.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.536  Installing /home/vagrant/spdk_repo/dpdk/lib/distributor/rte_distributor.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.536  Installing /home/vagrant/spdk_repo/dpdk/lib/efd/rte_efd.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.536  Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.536  Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.536  Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.536  Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_ring.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.536  Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_timer_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.536  Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.536  Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.536  Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.536  Installing /home/vagrant/spdk_repo/dpdk/lib/gpudev/rte_gpudev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.536  Installing /home/vagrant/spdk_repo/dpdk/lib/gro/rte_gro.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.536  Installing /home/vagrant/spdk_repo/dpdk/lib/gso/rte_gso.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.536  Installing /home/vagrant/spdk_repo/dpdk/lib/ip_frag/rte_ip_frag.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.536  Installing /home/vagrant/spdk_repo/dpdk/lib/jobstats/rte_jobstats.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.536  Installing /home/vagrant/spdk_repo/dpdk/lib/latencystats/rte_latencystats.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.536  Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.536  Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm6.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.536  Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.536  Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.536  Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_scalar.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.536  Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.536  Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sve.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.536  Installing /home/vagrant/spdk_repo/dpdk/lib/member/rte_member.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.536  Installing /home/vagrant/spdk_repo/dpdk/lib/pcapng/rte_pcapng.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.536  Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.536  Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_empty_poll.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.536  Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_intel_uncore.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.536  Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_pmd_mgmt.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.536  Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_guest_channel.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.536  Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.536  Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.536  Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.536  Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_driver.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.536  Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.536  Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.536  Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_core.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.536  Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.536  Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib6.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.536  Installing /home/vagrant/spdk_repo/dpdk/lib/reorder/rte_reorder.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.536  Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_approx.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.536  Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_red.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.536  Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.536  Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched_common.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.536  Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_pie.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.536  Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.536  Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security_driver.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.536  Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.536  Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_std.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.536  Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.536  Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_generic.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.536  Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_c11.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.536  Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_stubs.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.536  Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vdpa.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.536  Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.536  Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_async.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.536  Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.536  Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.536  Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sa.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.536  Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sad.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.536  Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_group.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.536  Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.536  Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib6.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.536  Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.536  Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.536  Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_frag.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.536  Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ras.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.536  Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.536  Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.536  Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sched.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.536  Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.536  Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sym_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.536  Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.536  Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.536  Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.536  Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.536  Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.536  Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.536  Installing /home/vagrant/spdk_repo/dpdk/lib/pdump/rte_pdump.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.536  Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.536  Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.536  Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.536  Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_em.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.536  Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_learner.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.536  Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_selector.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.536  Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_wm.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.536  Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.536  Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_acl.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.537  Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_array.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.537  Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.537  Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_cuckoo.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.537  Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.537  Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.537  Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm_ipv6.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.537  Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_stub.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.537  Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.537  Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_x86.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.537  Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.537  Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.537  Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_port_in_action.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.537  Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_table_action.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.537  Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.537  Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_extern.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.537  Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ctl.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.537  Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.537  Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.537  Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip4_api.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.537  Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_eth_api.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.537  Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/pci/rte_bus_pci.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.537  Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.537  Installing /home/vagrant/spdk_repo/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.537  Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-devbind.py to /home/vagrant/spdk_repo/dpdk/build/bin
00:02:34.537  Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-pmdinfo.py to /home/vagrant/spdk_repo/dpdk/build/bin
00:02:34.537  Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry.py to /home/vagrant/spdk_repo/dpdk/build/bin
00:02:34.537  Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-hugepages.py to /home/vagrant/spdk_repo/dpdk/build/bin
00:02:34.537  Installing /home/vagrant/spdk_repo/dpdk/build-tmp/rte_build_config.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.537  Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig
00:02:34.537  Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig
00:02:34.537  Installing symlink pointing to librte_kvargs.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so.23
00:02:34.537  Installing symlink pointing to librte_kvargs.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so
00:02:34.537  Installing symlink pointing to librte_telemetry.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so.23
00:02:34.537  Installing symlink pointing to librte_telemetry.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so
00:02:34.537  Installing symlink pointing to librte_eal.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so.23
00:02:34.537  Installing symlink pointing to librte_eal.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so
00:02:34.537  Installing symlink pointing to librte_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so.23
00:02:34.537  Installing symlink pointing to librte_ring.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so
00:02:34.537  Installing symlink pointing to librte_rcu.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so.23
00:02:34.537  Installing symlink pointing to librte_rcu.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so
00:02:34.537  Installing symlink pointing to librte_mempool.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so.23
00:02:34.537  Installing symlink pointing to librte_mempool.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so
00:02:34.537  Installing symlink pointing to librte_mbuf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so.23
00:02:34.537  Installing symlink pointing to librte_mbuf.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so
00:02:34.537  Installing symlink pointing to librte_net.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so.23
00:02:34.537  Installing symlink pointing to librte_net.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so
00:02:34.537  Installing symlink pointing to librte_meter.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so.23
00:02:34.537  Installing symlink pointing to librte_meter.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so
00:02:34.537  Installing symlink pointing to librte_ethdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so.23
00:02:34.537  Installing symlink pointing to librte_ethdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so
00:02:34.537  Installing symlink pointing to librte_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so.23
00:02:34.537  Installing symlink pointing to librte_pci.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so
00:02:34.537  Installing symlink pointing to librte_cmdline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so.23
00:02:34.537  Installing symlink pointing to librte_cmdline.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so
00:02:34.537  Installing symlink pointing to librte_metrics.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so.23
00:02:34.537  Installing symlink pointing to librte_metrics.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so
00:02:34.537  Installing symlink pointing to librte_hash.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so.23
00:02:34.537  Installing symlink pointing to librte_hash.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so
00:02:34.537  Installing symlink pointing to librte_timer.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so.23
00:02:34.537  Installing symlink pointing to librte_timer.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so
00:02:34.537  Installing symlink pointing to librte_acl.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so.23
00:02:34.537  Installing symlink pointing to librte_acl.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so
00:02:34.537  Installing symlink pointing to librte_bbdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so.23
00:02:34.537  Installing symlink pointing to librte_bbdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so
00:02:34.537  Installing symlink pointing to librte_bitratestats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so.23
00:02:34.537  Installing symlink pointing to librte_bitratestats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so
00:02:34.537  Installing symlink pointing to librte_bpf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so.23
00:02:34.537  Installing symlink pointing to librte_bpf.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so
00:02:34.537  Installing symlink pointing to librte_cfgfile.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so.23
00:02:34.537  Installing symlink pointing to librte_cfgfile.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so
00:02:34.537  Installing symlink pointing to librte_compressdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so.23
00:02:34.537  Installing symlink pointing to librte_compressdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so
00:02:34.537  Installing symlink pointing to librte_cryptodev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so.23
00:02:34.537  Installing symlink pointing to librte_cryptodev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so
00:02:34.537  Installing symlink pointing to librte_distributor.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so.23
00:02:34.537  Installing symlink pointing to librte_distributor.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so
00:02:34.537  Installing symlink pointing to librte_efd.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so.23
00:02:34.537  Installing symlink pointing to librte_efd.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so
00:02:34.537  Installing symlink pointing to librte_eventdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so.23
00:02:34.537  Installing symlink pointing to librte_eventdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so
00:02:34.537  Installing symlink pointing to librte_gpudev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so.23
00:02:34.537  Installing symlink pointing to librte_gpudev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so
00:02:34.537  Installing symlink pointing to librte_gro.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so.23
00:02:34.537  Installing symlink pointing to librte_gro.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so
00:02:34.537  Installing symlink pointing to librte_gso.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so.23
00:02:34.537  Installing symlink pointing to librte_gso.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so
00:02:34.537  Installing symlink pointing to librte_ip_frag.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so.23
00:02:34.537  Installing symlink pointing to librte_ip_frag.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so
00:02:34.537  Installing symlink pointing to librte_jobstats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so.23
00:02:34.537  Installing symlink pointing to librte_jobstats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so
00:02:34.537  Installing symlink pointing to librte_latencystats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so.23
00:02:34.537  Installing symlink pointing to librte_latencystats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so
00:02:34.537  Installing symlink pointing to librte_lpm.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so.23
00:02:34.537  Installing symlink pointing to librte_lpm.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so
00:02:34.537  Installing symlink pointing to librte_member.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so.23
00:02:34.538  Installing symlink pointing to librte_member.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so
00:02:34.538  Installing symlink pointing to librte_pcapng.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so.23
00:02:34.538  Installing symlink pointing to librte_pcapng.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so
00:02:34.538  Installing symlink pointing to librte_power.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so.23
00:02:34.538  Installing symlink pointing to librte_power.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so
00:02:34.538  Installing symlink pointing to librte_rawdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so.23
00:02:34.538  Installing symlink pointing to librte_rawdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so
00:02:34.538  Installing symlink pointing to librte_regexdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so.23
00:02:34.538  Installing symlink pointing to librte_regexdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so
00:02:34.538  Installing symlink pointing to librte_dmadev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so.23
00:02:34.538  Installing symlink pointing to librte_dmadev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so
00:02:34.538  Installing symlink pointing to librte_rib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so.23
00:02:34.538  Installing symlink pointing to librte_rib.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so
00:02:34.538  Installing symlink pointing to librte_reorder.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so.23
00:02:34.538  Installing symlink pointing to librte_reorder.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so
00:02:34.538  Installing symlink pointing to librte_sched.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so.23
00:02:34.538  Installing symlink pointing to librte_sched.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so
00:02:34.538  Installing symlink pointing to librte_security.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so.23
00:02:34.538  Installing symlink pointing to librte_security.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so
00:02:34.538  Installing symlink pointing to librte_stack.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so.23
00:02:34.538  Installing symlink pointing to librte_stack.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so
00:02:34.538  Installing symlink pointing to librte_vhost.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so.23
00:02:34.538  Installing symlink pointing to librte_vhost.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so
00:02:34.538  Installing symlink pointing to librte_ipsec.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so.23
00:02:34.538  Installing symlink pointing to librte_ipsec.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so
00:02:34.538  Installing symlink pointing to librte_fib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so.23
00:02:34.538  Installing symlink pointing to librte_fib.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so
00:02:34.538  Installing symlink pointing to librte_port.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so.23
00:02:34.538  Installing symlink pointing to librte_port.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so
00:02:34.538  Installing symlink pointing to librte_pdump.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so.23
00:02:34.538  Installing symlink pointing to librte_pdump.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so
00:02:34.538  Installing symlink pointing to librte_table.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so.23
00:02:34.538  Installing symlink pointing to librte_table.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so
00:02:34.538  Installing symlink pointing to librte_pipeline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so.23
00:02:34.538  Installing symlink pointing to librte_pipeline.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so
00:02:34.538  Installing symlink pointing to librte_graph.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so.23
00:02:34.538  Installing symlink pointing to librte_graph.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so
00:02:34.538  Installing symlink pointing to librte_node.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so.23
00:02:34.538  Installing symlink pointing to librte_node.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so
00:02:34.538  Installing symlink pointing to librte_bus_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23
00:02:34.538  Installing symlink pointing to librte_bus_pci.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so
00:02:34.538  Installing symlink pointing to librte_bus_vdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23
00:02:34.538  Installing symlink pointing to librte_bus_vdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so
00:02:34.538  Installing symlink pointing to librte_mempool_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23
00:02:34.538  Installing symlink pointing to librte_mempool_ring.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so
00:02:34.538  Installing symlink pointing to librte_net_i40e.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23
00:02:34.538  Installing symlink pointing to librte_net_i40e.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so
00:02:34.538  Running custom install script '/bin/sh /home/vagrant/spdk_repo/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-23.0'
00:02:34.537  './librte_bus_pci.so' -> 'dpdk/pmds-23.0/librte_bus_pci.so'
00:02:34.537  './librte_bus_pci.so.23' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23'
00:02:34.537  './librte_bus_pci.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23.0'
00:02:34.537  './librte_bus_vdev.so' -> 'dpdk/pmds-23.0/librte_bus_vdev.so'
00:02:34.537  './librte_bus_vdev.so.23' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23'
00:02:34.537  './librte_bus_vdev.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23.0'
00:02:34.537  './librte_mempool_ring.so' -> 'dpdk/pmds-23.0/librte_mempool_ring.so'
00:02:34.537  './librte_mempool_ring.so.23' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23'
00:02:34.537  './librte_mempool_ring.so.23.0' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23.0'
00:02:34.537  './librte_net_i40e.so' -> 'dpdk/pmds-23.0/librte_net_i40e.so'
00:02:34.537  './librte_net_i40e.so.23' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23'
00:02:34.537  './librte_net_i40e.so.23.0' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23.0'
00:02:34.538    16:46:27	-- common/autobuild_common.sh@192 -- $ uname -s
00:02:34.538   16:46:27	-- common/autobuild_common.sh@192 -- $ [[ Linux == \F\r\e\e\B\S\D ]]
00:02:34.538   16:46:27	-- common/autobuild_common.sh@203 -- $ cat
00:02:34.538   16:46:27	-- common/autobuild_common.sh@208 -- $ cd /home/vagrant/spdk_repo/spdk
00:02:34.538  
00:02:34.538  real	0m44.502s
00:02:34.538  user	4m14.424s
00:02:34.538  sys	0m54.972s
00:02:34.538   16:46:27	-- common/autotest_common.sh@1115 -- $ xtrace_disable
00:02:34.538   16:46:27	-- common/autotest_common.sh@10 -- $ set +x
00:02:34.538  ************************************
00:02:34.538  END TEST build_native_dpdk
00:02:34.538  ************************************
00:02:34.538   16:46:27	-- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:02:34.538   16:46:27	-- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:02:34.538   16:46:27	-- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:02:34.538   16:46:27	-- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:02:34.538   16:46:27	-- spdk/autobuild.sh@57 -- $ [[ 1 -eq 1 ]]
00:02:34.538   16:46:27	-- spdk/autobuild.sh@58 -- $ unittest_build
00:02:34.538   16:46:27	-- common/autobuild_common.sh@416 -- $ run_test unittest_build _unittest_build
00:02:34.538   16:46:27	-- common/autotest_common.sh@1087 -- $ '[' 2 -le 1 ']'
00:02:34.538   16:46:27	-- common/autotest_common.sh@1093 -- $ xtrace_disable
00:02:34.538   16:46:27	-- common/autotest_common.sh@10 -- $ set +x
00:02:34.538  ************************************
00:02:34.538  START TEST unittest_build
00:02:34.538  ************************************
00:02:34.538   16:46:27	-- common/autotest_common.sh@1114 -- $ _unittest_build
00:02:34.538   16:46:27	-- common/autobuild_common.sh@407 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --enable-ubsan --enable-asan --enable-coverage --with-raid5f --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --without-shared
00:02:34.796  Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs...
00:02:34.796  DPDK libraries: /home/vagrant/spdk_repo/dpdk/build/lib
00:02:34.796  DPDK includes: /home/vagrant/spdk_repo/dpdk/build/include
00:02:34.796  Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:02:35.054  Using 'verbs' RDMA provider
00:02:50.514  Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/isa-l/spdk-isal.log)...done.
00:03:05.379  Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/isa-l-crypto/spdk-isal-crypto.log)...done.
00:03:05.637  Creating mk/config.mk...done.
00:03:05.637  Creating mk/cc.flags.mk...done.
00:03:05.637  Type 'make' to build.
00:03:05.637   16:46:58	-- common/autobuild_common.sh@408 -- $ make -j10
00:03:05.637  make[1]: Nothing to be done for 'all'.
00:03:23.725    CC lib/ut/ut.o
00:03:23.725    CC lib/ut_mock/mock.o
00:03:23.725    CC lib/log/log_flags.o
00:03:23.725    CC lib/log/log_deprecated.o
00:03:23.725    CC lib/log/log.o
00:03:23.725    LIB libspdk_ut_mock.a
00:03:23.725    LIB libspdk_log.a
00:03:23.725    LIB libspdk_ut.a
00:03:23.725    CC lib/util/base64.o
00:03:23.725    CC lib/util/bit_array.o
00:03:23.725    CC lib/util/cpuset.o
00:03:23.725    CC lib/util/crc32.o
00:03:23.725    CXX lib/trace_parser/trace.o
00:03:23.725    CC lib/util/crc16.o
00:03:23.725    CC lib/ioat/ioat.o
00:03:23.725    CC lib/dma/dma.o
00:03:23.725    CC lib/util/crc32c.o
00:03:23.725    CC lib/vfio_user/host/vfio_user_pci.o
00:03:23.725    CC lib/util/crc32_ieee.o
00:03:23.725    CC lib/util/crc64.o
00:03:23.725    LIB libspdk_dma.a
00:03:23.725    CC lib/util/dif.o
00:03:23.725    CC lib/util/fd.o
00:03:23.725    CC lib/util/file.o
00:03:23.725    CC lib/util/hexlify.o
00:03:23.725    CC lib/util/iov.o
00:03:23.725    CC lib/vfio_user/host/vfio_user.o
00:03:23.725    CC lib/util/math.o
00:03:23.725    CC lib/util/pipe.o
00:03:23.725    CC lib/util/strerror_tls.o
00:03:23.725    LIB libspdk_ioat.a
00:03:23.725    CC lib/util/string.o
00:03:23.725    CC lib/util/uuid.o
00:03:23.725    CC lib/util/fd_group.o
00:03:23.725    CC lib/util/xor.o
00:03:23.725    CC lib/util/zipf.o
00:03:23.725    LIB libspdk_vfio_user.a
00:03:23.725    LIB libspdk_util.a
00:03:23.725    CC lib/conf/conf.o
00:03:23.725    CC lib/vmd/vmd.o
00:03:23.725    CC lib/vmd/led.o
00:03:23.725    CC lib/rdma/common.o
00:03:23.725    CC lib/rdma/rdma_verbs.o
00:03:23.725    CC lib/idxd/idxd.o
00:03:23.725    CC lib/env_dpdk/env.o
00:03:23.725    CC lib/idxd/idxd_user.o
00:03:23.725    CC lib/json/json_parse.o
00:03:23.725    LIB libspdk_trace_parser.a
00:03:23.725    CC lib/env_dpdk/memory.o
00:03:23.984    CC lib/env_dpdk/pci.o
00:03:23.984    CC lib/env_dpdk/init.o
00:03:23.984    LIB libspdk_conf.a
00:03:23.984    CC lib/json/json_util.o
00:03:23.984    CC lib/json/json_write.o
00:03:23.984    CC lib/env_dpdk/threads.o
00:03:23.984    LIB libspdk_rdma.a
00:03:23.984    CC lib/env_dpdk/pci_ioat.o
00:03:23.984    CC lib/env_dpdk/pci_virtio.o
00:03:24.243    CC lib/env_dpdk/pci_vmd.o
00:03:24.243    CC lib/env_dpdk/pci_idxd.o
00:03:24.243    CC lib/env_dpdk/pci_event.o
00:03:24.243    CC lib/env_dpdk/sigbus_handler.o
00:03:24.243    CC lib/env_dpdk/pci_dpdk.o
00:03:24.243    CC lib/env_dpdk/pci_dpdk_2207.o
00:03:24.243    LIB libspdk_json.a
00:03:24.243    LIB libspdk_idxd.a
00:03:24.243    CC lib/env_dpdk/pci_dpdk_2211.o
00:03:24.243    LIB libspdk_vmd.a
00:03:24.243    CC lib/jsonrpc/jsonrpc_server.o
00:03:24.243    CC lib/jsonrpc/jsonrpc_server_tcp.o
00:03:24.243    CC lib/jsonrpc/jsonrpc_client_tcp.o
00:03:24.243    CC lib/jsonrpc/jsonrpc_client.o
00:03:24.502    LIB libspdk_jsonrpc.a
00:03:24.761    CC lib/rpc/rpc.o
00:03:25.020    LIB libspdk_rpc.a
00:03:25.020    LIB libspdk_env_dpdk.a
00:03:25.020    CC lib/trace/trace.o
00:03:25.020    CC lib/trace/trace_rpc.o
00:03:25.020    CC lib/trace/trace_flags.o
00:03:25.020    CC lib/notify/notify.o
00:03:25.020    CC lib/sock/sock_rpc.o
00:03:25.020    CC lib/sock/sock.o
00:03:25.020    CC lib/notify/notify_rpc.o
00:03:25.279    LIB libspdk_notify.a
00:03:25.279    LIB libspdk_trace.a
00:03:25.538    LIB libspdk_sock.a
00:03:25.538    CC lib/thread/thread.o
00:03:25.538    CC lib/thread/iobuf.o
00:03:25.798    CC lib/nvme/nvme_ctrlr_cmd.o
00:03:25.798    CC lib/nvme/nvme_fabric.o
00:03:25.798    CC lib/nvme/nvme_ctrlr.o
00:03:25.798    CC lib/nvme/nvme_ns_cmd.o
00:03:25.798    CC lib/nvme/nvme_ns.o
00:03:25.798    CC lib/nvme/nvme_pcie.o
00:03:25.798    CC lib/nvme/nvme_pcie_common.o
00:03:25.798    CC lib/nvme/nvme_qpair.o
00:03:25.798    CC lib/nvme/nvme.o
00:03:26.057    CC lib/nvme/nvme_quirks.o
00:03:26.317    CC lib/nvme/nvme_transport.o
00:03:26.317    CC lib/nvme/nvme_discovery.o
00:03:26.317    CC lib/nvme/nvme_ctrlr_ocssd_cmd.o
00:03:26.317    CC lib/nvme/nvme_ns_ocssd_cmd.o
00:03:26.576    CC lib/nvme/nvme_tcp.o
00:03:26.576    CC lib/nvme/nvme_opal.o
00:03:26.576    CC lib/nvme/nvme_io_msg.o
00:03:26.576    CC lib/nvme/nvme_poll_group.o
00:03:26.576    CC lib/nvme/nvme_zns.o
00:03:26.835    CC lib/nvme/nvme_cuse.o
00:03:26.835    CC lib/nvme/nvme_vfio_user.o
00:03:26.835    CC lib/nvme/nvme_rdma.o
00:03:26.835    LIB libspdk_thread.a
00:03:27.096    CC lib/init/json_config.o
00:03:27.096    CC lib/accel/accel.o
00:03:27.096    CC lib/accel/accel_rpc.o
00:03:27.096    CC lib/blob/blobstore.o
00:03:27.096    CC lib/virtio/virtio.o
00:03:27.096    CC lib/virtio/virtio_vhost_user.o
00:03:27.356    CC lib/blob/request.o
00:03:27.356    CC lib/init/subsystem.o
00:03:27.356    CC lib/init/subsystem_rpc.o
00:03:27.356    CC lib/init/rpc.o
00:03:27.356    CC lib/accel/accel_sw.o
00:03:27.356    CC lib/virtio/virtio_vfio_user.o
00:03:27.614    CC lib/virtio/virtio_pci.o
00:03:27.614    LIB libspdk_init.a
00:03:27.614    CC lib/blob/zeroes.o
00:03:27.614    CC lib/blob/blob_bs_dev.o
00:03:27.614    CC lib/event/app.o
00:03:27.614    CC lib/event/reactor.o
00:03:27.614    CC lib/event/log_rpc.o
00:03:27.614    CC lib/event/app_rpc.o
00:03:27.873    CC lib/event/scheduler_static.o
00:03:27.873    LIB libspdk_virtio.a
00:03:27.873    LIB libspdk_nvme.a
00:03:28.132    LIB libspdk_accel.a
00:03:28.132    LIB libspdk_event.a
00:03:28.132    CC lib/bdev/bdev_rpc.o
00:03:28.132    CC lib/bdev/bdev.o
00:03:28.132    CC lib/bdev/bdev_zone.o
00:03:28.132    CC lib/bdev/part.o
00:03:28.132    CC lib/bdev/scsi_nvme.o
00:03:30.039    LIB libspdk_blob.a
00:03:30.297    CC lib/lvol/lvol.o
00:03:30.297    CC lib/blobfs/blobfs.o
00:03:30.297    CC lib/blobfs/tree.o
00:03:30.864    LIB libspdk_bdev.a
00:03:30.864    CC lib/scsi/port.o
00:03:30.864    CC lib/scsi/lun.o
00:03:30.864    CC lib/nbd/nbd_rpc.o
00:03:30.864    CC lib/scsi/dev.o
00:03:30.864    CC lib/nbd/nbd.o
00:03:30.864    CC lib/scsi/scsi.o
00:03:30.864    CC lib/ftl/ftl_core.o
00:03:30.864    CC lib/nvmf/ctrlr.o
00:03:31.122    LIB libspdk_blobfs.a
00:03:31.122    CC lib/scsi/scsi_bdev.o
00:03:31.122    CC lib/scsi/scsi_pr.o
00:03:31.122    CC lib/nvmf/ctrlr_discovery.o
00:03:31.122    CC lib/nvmf/ctrlr_bdev.o
00:03:31.122    LIB libspdk_lvol.a
00:03:31.122    CC lib/nvmf/subsystem.o
00:03:31.122    CC lib/scsi/scsi_rpc.o
00:03:31.379    CC lib/scsi/task.o
00:03:31.379    CC lib/ftl/ftl_init.o
00:03:31.379    LIB libspdk_nbd.a
00:03:31.379    CC lib/ftl/ftl_layout.o
00:03:31.379    CC lib/ftl/ftl_debug.o
00:03:31.379    CC lib/ftl/ftl_io.o
00:03:31.379    CC lib/ftl/ftl_sb.o
00:03:31.638    CC lib/nvmf/nvmf.o
00:03:31.638    LIB libspdk_scsi.a
00:03:31.638    CC lib/nvmf/nvmf_rpc.o
00:03:31.638    CC lib/nvmf/transport.o
00:03:31.638    CC lib/nvmf/tcp.o
00:03:31.638    CC lib/nvmf/rdma.o
00:03:31.638    CC lib/ftl/ftl_l2p.o
00:03:31.638    CC lib/iscsi/conn.o
00:03:31.638    CC lib/ftl/ftl_l2p_flat.o
00:03:31.896    CC lib/ftl/ftl_nv_cache.o
00:03:31.896    CC lib/ftl/ftl_band.o
00:03:32.160    CC lib/ftl/ftl_band_ops.o
00:03:32.160    CC lib/ftl/ftl_writer.o
00:03:32.160    CC lib/iscsi/init_grp.o
00:03:32.475    CC lib/iscsi/iscsi.o
00:03:32.475    CC lib/iscsi/md5.o
00:03:32.475    CC lib/vhost/vhost.o
00:03:32.475    CC lib/ftl/ftl_rq.o
00:03:32.475    CC lib/ftl/ftl_reloc.o
00:03:32.475    CC lib/ftl/ftl_l2p_cache.o
00:03:32.475    CC lib/iscsi/param.o
00:03:32.475    CC lib/ftl/ftl_p2l.o
00:03:32.733    CC lib/vhost/vhost_rpc.o
00:03:32.733    CC lib/ftl/mngt/ftl_mngt.o
00:03:32.733    CC lib/ftl/mngt/ftl_mngt_bdev.o
00:03:32.991    CC lib/ftl/mngt/ftl_mngt_shutdown.o
00:03:32.991    CC lib/ftl/mngt/ftl_mngt_startup.o
00:03:32.991    CC lib/ftl/mngt/ftl_mngt_md.o
00:03:32.991    CC lib/ftl/mngt/ftl_mngt_misc.o
00:03:32.991    CC lib/iscsi/portal_grp.o
00:03:32.991    CC lib/iscsi/tgt_node.o
00:03:32.991    CC lib/vhost/vhost_scsi.o
00:03:32.991    CC lib/vhost/vhost_blk.o
00:03:32.991    CC lib/vhost/rte_vhost_user.o
00:03:33.249    CC lib/ftl/mngt/ftl_mngt_ioch.o
00:03:33.249    CC lib/ftl/mngt/ftl_mngt_l2p.o
00:03:33.249    CC lib/ftl/mngt/ftl_mngt_band.o
00:03:33.249    CC lib/ftl/mngt/ftl_mngt_self_test.o
00:03:33.249    CC lib/ftl/mngt/ftl_mngt_p2l.o
00:03:33.507    CC lib/ftl/mngt/ftl_mngt_recovery.o
00:03:33.507    CC lib/ftl/mngt/ftl_mngt_upgrade.o
00:03:33.507    CC lib/iscsi/iscsi_subsystem.o
00:03:33.507    CC lib/iscsi/iscsi_rpc.o
00:03:33.507    CC lib/ftl/utils/ftl_conf.o
00:03:33.507    CC lib/iscsi/task.o
00:03:33.507    CC lib/ftl/utils/ftl_md.o
00:03:33.765    CC lib/ftl/utils/ftl_mempool.o
00:03:33.765    LIB libspdk_nvmf.a
00:03:33.765    CC lib/ftl/utils/ftl_bitmap.o
00:03:33.765    CC lib/ftl/utils/ftl_property.o
00:03:33.765    CC lib/ftl/utils/ftl_layout_tracker_bdev.o
00:03:33.765    CC lib/ftl/upgrade/ftl_layout_upgrade.o
00:03:33.765    LIB libspdk_vhost.a
00:03:33.765    CC lib/ftl/upgrade/ftl_sb_upgrade.o
00:03:33.765    CC lib/ftl/upgrade/ftl_p2l_upgrade.o
00:03:34.023    CC lib/ftl/upgrade/ftl_band_upgrade.o
00:03:34.023    LIB libspdk_iscsi.a
00:03:34.023    CC lib/ftl/upgrade/ftl_chunk_upgrade.o
00:03:34.023    CC lib/ftl/upgrade/ftl_sb_v3.o
00:03:34.023    CC lib/ftl/upgrade/ftl_sb_v5.o
00:03:34.023    CC lib/ftl/nvc/ftl_nvc_dev.o
00:03:34.023    CC lib/ftl/nvc/ftl_nvc_bdev_vss.o
00:03:34.023    CC lib/ftl/base/ftl_base_dev.o
00:03:34.023    CC lib/ftl/base/ftl_base_bdev.o
00:03:34.023    CC lib/ftl/ftl_trace.o
00:03:34.281    LIB libspdk_ftl.a
00:03:34.539    CC module/env_dpdk/env_dpdk_rpc.o
00:03:34.797    CC module/scheduler/dynamic/scheduler_dynamic.o
00:03:34.797    CC module/accel/dsa/accel_dsa.o
00:03:34.797    CC module/blob/bdev/blob_bdev.o
00:03:34.797    CC module/scheduler/dpdk_governor/dpdk_governor.o
00:03:34.797    CC module/scheduler/gscheduler/gscheduler.o
00:03:34.797    CC module/accel/ioat/accel_ioat.o
00:03:34.797    CC module/accel/iaa/accel_iaa.o
00:03:34.797    CC module/sock/posix/posix.o
00:03:34.797    CC module/accel/error/accel_error.o
00:03:34.797    LIB libspdk_env_dpdk_rpc.a
00:03:34.797    CC module/accel/error/accel_error_rpc.o
00:03:34.797    LIB libspdk_scheduler_gscheduler.a
00:03:34.797    LIB libspdk_scheduler_dpdk_governor.a
00:03:34.797    CC module/accel/ioat/accel_ioat_rpc.o
00:03:34.797    CC module/accel/dsa/accel_dsa_rpc.o
00:03:34.797    CC module/accel/iaa/accel_iaa_rpc.o
00:03:34.797    LIB libspdk_scheduler_dynamic.a
00:03:34.797    LIB libspdk_blob_bdev.a
00:03:34.797    LIB libspdk_accel_error.a
00:03:35.055    LIB libspdk_accel_iaa.a
00:03:35.055    LIB libspdk_accel_ioat.a
00:03:35.055    LIB libspdk_accel_dsa.a
00:03:35.055    CC module/bdev/error/vbdev_error.o
00:03:35.055    CC module/bdev/gpt/gpt.o
00:03:35.055    CC module/bdev/delay/vbdev_delay.o
00:03:35.055    CC module/bdev/null/bdev_null.o
00:03:35.055    CC module/bdev/malloc/bdev_malloc.o
00:03:35.055    CC module/bdev/lvol/vbdev_lvol.o
00:03:35.055    CC module/bdev/passthru/vbdev_passthru.o
00:03:35.055    CC module/bdev/nvme/bdev_nvme.o
00:03:35.055    CC module/blobfs/bdev/blobfs_bdev.o
00:03:35.313    CC module/blobfs/bdev/blobfs_bdev_rpc.o
00:03:35.313    CC module/bdev/gpt/vbdev_gpt.o
00:03:35.313    CC module/bdev/null/bdev_null_rpc.o
00:03:35.313    CC module/bdev/error/vbdev_error_rpc.o
00:03:35.313    LIB libspdk_sock_posix.a
00:03:35.313    CC module/bdev/passthru/vbdev_passthru_rpc.o
00:03:35.313    LIB libspdk_blobfs_bdev.a
00:03:35.313    CC module/bdev/delay/vbdev_delay_rpc.o
00:03:35.313    CC module/bdev/malloc/bdev_malloc_rpc.o
00:03:35.571    CC module/bdev/nvme/bdev_nvme_rpc.o
00:03:35.571    CC module/bdev/nvme/nvme_rpc.o
00:03:35.571    LIB libspdk_bdev_error.a
00:03:35.571    LIB libspdk_bdev_null.a
00:03:35.571    LIB libspdk_bdev_gpt.a
00:03:35.571    LIB libspdk_bdev_passthru.a
00:03:35.571    CC module/bdev/nvme/bdev_mdns_client.o
00:03:35.571    CC module/bdev/lvol/vbdev_lvol_rpc.o
00:03:35.571    LIB libspdk_bdev_delay.a
00:03:35.571    LIB libspdk_bdev_malloc.a
00:03:35.571    CC module/bdev/raid/bdev_raid.o
00:03:35.571    CC module/bdev/split/vbdev_split.o
00:03:35.571    CC module/bdev/raid/bdev_raid_rpc.o
00:03:35.571    CC module/bdev/nvme/vbdev_opal.o
00:03:35.571    CC module/bdev/zone_block/vbdev_zone_block.o
00:03:35.571    CC module/bdev/zone_block/vbdev_zone_block_rpc.o
00:03:35.571    CC module/bdev/raid/bdev_raid_sb.o
00:03:35.828    LIB libspdk_bdev_lvol.a
00:03:35.828    CC module/bdev/split/vbdev_split_rpc.o
00:03:35.828    CC module/bdev/raid/raid0.o
00:03:35.828    CC module/bdev/raid/raid1.o
00:03:35.828    CC module/bdev/raid/concat.o
00:03:35.828    CC module/bdev/nvme/vbdev_opal_rpc.o
00:03:35.828    LIB libspdk_bdev_zone_block.a
00:03:36.085    CC module/bdev/aio/bdev_aio.o
00:03:36.085    LIB libspdk_bdev_split.a
00:03:36.085    CC module/bdev/aio/bdev_aio_rpc.o
00:03:36.085    CC module/bdev/nvme/bdev_nvme_cuse_rpc.o
00:03:36.085    CC module/bdev/raid/raid5f.o
00:03:36.085    CC module/bdev/ftl/bdev_ftl.o
00:03:36.085    CC module/bdev/ftl/bdev_ftl_rpc.o
00:03:36.085    CC module/bdev/iscsi/bdev_iscsi.o
00:03:36.085    CC module/bdev/iscsi/bdev_iscsi_rpc.o
00:03:36.085    CC module/bdev/virtio/bdev_virtio_scsi.o
00:03:36.343    CC module/bdev/virtio/bdev_virtio_blk.o
00:03:36.343    LIB libspdk_bdev_aio.a
00:03:36.343    CC module/bdev/virtio/bdev_virtio_rpc.o
00:03:36.343    LIB libspdk_bdev_ftl.a
00:03:36.600    LIB libspdk_bdev_iscsi.a
00:03:36.600    LIB libspdk_bdev_raid.a
00:03:36.600    LIB libspdk_bdev_virtio.a
00:03:37.165    LIB libspdk_bdev_nvme.a
00:03:37.424    CC module/event/subsystems/sock/sock.o
00:03:37.424    CC module/event/subsystems/scheduler/scheduler.o
00:03:37.424    CC module/event/subsystems/vmd/vmd.o
00:03:37.424    CC module/event/subsystems/vmd/vmd_rpc.o
00:03:37.424    CC module/event/subsystems/iobuf/iobuf.o
00:03:37.424    CC module/event/subsystems/iobuf/iobuf_rpc.o
00:03:37.424    CC module/event/subsystems/vhost_blk/vhost_blk.o
00:03:37.682    LIB libspdk_event_iobuf.a
00:03:37.683    LIB libspdk_event_scheduler.a
00:03:37.683    LIB libspdk_event_vhost_blk.a
00:03:37.683    LIB libspdk_event_sock.a
00:03:37.683    LIB libspdk_event_vmd.a
00:03:37.683    CC module/event/subsystems/accel/accel.o
00:03:37.941    LIB libspdk_event_accel.a
00:03:38.199    CC module/event/subsystems/bdev/bdev.o
00:03:38.458    LIB libspdk_event_bdev.a
00:03:38.458    CC module/event/subsystems/nvmf/nvmf_rpc.o
00:03:38.458    CC module/event/subsystems/nvmf/nvmf_tgt.o
00:03:38.458    CC module/event/subsystems/scsi/scsi.o
00:03:38.458    CC module/event/subsystems/nbd/nbd.o
00:03:38.716    LIB libspdk_event_nbd.a
00:03:38.716    LIB libspdk_event_scsi.a
00:03:38.974    LIB libspdk_event_nvmf.a
00:03:38.974    CC module/event/subsystems/iscsi/iscsi.o
00:03:38.974    CC module/event/subsystems/vhost_scsi/vhost_scsi.o
00:03:39.232    LIB libspdk_event_vhost_scsi.a
00:03:39.232    LIB libspdk_event_iscsi.a
00:03:39.490    CXX app/trace/trace.o
00:03:39.490    CC app/trace_record/trace_record.o
00:03:39.490    CC app/spdk_lspci/spdk_lspci.o
00:03:39.490    CC app/iscsi_tgt/iscsi_tgt.o
00:03:39.490    CC app/nvmf_tgt/nvmf_main.o
00:03:39.490    CC examples/accel/perf/accel_perf.o
00:03:39.490    CC app/spdk_tgt/spdk_tgt.o
00:03:39.490    CC examples/blob/hello_world/hello_blob.o
00:03:39.490    CC examples/bdev/hello_world/hello_bdev.o
00:03:39.490    CC test/accel/dif/dif.o
00:03:39.490    LINK spdk_lspci
00:03:39.748    LINK nvmf_tgt
00:03:39.748    LINK iscsi_tgt
00:03:39.748    LINK spdk_trace_record
00:03:39.748    LINK spdk_trace
00:03:39.748    LINK spdk_tgt
00:03:39.748    LINK hello_blob
00:03:39.748    LINK hello_bdev
00:03:40.006    LINK accel_perf
00:03:40.006    LINK dif
00:03:40.263    CC examples/blob/cli/blobcli.o
00:03:40.263    CC examples/ioat/perf/perf.o
00:03:40.521    LINK ioat_perf
00:03:40.778    CC examples/nvme/hello_world/hello_world.o
00:03:40.778    LINK blobcli
00:03:40.778    CC examples/bdev/bdevperf/bdevperf.o
00:03:40.778    LINK hello_world
00:03:41.036    CC examples/ioat/verify/verify.o
00:03:41.036    LINK verify
00:03:41.601    LINK bdevperf
00:03:41.601    CC examples/sock/hello_world/hello_sock.o
00:03:41.859    CC examples/nvme/reconnect/reconnect.o
00:03:41.859    LINK hello_sock
00:03:42.117    LINK reconnect
00:03:42.375    CC app/spdk_nvme_perf/perf.o
00:03:42.632    CC app/spdk_nvme_identify/identify.o
00:03:42.890    CC test/app/bdev_svc/bdev_svc.o
00:03:43.147    LINK bdev_svc
00:03:43.147    CC examples/nvme/nvme_manage/nvme_manage.o
00:03:43.147    CC test/bdev/bdevio/bdevio.o
00:03:43.147    CC examples/nvme/arbitration/arbitration.o
00:03:43.147    LINK spdk_nvme_perf
00:03:43.405    LINK spdk_nvme_identify
00:03:43.405    CC examples/nvme/hotplug/hotplug.o
00:03:43.405    LINK bdevio
00:03:43.663    LINK arbitration
00:03:43.663    LINK nvme_manage
00:03:43.663    LINK hotplug
00:03:43.921    CC examples/vmd/lsvmd/lsvmd.o
00:03:43.921    LINK lsvmd
00:03:44.219    CC examples/nvmf/nvmf/nvmf.o
00:03:44.485    CC examples/vmd/led/led.o
00:03:44.485    CC app/spdk_nvme_discover/discovery_aer.o
00:03:44.485    CC examples/util/zipf/zipf.o
00:03:44.485    LINK led
00:03:44.485    LINK nvmf
00:03:44.743    LINK zipf
00:03:44.743    LINK spdk_nvme_discover
00:03:44.743    CC examples/thread/thread/thread_ex.o
00:03:44.743    CC examples/idxd/perf/perf.o
00:03:45.001    CC examples/nvme/cmb_copy/cmb_copy.o
00:03:45.001    LINK thread
00:03:45.001    CC examples/interrupt_tgt/interrupt_tgt.o
00:03:45.001    CC examples/nvme/abort/abort.o
00:03:45.259    LINK cmb_copy
00:03:45.259    LINK idxd_perf
00:03:45.259    LINK interrupt_tgt
00:03:45.518    LINK abort
00:03:45.776    CC app/spdk_top/spdk_top.o
00:03:45.776    CC examples/nvme/pmr_persistence/pmr_persistence.o
00:03:46.035    CC test/blobfs/mkfs/mkfs.o
00:03:46.035    LINK pmr_persistence
00:03:46.035    LINK mkfs
00:03:46.294    TEST_HEADER include/spdk/accel.h
00:03:46.294    TEST_HEADER include/spdk/accel_module.h
00:03:46.294    TEST_HEADER include/spdk/assert.h
00:03:46.294    TEST_HEADER include/spdk/barrier.h
00:03:46.294    TEST_HEADER include/spdk/base64.h
00:03:46.294    TEST_HEADER include/spdk/bdev.h
00:03:46.294    TEST_HEADER include/spdk/bdev_module.h
00:03:46.294    TEST_HEADER include/spdk/bdev_zone.h
00:03:46.294    TEST_HEADER include/spdk/bit_array.h
00:03:46.294    TEST_HEADER include/spdk/bit_pool.h
00:03:46.294    TEST_HEADER include/spdk/blob.h
00:03:46.294    TEST_HEADER include/spdk/blob_bdev.h
00:03:46.294    TEST_HEADER include/spdk/blobfs.h
00:03:46.294    TEST_HEADER include/spdk/blobfs_bdev.h
00:03:46.294    TEST_HEADER include/spdk/conf.h
00:03:46.294    TEST_HEADER include/spdk/config.h
00:03:46.294    TEST_HEADER include/spdk/cpuset.h
00:03:46.294    TEST_HEADER include/spdk/crc16.h
00:03:46.294    TEST_HEADER include/spdk/crc32.h
00:03:46.294    TEST_HEADER include/spdk/crc64.h
00:03:46.294    TEST_HEADER include/spdk/dif.h
00:03:46.294    TEST_HEADER include/spdk/dma.h
00:03:46.294    TEST_HEADER include/spdk/endian.h
00:03:46.294    TEST_HEADER include/spdk/env.h
00:03:46.294    TEST_HEADER include/spdk/env_dpdk.h
00:03:46.294    TEST_HEADER include/spdk/event.h
00:03:46.294    TEST_HEADER include/spdk/fd.h
00:03:46.294    TEST_HEADER include/spdk/fd_group.h
00:03:46.294    TEST_HEADER include/spdk/file.h
00:03:46.294    TEST_HEADER include/spdk/ftl.h
00:03:46.294    TEST_HEADER include/spdk/gpt_spec.h
00:03:46.294    TEST_HEADER include/spdk/hexlify.h
00:03:46.294    TEST_HEADER include/spdk/histogram_data.h
00:03:46.294    TEST_HEADER include/spdk/idxd.h
00:03:46.294    TEST_HEADER include/spdk/idxd_spec.h
00:03:46.294    TEST_HEADER include/spdk/init.h
00:03:46.294    TEST_HEADER include/spdk/ioat.h
00:03:46.294    TEST_HEADER include/spdk/ioat_spec.h
00:03:46.294    TEST_HEADER include/spdk/iscsi_spec.h
00:03:46.294    TEST_HEADER include/spdk/json.h
00:03:46.294    TEST_HEADER include/spdk/jsonrpc.h
00:03:46.294    TEST_HEADER include/spdk/likely.h
00:03:46.294    TEST_HEADER include/spdk/log.h
00:03:46.294    TEST_HEADER include/spdk/lvol.h
00:03:46.294    TEST_HEADER include/spdk/memory.h
00:03:46.294    TEST_HEADER include/spdk/mmio.h
00:03:46.294    TEST_HEADER include/spdk/nbd.h
00:03:46.294    TEST_HEADER include/spdk/notify.h
00:03:46.294    TEST_HEADER include/spdk/nvme.h
00:03:46.294    TEST_HEADER include/spdk/nvme_intel.h
00:03:46.294    TEST_HEADER include/spdk/nvme_ocssd.h
00:03:46.294    TEST_HEADER include/spdk/nvme_ocssd_spec.h
00:03:46.294    TEST_HEADER include/spdk/nvme_spec.h
00:03:46.294    TEST_HEADER include/spdk/nvme_zns.h
00:03:46.294    TEST_HEADER include/spdk/nvmf.h
00:03:46.294    TEST_HEADER include/spdk/nvmf_cmd.h
00:03:46.294    TEST_HEADER include/spdk/nvmf_fc_spec.h
00:03:46.294    TEST_HEADER include/spdk/nvmf_spec.h
00:03:46.294    TEST_HEADER include/spdk/nvmf_transport.h
00:03:46.294    TEST_HEADER include/spdk/opal.h
00:03:46.294    TEST_HEADER include/spdk/opal_spec.h
00:03:46.294    TEST_HEADER include/spdk/pci_ids.h
00:03:46.294    TEST_HEADER include/spdk/pipe.h
00:03:46.294    TEST_HEADER include/spdk/queue.h
00:03:46.294    TEST_HEADER include/spdk/reduce.h
00:03:46.294    TEST_HEADER include/spdk/rpc.h
00:03:46.294    TEST_HEADER include/spdk/scheduler.h
00:03:46.294    TEST_HEADER include/spdk/scsi.h
00:03:46.294    TEST_HEADER include/spdk/scsi_spec.h
00:03:46.294    TEST_HEADER include/spdk/sock.h
00:03:46.294    TEST_HEADER include/spdk/stdinc.h
00:03:46.294    TEST_HEADER include/spdk/string.h
00:03:46.294    TEST_HEADER include/spdk/thread.h
00:03:46.294    TEST_HEADER include/spdk/trace.h
00:03:46.294    TEST_HEADER include/spdk/trace_parser.h
00:03:46.294    TEST_HEADER include/spdk/tree.h
00:03:46.294    TEST_HEADER include/spdk/ublk.h
00:03:46.294    TEST_HEADER include/spdk/util.h
00:03:46.294    TEST_HEADER include/spdk/uuid.h
00:03:46.294    TEST_HEADER include/spdk/version.h
00:03:46.294    TEST_HEADER include/spdk/vfio_user_pci.h
00:03:46.294    TEST_HEADER include/spdk/vfio_user_spec.h
00:03:46.294    TEST_HEADER include/spdk/vhost.h
00:03:46.294    TEST_HEADER include/spdk/vmd.h
00:03:46.294    TEST_HEADER include/spdk/xor.h
00:03:46.294    TEST_HEADER include/spdk/zipf.h
00:03:46.294    CXX test/cpp_headers/accel.o
00:03:46.551    CXX test/cpp_headers/accel_module.o
00:03:46.551    CC test/dma/test_dma/test_dma.o
00:03:46.551    LINK spdk_top
00:03:46.551    CXX test/cpp_headers/assert.o
00:03:46.551    CXX test/cpp_headers/barrier.o
00:03:46.810    CC test/env/mem_callbacks/mem_callbacks.o
00:03:46.810    CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o
00:03:46.810    CXX test/cpp_headers/base64.o
00:03:46.810    LINK test_dma
00:03:46.810    LINK mem_callbacks
00:03:47.068    CC test/env/vtophys/vtophys.o
00:03:47.068    CXX test/cpp_headers/bdev.o
00:03:47.068    CC app/vhost/vhost.o
00:03:47.326    CXX test/cpp_headers/bdev_module.o
00:03:47.326    LINK vtophys
00:03:47.326    CC app/spdk_dd/spdk_dd.o
00:03:47.326    LINK nvme_fuzz
00:03:47.326    LINK vhost
00:03:47.326    CC app/fio/nvme/fio_plugin.o
00:03:47.326    CXX test/cpp_headers/bdev_zone.o
00:03:47.584    CXX test/cpp_headers/bit_array.o
00:03:47.584    LINK spdk_dd
00:03:47.842    CXX test/cpp_headers/bit_pool.o
00:03:47.842    CC test/event/event_perf/event_perf.o
00:03:47.842    CXX test/cpp_headers/blob.o
00:03:47.842    LINK spdk_nvme
00:03:47.842    LINK event_perf
00:03:48.100    CC test/env/env_dpdk_post_init/env_dpdk_post_init.o
00:03:48.100    CXX test/cpp_headers/blob_bdev.o
00:03:48.100    CC test/lvol/esnap/esnap.o
00:03:48.667    LINK env_dpdk_post_init
00:03:48.667    CXX test/cpp_headers/blobfs.o
00:03:48.667    CXX test/cpp_headers/blobfs_bdev.o
00:03:48.667    CC test/event/reactor/reactor.o
00:03:48.667    CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o
00:03:48.925    CXX test/cpp_headers/conf.o
00:03:48.925    LINK reactor
00:03:48.925    CXX test/cpp_headers/config.o
00:03:48.925    CXX test/cpp_headers/cpuset.o
00:03:49.183    CC app/fio/bdev/fio_plugin.o
00:03:49.183    CXX test/cpp_headers/crc16.o
00:03:49.750    CXX test/cpp_headers/crc32.o
00:03:49.750    LINK spdk_bdev
00:03:49.750    CC test/event/reactor_perf/reactor_perf.o
00:03:49.750    CXX test/cpp_headers/crc64.o
00:03:49.750    CC test/event/app_repeat/app_repeat.o
00:03:50.008    LINK reactor_perf
00:03:50.008    CXX test/cpp_headers/dif.o
00:03:50.008    CC test/env/memory/memory_ut.o
00:03:50.008    LINK app_repeat
00:03:50.008    CC test/event/scheduler/scheduler.o
00:03:50.008    CXX test/cpp_headers/dma.o
00:03:50.267    CXX test/cpp_headers/endian.o
00:03:50.267    LINK scheduler
00:03:50.526    CXX test/cpp_headers/env.o
00:03:50.526    LINK iscsi_fuzz
00:03:50.526    LINK memory_ut
00:03:50.526    CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o
00:03:50.526    CXX test/cpp_headers/env_dpdk.o
00:03:50.526    CXX test/cpp_headers/event.o
00:03:50.785    CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o
00:03:50.785    CC test/env/pci/pci_ut.o
00:03:50.785    CXX test/cpp_headers/fd.o
00:03:50.785    CC test/app/histogram_perf/histogram_perf.o
00:03:50.785    CXX test/cpp_headers/fd_group.o
00:03:51.045    LINK histogram_perf
00:03:51.045    LINK pci_ut
00:03:51.045    LINK vhost_fuzz
00:03:51.045    CXX test/cpp_headers/file.o
00:03:51.304    CXX test/cpp_headers/ftl.o
00:03:51.304    CXX test/cpp_headers/gpt_spec.o
00:03:51.304    CC test/app/jsoncat/jsoncat.o
00:03:51.304    CXX test/cpp_headers/hexlify.o
00:03:51.304    CC test/app/stub/stub.o
00:03:51.563    LINK jsoncat
00:03:51.563    CC test/nvme/aer/aer.o
00:03:51.563    CC test/nvme/reset/reset.o
00:03:51.563    CXX test/cpp_headers/histogram_data.o
00:03:51.563    LINK stub
00:03:51.563    CXX test/cpp_headers/idxd.o
00:03:51.822    LINK reset
00:03:51.822    LINK aer
00:03:51.822    CXX test/cpp_headers/idxd_spec.o
00:03:51.822    CC test/nvme/e2edp/nvme_dp.o
00:03:51.822    CC test/nvme/sgl/sgl.o
00:03:52.080    CXX test/cpp_headers/init.o
00:03:52.080    LINK nvme_dp
00:03:52.080    LINK sgl
00:03:52.080    CXX test/cpp_headers/ioat.o
00:03:52.080    CXX test/cpp_headers/ioat_spec.o
00:03:52.340    CXX test/cpp_headers/iscsi_spec.o
00:03:52.340    CC test/nvme/overhead/overhead.o
00:03:52.598    CXX test/cpp_headers/json.o
00:03:52.857    CXX test/cpp_headers/jsonrpc.o
00:03:52.857    LINK overhead
00:03:52.857    CC test/nvme/err_injection/err_injection.o
00:03:52.857    CC test/nvme/startup/startup.o
00:03:52.857    CXX test/cpp_headers/likely.o
00:03:52.857    CXX test/cpp_headers/log.o
00:03:52.857    LINK esnap
00:03:53.115    CXX test/cpp_headers/lvol.o
00:03:53.115    LINK startup
00:03:53.115    LINK err_injection
00:03:53.115    CC test/rpc_client/rpc_client_test.o
00:03:53.115    CC test/nvme/connect_stress/connect_stress.o
00:03:53.115    CC test/nvme/reserve/reserve.o
00:03:53.115    CC test/nvme/simple_copy/simple_copy.o
00:03:53.373    CC test/nvme/boot_partition/boot_partition.o
00:03:53.373    LINK connect_stress
00:03:53.373    CXX test/cpp_headers/memory.o
00:03:53.373    LINK rpc_client_test
00:03:53.373    LINK reserve
00:03:53.373    LINK simple_copy
00:03:53.632    LINK boot_partition
00:03:53.632    CC test/thread/poller_perf/poller_perf.o
00:03:53.632    CXX test/cpp_headers/mmio.o
00:03:53.632    LINK poller_perf
00:03:53.632    CXX test/cpp_headers/nbd.o
00:03:53.632    CXX test/cpp_headers/notify.o
00:03:53.891    CC test/thread/lock/spdk_lock.o
00:03:53.891    CXX test/cpp_headers/nvme.o
00:03:54.151    CC test/unit/include/spdk/histogram_data.h/histogram_ut.o
00:03:54.151    CXX test/cpp_headers/nvme_intel.o
00:03:54.151    CC test/unit/lib/bdev/bdev.c/bdev_ut.o
00:03:54.151    CC test/unit/lib/accel/accel.c/accel_ut.o
00:03:54.151    LINK histogram_ut
00:03:54.409    CXX test/cpp_headers/nvme_ocssd.o
00:03:54.409    CC test/unit/lib/bdev/part.c/part_ut.o
00:03:54.409    CC test/unit/lib/bdev/scsi_nvme.c/scsi_nvme_ut.o
00:03:54.409    CC test/nvme/compliance/nvme_compliance.o
00:03:54.409    CXX test/cpp_headers/nvme_ocssd_spec.o
00:03:54.409    CXX test/cpp_headers/nvme_spec.o
00:03:54.409    CC test/unit/lib/bdev/gpt/gpt.c/gpt_ut.o
00:03:54.668    CC test/nvme/fused_ordering/fused_ordering.o
00:03:54.668    CXX test/cpp_headers/nvme_zns.o
00:03:54.668    CC test/nvme/doorbell_aers/doorbell_aers.o
00:03:54.668    LINK scsi_nvme_ut
00:03:54.926    LINK nvme_compliance
00:03:54.926    CXX test/cpp_headers/nvmf.o
00:03:54.926    LINK fused_ordering
00:03:54.926    LINK gpt_ut
00:03:54.926    CC test/unit/lib/bdev/vbdev_lvol.c/vbdev_lvol_ut.o
00:03:55.184    LINK doorbell_aers
00:03:55.184    CXX test/cpp_headers/nvmf_cmd.o
00:03:55.184    CC test/nvme/fdp/fdp.o
00:03:55.442    CXX test/cpp_headers/nvmf_fc_spec.o
00:03:55.707    LINK spdk_lock
00:03:55.707    LINK fdp
00:03:55.707    CXX test/cpp_headers/nvmf_spec.o
00:03:56.008    CXX test/cpp_headers/nvmf_transport.o
00:03:56.008    CXX test/cpp_headers/opal.o
00:03:56.008    LINK vbdev_lvol_ut
00:03:56.008    CXX test/cpp_headers/opal_spec.o
00:03:56.008    CC test/unit/lib/bdev/mt/bdev.c/bdev_ut.o
00:03:56.008    CXX test/cpp_headers/pci_ids.o
00:03:56.289    CXX test/cpp_headers/pipe.o
00:03:56.289    CC test/unit/lib/bdev/vbdev_zone_block.c/vbdev_zone_block_ut.o
00:03:56.289    CXX test/cpp_headers/queue.o
00:03:56.289    CC test/unit/lib/bdev/bdev_zone.c/bdev_zone_ut.o
00:03:56.289    CC test/unit/lib/bdev/raid/bdev_raid.c/bdev_raid_ut.o
00:03:56.289    CXX test/cpp_headers/reduce.o
00:03:56.289    CXX test/cpp_headers/rpc.o
00:03:56.289    LINK accel_ut
00:03:56.548    CXX test/cpp_headers/scheduler.o
00:03:56.548    CC test/unit/lib/bdev/nvme/bdev_nvme.c/bdev_nvme_ut.o
00:03:56.548    LINK bdev_zone_ut
00:03:56.548    CC test/nvme/cuse/cuse.o
00:03:56.548    CXX test/cpp_headers/scsi.o
00:03:56.807    CXX test/cpp_headers/scsi_spec.o
00:03:56.807    CXX test/cpp_headers/sock.o
00:03:56.807    CXX test/cpp_headers/stdinc.o
00:03:56.807    CXX test/cpp_headers/string.o
00:03:56.807    LINK vbdev_zone_block_ut
00:03:56.807    CXX test/cpp_headers/thread.o
00:03:57.067    CXX test/cpp_headers/trace.o
00:03:57.067    CC test/unit/lib/blob/blob_bdev.c/blob_bdev_ut.o
00:03:57.067    CC test/unit/lib/blob/blob.c/blob_ut.o
00:03:57.067    CXX test/cpp_headers/trace_parser.o
00:03:57.067    CXX test/cpp_headers/tree.o
00:03:57.326    CC test/unit/lib/bdev/raid/bdev_raid_sb.c/bdev_raid_sb_ut.o
00:03:57.326    CXX test/cpp_headers/ublk.o
00:03:57.326    CXX test/cpp_headers/util.o
00:03:57.326    LINK cuse
00:03:57.326    LINK part_ut
00:03:57.585    LINK blob_bdev_ut
00:03:57.585    CXX test/cpp_headers/uuid.o
00:03:57.585    LINK bdev_raid_sb_ut
00:03:57.585    CXX test/cpp_headers/version.o
00:03:57.585    CC test/unit/lib/bdev/raid/concat.c/concat_ut.o
00:03:57.585    CXX test/cpp_headers/vfio_user_pci.o
00:03:57.844    CXX test/cpp_headers/vfio_user_spec.o
00:03:57.844    CXX test/cpp_headers/vhost.o
00:03:57.844    CXX test/cpp_headers/vmd.o
00:03:57.844    CC test/unit/lib/bdev/raid/raid1.c/raid1_ut.o
00:03:57.844    CC test/unit/lib/bdev/raid/raid5f.c/raid5f_ut.o
00:03:57.844    CXX test/cpp_headers/xor.o
00:03:58.103    CXX test/cpp_headers/zipf.o
00:03:58.103    LINK bdev_raid_ut
00:03:58.103    LINK concat_ut
00:03:58.103    CC test/unit/lib/blobfs/tree.c/tree_ut.o
00:03:58.103    CC test/unit/lib/blobfs/blobfs_async_ut/blobfs_async_ut.o
00:03:58.362    CC test/unit/lib/blobfs/blobfs_sync_ut/blobfs_sync_ut.o
00:03:58.362    CC test/unit/lib/blobfs/blobfs_bdev.c/blobfs_bdev_ut.o
00:03:58.362    LINK raid1_ut
00:03:58.362    LINK tree_ut
00:03:58.621    LINK blobfs_bdev_ut
00:03:58.621    CC test/unit/lib/dma/dma.c/dma_ut.o
00:03:58.621    CC test/unit/lib/event/app.c/app_ut.o
00:03:58.880    LINK raid5f_ut
00:03:58.880    CC test/unit/lib/event/reactor.c/reactor_ut.o
00:03:58.880    LINK bdev_ut
00:03:59.139    LINK dma_ut
00:03:59.139    LINK bdev_ut
00:03:59.139    CC test/unit/lib/ioat/ioat.c/ioat_ut.o
00:03:59.139    LINK app_ut
00:03:59.399    CC test/unit/lib/iscsi/conn.c/conn_ut.o
00:03:59.399    LINK blobfs_async_ut
00:03:59.399    CC test/unit/lib/iscsi/init_grp.c/init_grp_ut.o
00:03:59.399    CC test/unit/lib/iscsi/iscsi.c/iscsi_ut.o
00:03:59.399    LINK blobfs_sync_ut
00:03:59.399    LINK reactor_ut
00:03:59.399    LINK ioat_ut
00:03:59.658    CC test/unit/lib/iscsi/param.c/param_ut.o
00:03:59.658    CC test/unit/lib/iscsi/portal_grp.c/portal_grp_ut.o
00:03:59.658    LINK init_grp_ut
00:03:59.658    CC test/unit/lib/json/json_parse.c/json_parse_ut.o
00:03:59.917    CC test/unit/lib/json/json_util.c/json_util_ut.o
00:03:59.917    CC test/unit/lib/json/json_write.c/json_write_ut.o
00:03:59.917    CC test/unit/lib/iscsi/tgt_node.c/tgt_node_ut.o
00:03:59.917    LINK param_ut
00:04:00.176    LINK conn_ut
00:04:00.176    LINK portal_grp_ut
00:04:00.176    CC test/unit/lib/jsonrpc/jsonrpc_server.c/jsonrpc_server_ut.o
00:04:00.176    LINK json_util_ut
00:04:00.435    LINK bdev_nvme_ut
00:04:00.435    CC test/unit/lib/log/log.c/log_ut.o
00:04:00.435    CC test/unit/lib/lvol/lvol.c/lvol_ut.o
00:04:00.695    LINK jsonrpc_server_ut
00:04:00.695    LINK json_write_ut
00:04:00.695    CC test/unit/lib/notify/notify.c/notify_ut.o
00:04:00.695    LINK tgt_node_ut
00:04:00.695    LINK log_ut
00:04:00.695    CC test/unit/lib/nvme/nvme.c/nvme_ut.o
00:04:00.695    CC test/unit/lib/nvmf/tcp.c/tcp_ut.o
00:04:00.954    CC test/unit/lib/nvmf/ctrlr.c/ctrlr_ut.o
00:04:00.954    LINK notify_ut
00:04:00.954    CC test/unit/lib/nvmf/subsystem.c/subsystem_ut.o
00:04:00.954    CC test/unit/lib/scsi/dev.c/dev_ut.o
00:04:00.954    CC test/unit/lib/scsi/lun.c/lun_ut.o
00:04:01.525    LINK iscsi_ut
00:04:01.525    LINK dev_ut
00:04:01.525    LINK lun_ut
00:04:01.785    CC test/unit/lib/nvmf/ctrlr_discovery.c/ctrlr_discovery_ut.o
00:04:01.785    CC test/unit/lib/nvmf/ctrlr_bdev.c/ctrlr_bdev_ut.o
00:04:01.785    LINK nvme_ut
00:04:01.785    CC test/unit/lib/scsi/scsi.c/scsi_ut.o
00:04:02.045    CC test/unit/lib/nvme/nvme_ctrlr.c/nvme_ctrlr_ut.o
00:04:02.045    LINK scsi_ut
00:04:02.045    LINK lvol_ut
00:04:02.045    LINK json_parse_ut
00:04:02.304    CC test/unit/lib/scsi/scsi_bdev.c/scsi_bdev_ut.o
00:04:02.562    CC test/unit/lib/scsi/scsi_pr.c/scsi_pr_ut.o
00:04:02.562    CC test/unit/lib/sock/sock.c/sock_ut.o
00:04:02.562    LINK ctrlr_bdev_ut
00:04:02.822    LINK subsystem_ut
00:04:02.822    LINK scsi_pr_ut
00:04:02.822    CC test/unit/lib/nvmf/nvmf.c/nvmf_ut.o
00:04:03.082    LINK scsi_bdev_ut
00:04:03.082    LINK blob_ut
00:04:03.082    CC test/unit/lib/nvmf/rdma.c/rdma_ut.o
00:04:03.082    CC test/unit/lib/nvmf/transport.c/transport_ut.o
00:04:03.082    LINK ctrlr_discovery_ut
00:04:03.082    CC test/unit/lib/sock/posix.c/posix_ut.o
00:04:03.341    CC test/unit/lib/thread/thread.c/thread_ut.o
00:04:03.341    CC test/unit/lib/thread/iobuf.c/iobuf_ut.o
00:04:03.601    LINK ctrlr_ut
00:04:03.860    LINK sock_ut
00:04:03.860    CC test/unit/lib/util/base64.c/base64_ut.o
00:04:04.119    LINK nvmf_ut
00:04:04.119    LINK posix_ut
00:04:04.119    LINK iobuf_ut
00:04:04.119    CC test/unit/lib/util/bit_array.c/bit_array_ut.o
00:04:04.119    LINK base64_ut
00:04:04.379    CC test/unit/lib/util/cpuset.c/cpuset_ut.o
00:04:04.379    CC test/unit/lib/util/crc16.c/crc16_ut.o
00:04:04.379    CC test/unit/lib/util/crc32_ieee.c/crc32_ieee_ut.o
00:04:04.379    CC test/unit/lib/env_dpdk/pci_event.c/pci_event_ut.o
00:04:04.379    LINK bit_array_ut
00:04:04.379    LINK crc32_ieee_ut
00:04:04.379    LINK cpuset_ut
00:04:04.638    LINK crc16_ut
00:04:04.638    LINK nvme_ctrlr_ut
00:04:04.638    LINK pci_event_ut
00:04:04.638    LINK tcp_ut
00:04:04.639    CC test/unit/lib/util/crc32c.c/crc32c_ut.o
00:04:04.898    CC test/unit/lib/util/crc64.c/crc64_ut.o
00:04:04.898    CC test/unit/lib/util/dif.c/dif_ut.o
00:04:04.898    CC test/unit/lib/nvme/nvme_ctrlr_cmd.c/nvme_ctrlr_cmd_ut.o
00:04:04.898    CC test/unit/lib/util/iov.c/iov_ut.o
00:04:04.898    LINK crc32c_ut
00:04:04.899    LINK crc64_ut
00:04:05.158    CC test/unit/lib/init/subsystem.c/subsystem_ut.o
00:04:05.158    CC test/unit/lib/util/math.c/math_ut.o
00:04:05.158    LINK thread_ut
00:04:05.158    CC test/unit/lib/util/pipe.c/pipe_ut.o
00:04:05.158    LINK iov_ut
00:04:05.158    CC test/unit/lib/util/string.c/string_ut.o
00:04:05.158    LINK math_ut
00:04:05.418    CC test/unit/lib/util/xor.c/xor_ut.o
00:04:05.418    LINK subsystem_ut
00:04:05.418    LINK pipe_ut
00:04:05.418    CC test/unit/lib/nvme/nvme_ctrlr_ocssd_cmd.c/nvme_ctrlr_ocssd_cmd_ut.o
00:04:05.418    LINK string_ut
00:04:05.418    CC test/unit/lib/rpc/rpc.c/rpc_ut.o
00:04:05.677    LINK xor_ut
00:04:05.677    CC test/unit/lib/nvme/nvme_ns.c/nvme_ns_ut.o
00:04:05.677    CC test/unit/lib/idxd/idxd_user.c/idxd_user_ut.o
00:04:05.677    CC test/unit/lib/idxd/idxd.c/idxd_ut.o
00:04:05.677    LINK dif_ut
00:04:05.936    LINK rpc_ut
00:04:05.936    CC test/unit/lib/vhost/vhost.c/vhost_ut.o
00:04:05.936    CC test/unit/lib/nvme/nvme_ns_cmd.c/nvme_ns_cmd_ut.o
00:04:05.936    LINK nvme_ctrlr_cmd_ut
00:04:06.195    LINK transport_ut
00:04:06.195    LINK rdma_ut
00:04:06.195    CC test/unit/lib/nvme/nvme_ns_ocssd_cmd.c/nvme_ns_ocssd_cmd_ut.o
00:04:06.195    LINK idxd_user_ut
00:04:06.195    LINK nvme_ctrlr_ocssd_cmd_ut
00:04:06.453    CC test/unit/lib/nvme/nvme_pcie.c/nvme_pcie_ut.o
00:04:06.453    LINK nvme_ns_ut
00:04:06.453    LINK idxd_ut
00:04:06.453    CC test/unit/lib/rdma/common.c/common_ut.o
00:04:06.453    CC test/unit/lib/nvme/nvme_poll_group.c/nvme_poll_group_ut.o
00:04:06.453    CC test/unit/lib/ftl/ftl_l2p/ftl_l2p_ut.o
00:04:06.711    CC test/unit/lib/nvme/nvme_qpair.c/nvme_qpair_ut.o
00:04:06.711    CC test/unit/lib/nvme/nvme_quirks.c/nvme_quirks_ut.o
00:04:06.711    CC test/unit/lib/ftl/ftl_band.c/ftl_band_ut.o
00:04:06.969    LINK ftl_l2p_ut
00:04:06.970    LINK common_ut
00:04:06.970    LINK nvme_quirks_ut
00:04:07.228    CC test/unit/lib/ftl/ftl_io.c/ftl_io_ut.o
00:04:07.228    CC test/unit/lib/ftl/ftl_bitmap.c/ftl_bitmap_ut.o
00:04:07.228    LINK nvme_poll_group_ut
00:04:07.228    CC test/unit/lib/nvme/nvme_tcp.c/nvme_tcp_ut.o
00:04:07.486    LINK ftl_bitmap_ut
00:04:07.486    LINK nvme_ns_ocssd_cmd_ut
00:04:07.486    CC test/unit/lib/nvme/nvme_transport.c/nvme_transport_ut.o
00:04:07.744    LINK ftl_io_ut
00:04:07.744    LINK nvme_ns_cmd_ut
00:04:07.744    LINK nvme_qpair_ut
00:04:08.003    CC test/unit/lib/nvme/nvme_io_msg.c/nvme_io_msg_ut.o
00:04:08.003    CC test/unit/lib/ftl/ftl_mempool.c/ftl_mempool_ut.o
00:04:08.003    LINK nvme_pcie_ut
00:04:08.003    CC test/unit/lib/ftl/ftl_mngt/ftl_mngt_ut.o
00:04:08.003    LINK ftl_band_ut
00:04:08.003    LINK vhost_ut
00:04:08.003    CC test/unit/lib/nvme/nvme_pcie_common.c/nvme_pcie_common_ut.o
00:04:08.262    CC test/unit/lib/nvme/nvme_fabric.c/nvme_fabric_ut.o
00:04:08.262    LINK ftl_mempool_ut
00:04:08.262    CC test/unit/lib/ftl/ftl_sb/ftl_sb_ut.o
00:04:08.262    CC test/unit/lib/nvme/nvme_opal.c/nvme_opal_ut.o
00:04:08.520    CC test/unit/lib/ftl/ftl_layout_upgrade/ftl_layout_upgrade_ut.o
00:04:08.520    LINK nvme_transport_ut
00:04:08.520    LINK nvme_io_msg_ut
00:04:08.520    LINK ftl_mngt_ut
00:04:08.520    CC test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut.o
00:04:08.778    CC test/unit/lib/nvme/nvme_cuse.c/nvme_cuse_ut.o
00:04:09.036    LINK nvme_fabric_ut
00:04:09.036    LINK nvme_opal_ut
00:04:09.294    LINK nvme_pcie_common_ut
00:04:09.861    LINK ftl_layout_upgrade_ut
00:04:09.861    LINK ftl_sb_ut
00:04:09.861    LINK nvme_tcp_ut
00:04:10.119    LINK nvme_cuse_ut
00:04:10.686    LINK nvme_rdma_ut
00:04:10.686  
00:04:10.686  real	1m36.165s
00:04:10.686  user	7m33.964s
00:04:10.686  sys	1m49.817s
00:04:10.686  ************************************
00:04:10.686  END TEST unittest_build
00:04:10.686  ************************************
00:04:10.686   16:48:03	-- common/autotest_common.sh@1115 -- $ xtrace_disable
00:04:10.686   16:48:03	-- common/autotest_common.sh@10 -- $ set +x
00:04:10.945    16:48:03	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:04:10.945     16:48:03	-- common/autotest_common.sh@1690 -- # lcov --version
00:04:10.945     16:48:03	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:04:10.945    16:48:03	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:04:10.945    16:48:03	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:04:10.945    16:48:03	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:04:10.945    16:48:03	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:04:10.945    16:48:03	-- scripts/common.sh@335 -- # IFS=.-:
00:04:10.945    16:48:03	-- scripts/common.sh@335 -- # read -ra ver1
00:04:10.945    16:48:03	-- scripts/common.sh@336 -- # IFS=.-:
00:04:10.945    16:48:03	-- scripts/common.sh@336 -- # read -ra ver2
00:04:10.945    16:48:03	-- scripts/common.sh@337 -- # local 'op=<'
00:04:10.945    16:48:03	-- scripts/common.sh@339 -- # ver1_l=2
00:04:10.945    16:48:03	-- scripts/common.sh@340 -- # ver2_l=1
00:04:10.945    16:48:03	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:04:10.945    16:48:03	-- scripts/common.sh@343 -- # case "$op" in
00:04:10.945    16:48:03	-- scripts/common.sh@344 -- # : 1
00:04:10.945    16:48:03	-- scripts/common.sh@363 -- # (( v = 0 ))
00:04:10.945    16:48:03	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:04:10.945     16:48:03	-- scripts/common.sh@364 -- # decimal 1
00:04:10.945     16:48:03	-- scripts/common.sh@352 -- # local d=1
00:04:10.945     16:48:03	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:04:10.945     16:48:03	-- scripts/common.sh@354 -- # echo 1
00:04:10.945    16:48:03	-- scripts/common.sh@364 -- # ver1[v]=1
00:04:10.945     16:48:03	-- scripts/common.sh@365 -- # decimal 2
00:04:10.945     16:48:03	-- scripts/common.sh@352 -- # local d=2
00:04:10.945     16:48:03	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:04:10.945     16:48:03	-- scripts/common.sh@354 -- # echo 2
00:04:10.945    16:48:03	-- scripts/common.sh@365 -- # ver2[v]=2
00:04:10.945    16:48:03	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:04:10.945    16:48:03	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:04:10.945    16:48:03	-- scripts/common.sh@367 -- # return 0
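The scan above is autotest's version gate: lt 1.15 2 splits both version strings on ".", "-" and ":" and compares components numerically, left to right. A minimal standalone sketch of the same comparison (the function name and numeric-only handling are assumptions, not the exact scripts/common.sh source):

    #!/usr/bin/env bash
    # Return 0 when $1 sorts before $2, comparing numeric components.
    version_lt() {
        local -a v1 v2
        IFS='.-:' read -ra v1 <<< "$1"
        IFS='.-:' read -ra v2 <<< "$2"
        local i len=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for (( i = 0; i < len; i++ )); do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0   # first difference decides
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1   # equal versions are not "less than"
    }
    version_lt 1.15 2 && echo "1.15 < 2"

Here 1 < 2 already decides at the first component, which is why the trace assigns ver1[v]=1 and ver2[v]=2 and returns on the first pass.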
00:04:10.945    16:48:03	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:04:10.945    16:48:03	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:04:10.945  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:10.945  		--rc genhtml_branch_coverage=1
00:04:10.945  		--rc genhtml_function_coverage=1
00:04:10.945  		--rc genhtml_legend=1
00:04:10.945  		--rc geninfo_all_blocks=1
00:04:10.945  		--rc geninfo_unexecuted_blocks=1
00:04:10.945  		
00:04:10.945  		'
00:04:10.945    16:48:03	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:04:10.945  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:10.945  		--rc genhtml_branch_coverage=1
00:04:10.945  		--rc genhtml_function_coverage=1
00:04:10.945  		--rc genhtml_legend=1
00:04:10.945  		--rc geninfo_all_blocks=1
00:04:10.945  		--rc geninfo_unexecuted_blocks=1
00:04:10.945  		
00:04:10.945  		'
00:04:10.945    16:48:03	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:04:10.945  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:10.945  		--rc genhtml_branch_coverage=1
00:04:10.945  		--rc genhtml_function_coverage=1
00:04:10.945  		--rc genhtml_legend=1
00:04:10.945  		--rc geninfo_all_blocks=1
00:04:10.945  		--rc geninfo_unexecuted_blocks=1
00:04:10.945  		
00:04:10.945  		'
00:04:10.945    16:48:03	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:04:10.945  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:10.945  		--rc genhtml_branch_coverage=1
00:04:10.945  		--rc genhtml_function_coverage=1
00:04:10.945  		--rc genhtml_legend=1
00:04:10.945  		--rc geninfo_all_blocks=1
00:04:10.945  		--rc geninfo_unexecuted_blocks=1
00:04:10.945  		
00:04:10.945  		'
00:04:10.945   16:48:03	-- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:04:10.945     16:48:03	-- nvmf/common.sh@7 -- # uname -s
00:04:10.945    16:48:03	-- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:04:10.945    16:48:03	-- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:04:10.945    16:48:03	-- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:04:10.945    16:48:03	-- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:04:10.945    16:48:03	-- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:04:10.945    16:48:03	-- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:04:10.945    16:48:03	-- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:04:10.945    16:48:03	-- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:04:10.945    16:48:03	-- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:04:10.945     16:48:03	-- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:04:10.945    16:48:03	-- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:36d5bf56-4e93-4531-b548-f512f9a0b3d0
00:04:10.945    16:48:03	-- nvmf/common.sh@18 -- # NVME_HOSTID=36d5bf56-4e93-4531-b548-f512f9a0b3d0
00:04:10.945    16:48:03	-- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:04:10.945    16:48:03	-- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:04:10.945    16:48:03	-- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback
00:04:10.945    16:48:03	-- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:04:10.945     16:48:03	-- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]]
00:04:10.945     16:48:03	-- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:04:10.945     16:48:03	-- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:04:10.945      16:48:03	-- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:04:10.945      16:48:03	-- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:04:10.945      16:48:03	-- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:04:10.945      16:48:03	-- paths/export.sh@5 -- # export PATH
00:04:10.945      16:48:03	-- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:04:10.945    16:48:03	-- nvmf/common.sh@46 -- # : 0
00:04:10.945    16:48:03	-- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID
00:04:10.945    16:48:03	-- nvmf/common.sh@48 -- # build_nvmf_app_args
00:04:10.945    16:48:03	-- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']'
00:04:10.945    16:48:03	-- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:04:10.945    16:48:03	-- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:04:10.945    16:48:03	-- nvmf/common.sh@32 -- # '[' -n '' ']'
00:04:10.945    16:48:03	-- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']'
00:04:10.945    16:48:03	-- nvmf/common.sh@50 -- # have_pci_nics=0
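nvmf/common.sh pins the test ports (4420-4422) and derives a host NQN plus host ID via nvme-cli. A hedged sketch of that derivation (the uuidgen fallback is an assumption; the trace itself uses nvme gen-hostnqn):

    # Derive a host NQN; reuse its trailing UUID as the host ID.
    if command -v nvme >/dev/null 2>&1; then
        NVME_HOSTNQN=$(nvme gen-hostnqn)
    else
        NVME_HOSTNQN="nqn.2014-08.org.nvmexpress:uuid:$(uuidgen)"
    fi
    NVME_HOSTID=${NVME_HOSTNQN##*:}   # e.g. 36d5bf56-4e93-4531-b548-f512f9a0b3d0
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")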
00:04:10.945   16:48:03	-- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']'
00:04:10.945    16:48:03	-- spdk/autotest.sh@32 -- # uname -s
00:04:10.945   16:48:03	-- spdk/autotest.sh@32 -- # '[' Linux = Linux ']'
00:04:10.945   16:48:03	-- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/share/apport/apport -p%p -s%s -c%c -d%d -P%P -u%u -g%g -- %E'
00:04:10.945   16:48:03	-- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps
00:04:10.945   16:48:03	-- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t'
00:04:10.945   16:48:03	-- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps
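autotest.sh swaps Ubuntu's apport core handler for SPDK's collector by rewriting kernel.core_pattern; %P, %s and %t are the kernel's PID, signal-number and dump-time specifiers. A minimal sketch, assuming root and with $rootdir/$output_dir standing in for the paths from this run:

    old_core_pattern=$(< /proc/sys/kernel/core_pattern)   # apport pipeline, saved for restore
    mkdir -p "$output_dir/coredumps"
    echo "|$rootdir/scripts/core-collector.sh %P %s %t" > /proc/sys/kernel/core_pattern
    trap 'echo "$old_core_pattern" > /proc/sys/kernel/core_pattern' EXIT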
00:04:10.945   16:48:03	-- spdk/autotest.sh@44 -- # modprobe nbd
00:04:10.945    16:48:03	-- spdk/autotest.sh@46 -- # type -P udevadm
00:04:10.945   16:48:03	-- spdk/autotest.sh@46 -- # udevadm=/usr/bin/udevadm
00:04:10.945   16:48:03	-- spdk/autotest.sh@48 -- # udevadm_pid=103991
00:04:10.945   16:48:03	-- spdk/autotest.sh@47 -- # /usr/bin/udevadm monitor --property
00:04:10.945   16:48:03	-- spdk/autotest.sh@51 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/power
00:04:11.204   16:48:03	-- spdk/autotest.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power
00:04:11.204   16:48:03	-- spdk/autotest.sh@54 -- # echo 104008
00:04:11.204   16:48:03	-- spdk/autotest.sh@56 -- # echo 104015
00:04:11.204   16:48:03	-- spdk/autotest.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power
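The PIDs echoed here (103991, 104008, 104015) belong to background monitors started for the run: udevadm event tracing plus the power-management CPU-load and vmstat collectors. A sketch of the launch-and-track pattern; the PID bookkeeping is an assumption, not the script's exact shape:

    mkdir -p "$output_dir/power"
    mon_pids=()
    /usr/bin/udevadm monitor --property & mon_pids+=($!)
    "$rootdir/scripts/perf/pm/collect-cpu-load" -d "$output_dir/power" & mon_pids+=($!)
    "$rootdir/scripts/perf/pm/collect-vmstat"   -d "$output_dir/power" & mon_pids+=($!)
    # teardown: kill "${mon_pids[@]}" 2>/dev/null || true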
00:04:11.204   16:48:03	-- spdk/autotest.sh@58 -- # [[ QEMU != QEMU ]]
00:04:11.204   16:48:03	-- spdk/autotest.sh@66 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT
00:04:11.204   16:48:03	-- spdk/autotest.sh@68 -- # timing_enter autotest
00:04:11.204   16:48:03	-- common/autotest_common.sh@722 -- # xtrace_disable
00:04:11.204   16:48:03	-- common/autotest_common.sh@10 -- # set +x
00:04:11.204   16:48:03	-- spdk/autotest.sh@70 -- # create_test_list
00:04:11.204   16:48:03	-- common/autotest_common.sh@746 -- # xtrace_disable
00:04:11.204   16:48:03	-- common/autotest_common.sh@10 -- # set +x
00:04:11.204     16:48:03	-- spdk/autotest.sh@72 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh
00:04:11.204    16:48:03	-- spdk/autotest.sh@72 -- # readlink -f /home/vagrant/spdk_repo/spdk
00:04:11.204   16:48:03	-- spdk/autotest.sh@72 -- # src=/home/vagrant/spdk_repo/spdk
00:04:11.204   16:48:03	-- spdk/autotest.sh@73 -- # out=/home/vagrant/spdk_repo/spdk/../output
00:04:11.204   16:48:03	-- spdk/autotest.sh@74 -- # cd /home/vagrant/spdk_repo/spdk
00:04:11.204   16:48:03	-- spdk/autotest.sh@76 -- # freebsd_update_contigmem_mod
00:04:11.204    16:48:03	-- common/autotest_common.sh@1450 -- # uname
00:04:11.204   16:48:03	-- common/autotest_common.sh@1450 -- # '[' Linux = FreeBSD ']'
00:04:11.204   16:48:03	-- spdk/autotest.sh@77 -- # freebsd_set_maxsock_buf
00:04:11.204    16:48:03	-- common/autotest_common.sh@1470 -- # uname
00:04:11.204   16:48:03	-- common/autotest_common.sh@1470 -- # [[ Linux = FreeBSD ]]
00:04:11.204   16:48:03	-- spdk/autotest.sh@79 -- # [[ y == y ]]
00:04:11.204   16:48:03	-- spdk/autotest.sh@81 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version
00:04:11.204  lcov: LCOV version 1.15
00:04:11.204   16:48:03	-- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info
00:04:29.287  /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found
00:04:29.287  geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno
00:04:29.287  /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found
00:04:29.287  geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno
00:04:29.287  /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found
00:04:29.287  geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno
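The -i -t Baseline capture records zero execution counts for every instrumented file before any test runs; the geninfo warnings are benign, since those FTL upgrade translation units compiled to no functions. The baseline exists so post-test counters can be merged into a complete report later; a sketch of that flow, with illustrative output file names:

    lcov_opts="--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1"
    lcov $lcov_opts -q -c --no-external -i -d "$src" -t Baseline -o cov_base.info   # this step
    # ... run the test suites ...
    lcov $lcov_opts -q -c --no-external    -d "$src" -t Tests    -o cov_test.info
    lcov -a cov_base.info -a cov_test.info -o cov_total.info    # merge baseline + results
    genhtml cov_total.info -o coverage_html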
00:04:55.828   16:48:48	-- spdk/autotest.sh@87 -- # timing_enter pre_cleanup
00:04:55.828   16:48:48	-- common/autotest_common.sh@722 -- # xtrace_disable
00:04:55.828   16:48:48	-- common/autotest_common.sh@10 -- # set +x
00:04:55.828   16:48:48	-- spdk/autotest.sh@89 -- # rm -f
00:04:55.828   16:48:48	-- spdk/autotest.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:04:56.086  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev
00:04:56.346  0000:00:06.0 (1b36 0010): Already using the nvme driver
00:04:56.346   16:48:49	-- spdk/autotest.sh@94 -- # get_zoned_devs
00:04:56.346   16:48:49	-- common/autotest_common.sh@1664 -- # zoned_devs=()
00:04:56.346   16:48:49	-- common/autotest_common.sh@1664 -- # local -gA zoned_devs
00:04:56.346   16:48:49	-- common/autotest_common.sh@1665 -- # local nvme bdf
00:04:56.346   16:48:49	-- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme*
00:04:56.346   16:48:49	-- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1
00:04:56.346   16:48:49	-- common/autotest_common.sh@1657 -- # local device=nvme0n1
00:04:56.346   16:48:49	-- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:04:56.346   16:48:49	-- common/autotest_common.sh@1660 -- # [[ none != none ]]
00:04:56.346   16:48:49	-- spdk/autotest.sh@96 -- # (( 0 > 0 ))
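get_zoned_devs walks /sys/block/nvme* and flags any namespace whose queue/zoned attribute reads something other than "none"; here the lone namespace is conventional, so the set stays empty and (( 0 > 0 )) falls through. A standalone sketch:

    declare -A zoned_devs=()
    for nvme in /sys/block/nvme*; do
        [[ -e $nvme/queue/zoned ]] || continue
        if [[ $(< "$nvme/queue/zoned") != none ]]; then   # host-aware or host-managed
            zoned_devs[${nvme##*/}]=1
        fi
    done
    echo "zoned namespaces: ${!zoned_devs[*]}"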
00:04:56.346    16:48:49	-- spdk/autotest.sh@108 -- # ls /dev/nvme0n1
00:04:56.346    16:48:49	-- spdk/autotest.sh@108 -- # grep -v p
00:04:56.346   16:48:49	-- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true)
00:04:56.346   16:48:49	-- spdk/autotest.sh@110 -- # [[ -z '' ]]
00:04:56.346   16:48:49	-- spdk/autotest.sh@111 -- # block_in_use /dev/nvme0n1
00:04:56.346   16:48:49	-- scripts/common.sh@380 -- # local block=/dev/nvme0n1 pt
00:04:56.346   16:48:49	-- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1
00:04:56.346  No valid GPT data, bailing
00:04:56.346    16:48:49	-- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:04:56.346   16:48:49	-- scripts/common.sh@393 -- # pt=
00:04:56.346   16:48:49	-- scripts/common.sh@394 -- # return 1
00:04:56.346   16:48:49	-- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1
00:04:56.346  1+0 records in
00:04:56.346  1+0 records out
00:04:56.346  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00645732 s, 162 MB/s
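With no zoned devices to exclude, each namespace is probed for an existing partition table (spdk-gpt.py first, then blkid -s PTTYPE); both come back empty, so the first MiB is zeroed and tests start from clean metadata. The dd numbers check out: 1,048,576 bytes in 0.00645732 s is the reported 162 MB/s. A sketch of the probe-then-wipe step:

    dev=/dev/nvme0n1
    pt=$(blkid -s PTTYPE -o value "$dev" || true)   # empty when no partition table found
    if [[ -z $pt ]]; then
        dd if=/dev/zero of="$dev" bs=1M count=1     # clobber stale GPT/MBR remnants
        sync
    fi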
00:04:56.346   16:48:49	-- spdk/autotest.sh@116 -- # sync
00:04:56.346   16:48:49	-- spdk/autotest.sh@118 -- # xtrace_disable_per_cmd reap_spdk_processes
00:04:56.346   16:48:49	-- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null'
00:04:56.346    16:48:49	-- common/autotest_common.sh@22 -- # reap_spdk_processes
00:04:58.250    16:48:50	-- spdk/autotest.sh@122 -- # uname -s
00:04:58.250   16:48:50	-- spdk/autotest.sh@122 -- # '[' Linux = Linux ']'
00:04:58.250   16:48:50	-- spdk/autotest.sh@123 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh
00:04:58.250   16:48:50	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:04:58.250   16:48:50	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:04:58.250   16:48:50	-- common/autotest_common.sh@10 -- # set +x
00:04:58.250  ************************************
00:04:58.250  START TEST setup.sh
00:04:58.250  ************************************
00:04:58.250   16:48:50	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh
00:04:58.250  * Looking for test storage...
00:04:58.250  * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup

00:04:58.251    16:48:50	-- setup/test-setup.sh@10 -- # uname -s
00:04:58.251   16:48:50	-- setup/test-setup.sh@10 -- # [[ Linux == Linux ]]
00:04:58.251   16:48:50	-- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh
00:04:58.251   16:48:50	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:04:58.251   16:48:50	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:04:58.251   16:48:50	-- common/autotest_common.sh@10 -- # set +x
00:04:58.251  ************************************
00:04:58.251  START TEST acl
00:04:58.251  ************************************
00:04:58.251   16:48:50	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh
00:04:58.251  * Looking for test storage...
00:04:58.251  * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup
00:04:58.511   16:48:51	-- setup/acl.sh@10 -- # get_zoned_devs
00:04:58.511   16:48:51	-- common/autotest_common.sh@1664 -- # zoned_devs=()
00:04:58.511   16:48:51	-- common/autotest_common.sh@1664 -- # local -gA zoned_devs
00:04:58.511   16:48:51	-- common/autotest_common.sh@1665 -- # local nvme bdf
00:04:58.511   16:48:51	-- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme*
00:04:58.511   16:48:51	-- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1
00:04:58.511   16:48:51	-- common/autotest_common.sh@1657 -- # local device=nvme0n1
00:04:58.511   16:48:51	-- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:04:58.511   16:48:51	-- common/autotest_common.sh@1660 -- # [[ none != none ]]
00:04:58.511   16:48:51	-- setup/acl.sh@12 -- # devs=()
00:04:58.511   16:48:51	-- setup/acl.sh@12 -- # declare -a devs
00:04:58.511   16:48:51	-- setup/acl.sh@13 -- # drivers=()
00:04:58.511   16:48:51	-- setup/acl.sh@13 -- # declare -A drivers
00:04:58.511   16:48:51	-- setup/acl.sh@51 -- # setup reset
00:04:58.511   16:48:51	-- setup/common.sh@9 -- # [[ reset == output ]]
00:04:58.511   16:48:51	-- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:04:59.078   16:48:51	-- setup/acl.sh@52 -- # collect_setup_devs
00:04:59.078   16:48:51	-- setup/acl.sh@16 -- # local dev driver
00:04:59.078   16:48:51	-- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:04:59.078    16:48:51	-- setup/acl.sh@15 -- # setup output status
00:04:59.078    16:48:51	-- setup/common.sh@9 -- # [[ output == output ]]
00:04:59.078    16:48:51	-- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:04:59.078  Hugepages
00:04:59.078  node     hugesize     free /  total
00:04:59.078   16:48:51	-- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]]
00:04:59.078   16:48:51	-- setup/acl.sh@19 -- # continue
00:04:59.078   16:48:51	-- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:04:59.078  
00:04:59.078  Type                      BDF             Vendor Device NUMA    Driver           Device     Block devices
00:04:59.078   16:48:51	-- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]]
00:04:59.078   16:48:51	-- setup/acl.sh@19 -- # continue
00:04:59.078   16:48:51	-- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:04:59.336   16:48:51	-- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]]
00:04:59.336   16:48:51	-- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]]
00:04:59.336   16:48:51	-- setup/acl.sh@20 -- # continue
00:04:59.336   16:48:51	-- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:04:59.336   16:48:52	-- setup/acl.sh@19 -- # [[ 0000:00:06.0 == *:*:*.* ]]
00:04:59.336   16:48:52	-- setup/acl.sh@20 -- # [[ nvme == nvme ]]
00:04:59.336   16:48:52	-- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]]
00:04:59.336   16:48:52	-- setup/acl.sh@22 -- # devs+=("$dev")
00:04:59.336   16:48:52	-- setup/acl.sh@22 -- # drivers["$dev"]=nvme
00:04:59.336   16:48:52	-- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:04:59.336   16:48:52	-- setup/acl.sh@24 -- # (( 1 > 0 ))
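The status pass above prints the per-size hugepage pools and then filters PCI block controllers: the virtio disk at 00:03.0 is skipped, the NVMe controller at 00:06.0 is kept, hence (( 1 > 0 )). The hugepage columns come straight from sysfs; a sketch of reading them directly:

    for dir in /sys/kernel/mm/hugepages/hugepages-*; do
        size=${dir##*hugepages-}
        printf '%-10s free=%s total=%s\n' "$size" \
            "$(< "$dir/free_hugepages")" "$(< "$dir/nr_hugepages")"
    done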
00:04:59.336   16:48:52	-- setup/acl.sh@54 -- # run_test denied denied
00:04:59.336   16:48:52	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:04:59.337   16:48:52	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:04:59.337   16:48:52	-- common/autotest_common.sh@10 -- # set +x
00:04:59.337  ************************************
00:04:59.337  START TEST denied
00:04:59.337  ************************************
00:04:59.337   16:48:52	-- common/autotest_common.sh@1114 -- # denied
00:04:59.337   16:48:52	-- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:06.0'
00:04:59.337   16:48:52	-- setup/acl.sh@38 -- # setup output config
00:04:59.337   16:48:52	-- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:06.0'
00:04:59.337   16:48:52	-- setup/common.sh@9 -- # [[ output == output ]]
00:04:59.337   16:48:52	-- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config
00:05:00.711  0000:00:06.0 (1b36 0010): Skipping denied controller at 0000:00:06.0
00:05:00.711   16:48:53	-- setup/acl.sh@40 -- # verify 0000:00:06.0
00:05:00.711   16:48:53	-- setup/acl.sh@28 -- # local dev driver
00:05:00.711   16:48:53	-- setup/acl.sh@30 -- # for dev in "$@"
00:05:00.711   16:48:53	-- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:06.0 ]]
00:05:00.711    16:48:53	-- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:06.0/driver
00:05:00.711   16:48:53	-- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme
00:05:00.711   16:48:53	-- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]]
00:05:00.711   16:48:53	-- setup/acl.sh@41 -- # setup reset
00:05:00.711   16:48:53	-- setup/common.sh@9 -- # [[ reset == output ]]
00:05:00.711   16:48:53	-- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:05:01.278  
00:05:01.278  real	0m1.951s
00:05:01.278  user	0m0.492s
00:05:01.278  sys	0m1.522s
00:05:01.278   16:48:54	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:05:01.278  ************************************
00:05:01.278  END TEST denied
00:05:01.278  ************************************
00:05:01.278   16:48:54	-- common/autotest_common.sh@10 -- # set +x
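The denied test above exports PCI_BLOCKED=' 0000:00:06.0' and asserts that setup.sh config reports the controller as skipped. A simplified sketch of the allow/deny gate; setup.sh's real logic is richer, this shows only the membership test:

    pci_can_use() {   # usage: pci_can_use <bdf>
        local bdf=$1
        [[ " ${PCI_BLOCKED:-} " == *" $bdf "* ]] && return 1
        [[ -z ${PCI_ALLOWED:-} ]] && return 0            # empty allowlist admits all
        [[ " $PCI_ALLOWED " == *" $bdf "* ]]
    }
    PCI_BLOCKED=' 0000:00:06.0'
    pci_can_use 0000:00:06.0 || echo 'Skipping denied controller at 0000:00:06.0'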
00:05:01.278   16:48:54	-- setup/acl.sh@55 -- # run_test allowed allowed
00:05:01.278   16:48:54	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:05:01.278   16:48:54	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:05:01.278   16:48:54	-- common/autotest_common.sh@10 -- # set +x
00:05:01.536  ************************************
00:05:01.536  START TEST allowed
00:05:01.536  ************************************
00:05:01.536   16:48:54	-- common/autotest_common.sh@1114 -- # allowed
00:05:01.536   16:48:54	-- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:06.0
00:05:01.536   16:48:54	-- setup/acl.sh@46 -- # grep -E '0000:00:06.0 .*: nvme -> .*'
00:05:01.536   16:48:54	-- setup/acl.sh@45 -- # setup output config
00:05:01.536   16:48:54	-- setup/common.sh@9 -- # [[ output == output ]]
00:05:01.536   16:48:54	-- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config
00:05:04.067  0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic
00:05:04.068   16:48:56	-- setup/acl.sh@47 -- # verify
00:05:04.068   16:48:56	-- setup/acl.sh@28 -- # local dev driver
00:05:04.068   16:48:56	-- setup/acl.sh@48 -- # setup reset
00:05:04.068   16:48:56	-- setup/common.sh@9 -- # [[ reset == output ]]
00:05:04.068   16:48:56	-- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:05:04.326  ************************************
00:05:04.326  END TEST allowed
00:05:04.326  ************************************
00:05:04.326  
00:05:04.326  real	0m2.948s
00:05:04.326  user	0m0.476s
00:05:04.326  sys	0m2.474s
00:05:04.326   16:48:57	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:05:04.326   16:48:57	-- common/autotest_common.sh@10 -- # set +x
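In the allowed case the controller is handed to userspace: setup.sh config moves 0000:00:06.0 from the kernel nvme driver to uio_pci_generic, as logged above. The conventional sysfs rebind, sketched:

    bdf=0000:00:06.0
    modprobe uio_pci_generic
    echo "$bdf" > "/sys/bus/pci/devices/$bdf/driver/unbind"        # detach nvme
    echo uio_pci_generic > "/sys/bus/pci/devices/$bdf/driver_override"
    echo "$bdf" > /sys/bus/pci/drivers_probe                       # attach via override
    echo > "/sys/bus/pci/devices/$bdf/driver_override"             # clear for later resets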
00:05:04.326  
00:05:04.326  real	0m6.191s
00:05:04.326  user	0m1.626s
00:05:04.326  sys	0m4.717s
00:05:04.326   16:48:57	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:05:04.326   16:48:57	-- common/autotest_common.sh@10 -- # set +x
00:05:04.326  ************************************
00:05:04.326  END TEST acl
00:05:04.326  ************************************
00:05:04.586   16:48:57	-- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh
00:05:04.586   16:48:57	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:05:04.586   16:48:57	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:05:04.586   16:48:57	-- common/autotest_common.sh@10 -- # set +x
00:05:04.586  ************************************
00:05:04.586  START TEST hugepages
00:05:04.586  ************************************
00:05:04.586   16:48:57	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh
00:05:04.586  * Looking for test storage...
00:05:04.586  * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup
00:05:04.587   16:48:57	-- setup/hugepages.sh@10 -- # nodes_sys=()
00:05:04.587   16:48:57	-- setup/hugepages.sh@10 -- # declare -a nodes_sys
00:05:04.587   16:48:57	-- setup/hugepages.sh@12 -- # declare -i default_hugepages=0
00:05:04.587   16:48:57	-- setup/hugepages.sh@13 -- # declare -i no_nodes=0
00:05:04.587   16:48:57	-- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0
00:05:04.587    16:48:57	-- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize
00:05:04.587    16:48:57	-- setup/common.sh@17 -- # local get=Hugepagesize
00:05:04.587    16:48:57	-- setup/common.sh@18 -- # local node=
00:05:04.587    16:48:57	-- setup/common.sh@19 -- # local var val
00:05:04.587    16:48:57	-- setup/common.sh@20 -- # local mem_f mem
00:05:04.587    16:48:57	-- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:04.587    16:48:57	-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:04.587    16:48:57	-- setup/common.sh@25 -- # [[ -n '' ]]
00:05:04.587    16:48:57	-- setup/common.sh@28 -- # mapfile -t mem
00:05:04.587    16:48:57	-- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:04.587    16:48:57	-- setup/common.sh@31 -- # IFS=': '
00:05:04.587    16:48:57	-- setup/common.sh@31 -- # read -r var val _
00:05:04.587     16:48:57	-- setup/common.sh@16 -- # printf '%s\n' 'MemTotal:       12242972 kB' 'MemFree:         2112772 kB' 'MemAvailable:    7388596 kB' 'Buffers:           39848 kB' 'Cached:          5335104 kB' 'SwapCached:            0 kB' 'Active:          1375336 kB' 'Inactive:        4128944 kB' 'Active(anon):       1052 kB' 'Inactive(anon):   140200 kB' 'Active(file):    1374284 kB' 'Inactive(file):  3988744 kB' 'Unevictable:       29168 kB' 'Mlocked:           27632 kB' 'SwapTotal:             0 kB' 'SwapFree:              0 kB' 'Dirty:               408 kB' 'Writeback:             0 kB' 'AnonPages:        158624 kB' 'Mapped:            68680 kB' 'Shmem:              2600 kB' 'KReclaimable:     234100 kB' 'Slab:             302224 kB' 'SReclaimable:     234100 kB' 'SUnreclaim:        68124 kB' 'KernelStack:        4540 kB' 'PageTables:         3728 kB' 'NFS_Unstable:          0 kB' 'Bounce:                0 kB' 'WritebackTmp:          0 kB' 'CommitLimit:     4024332 kB' 'Committed_AS:     505056 kB' 'VmallocTotal:   34359738367 kB' 'VmallocUsed:       19676 kB' 'VmallocChunk:          0 kB' 'Percpu:             8256 kB' 'HardwareCorrupted:     0 kB' 'AnonHugePages:         0 kB' 'ShmemHugePages:        0 kB' 'ShmemPmdMapped:        0 kB' 'FileHugePages:         0 kB' 'FilePmdMapped:         0 kB' 'HugePages_Total:    2048' 'HugePages_Free:     2048' 'HugePages_Rsvd:        0' 'HugePages_Surp:        0' 'Hugepagesize:       2048 kB' 'Hugetlb:         4194304 kB' 'DirectMap4k:      141164 kB' 'DirectMap2M:     4052992 kB' 'DirectMap1G:    10485760 kB'
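get_meminfo snapshots /proc/meminfo into an array (mapfile above) and then, in the long scan that follows, walks it field by field until it reaches the requested key; on this runner Hugepagesize resolves to 2048 kB. The same lookup short-circuited with awk, as an illustrative alternative rather than the script's own method:

    get_meminfo() {   # print the numeric value of one /proc/meminfo field
        awk -F': *' -v k="$1" '$1 == k { print $2 + 0; exit }' /proc/meminfo
    }
    get_meminfo Hugepagesize   # -> 2048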
00:05:04.587    16:48:57	-- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:05:04.587    16:48:57	-- setup/common.sh@32 -- # continue
00:05:04.587    16:48:57	-- setup/common.sh@31 -- # IFS=': '
00:05:04.587    16:48:57	-- setup/common.sh@31 -- # read -r var val _
00:05:04.587    16:48:57	-- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:05:04.587    16:48:57	-- setup/common.sh@32 -- # continue
00:05:04.587    16:48:57	-- setup/common.sh@31 -- # IFS=': '
00:05:04.587    16:48:57	-- setup/common.sh@31 -- # read -r var val _
00:05:04.587    16:48:57	-- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:05:04.587    16:48:57	-- setup/common.sh@32 -- # continue
00:05:04.587    16:48:57	-- setup/common.sh@31 -- # IFS=': '
00:05:04.587    16:48:57	-- setup/common.sh@31 -- # read -r var val _
00:05:04.587    16:48:57	-- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:05:04.587    16:48:57	-- setup/common.sh@32 -- # continue
00:05:04.587    16:48:57	-- setup/common.sh@31 -- # IFS=': '
00:05:04.587    16:48:57	-- setup/common.sh@31 -- # read -r var val _
00:05:04.587    16:48:57	-- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:05:04.587    16:48:57	-- setup/common.sh@32 -- # continue
00:05:04.587    16:48:57	-- setup/common.sh@31 -- # IFS=': '
00:05:04.587    16:48:57	-- setup/common.sh@31 -- # read -r var val _
00:05:04.587    16:48:57	-- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:05:04.587    16:48:57	-- setup/common.sh@32 -- # continue
00:05:04.587    16:48:57	-- setup/common.sh@31 -- # IFS=': '
00:05:04.587    16:48:57	-- setup/common.sh@31 -- # read -r var val _
00:05:04.587    16:48:57	-- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:05:04.587    16:48:57	-- setup/common.sh@32 -- # continue
00:05:04.587    16:48:57	-- setup/common.sh@31 -- # IFS=': '
00:05:04.587    16:48:57	-- setup/common.sh@31 -- # read -r var val _
00:05:04.587    16:48:57	-- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:05:04.587    16:48:57	-- setup/common.sh@32 -- # continue
00:05:04.587    16:48:57	-- setup/common.sh@31 -- # IFS=': '
00:05:04.587    16:48:57	-- setup/common.sh@31 -- # read -r var val _
00:05:04.587    16:48:57	-- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:05:04.587    16:48:57	-- setup/common.sh@32 -- # continue
00:05:04.587    16:48:57	-- setup/common.sh@31 -- # IFS=': '
00:05:04.587    16:48:57	-- setup/common.sh@31 -- # read -r var val _
00:05:04.587    16:48:57	-- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:05:04.587    16:48:57	-- setup/common.sh@32 -- # continue
00:05:04.587    16:48:57	-- setup/common.sh@31 -- # IFS=': '
00:05:04.587    16:48:57	-- setup/common.sh@31 -- # read -r var val _
00:05:04.587    16:48:57	-- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:05:04.587    16:48:57	-- setup/common.sh@32 -- # continue
00:05:04.587    16:48:57	-- setup/common.sh@31 -- # IFS=': '
00:05:04.587    16:48:57	-- setup/common.sh@31 -- # read -r var val _
00:05:04.587    16:48:57	-- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:05:04.587    16:48:57	-- setup/common.sh@32 -- # continue
00:05:04.587    16:48:57	-- setup/common.sh@31 -- # IFS=': '
00:05:04.587    16:48:57	-- setup/common.sh@31 -- # read -r var val _
00:05:04.587    16:48:57	-- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:05:04.587    16:48:57	-- setup/common.sh@32 -- # continue
00:05:04.587    16:48:57	-- setup/common.sh@31 -- # IFS=': '
00:05:04.587    16:48:57	-- setup/common.sh@31 -- # read -r var val _
00:05:04.587    16:48:57	-- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:05:04.587    16:48:57	-- setup/common.sh@32 -- # continue
00:05:04.587    16:48:57	-- setup/common.sh@31 -- # IFS=': '
00:05:04.587    16:48:57	-- setup/common.sh@31 -- # read -r var val _
00:05:04.587    16:48:57	-- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:05:04.587    16:48:57	-- setup/common.sh@32 -- # continue
00:05:04.587    16:48:57	-- setup/common.sh@31 -- # IFS=': '
00:05:04.587    16:48:57	-- setup/common.sh@31 -- # read -r var val _
00:05:04.587    16:48:57	-- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:05:04.587    16:48:57	-- setup/common.sh@32 -- # continue
00:05:04.587    16:48:57	-- setup/common.sh@31 -- # IFS=': '
00:05:04.587    16:48:57	-- setup/common.sh@31 -- # read -r var val _
00:05:04.587    16:48:57	-- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:05:04.587    16:48:57	-- setup/common.sh@32 -- # continue
00:05:04.587    16:48:57	-- setup/common.sh@31 -- # IFS=': '
00:05:04.587    16:48:57	-- setup/common.sh@31 -- # read -r var val _
00:05:04.587    16:48:57	-- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:05:04.587    16:48:57	-- setup/common.sh@32 -- # continue
00:05:04.587    16:48:57	-- setup/common.sh@31 -- # IFS=': '
00:05:04.587    16:48:57	-- setup/common.sh@31 -- # read -r var val _
00:05:04.587    16:48:57	-- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:05:04.587    16:48:57	-- setup/common.sh@32 -- # continue
00:05:04.587    16:48:57	-- setup/common.sh@31 -- # IFS=': '
00:05:04.587    16:48:57	-- setup/common.sh@31 -- # read -r var val _
00:05:04.587    16:48:57	-- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:05:04.587    16:48:57	-- setup/common.sh@32 -- # continue
00:05:04.587    16:48:57	-- setup/common.sh@31 -- # IFS=': '
00:05:04.587    16:48:57	-- setup/common.sh@31 -- # read -r var val _
00:05:04.587    16:48:57	-- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:05:04.587    16:48:57	-- setup/common.sh@32 -- # continue
00:05:04.587    16:48:57	-- setup/common.sh@31 -- # IFS=': '
00:05:04.587    16:48:57	-- setup/common.sh@31 -- # read -r var val _
00:05:04.587    16:48:57	-- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:05:04.587    16:48:57	-- setup/common.sh@32 -- # continue
00:05:04.587    16:48:57	-- setup/common.sh@31 -- # IFS=': '
00:05:04.587    16:48:57	-- setup/common.sh@31 -- # read -r var val _
00:05:04.587    16:48:57	-- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:05:04.587    16:48:57	-- setup/common.sh@32 -- # continue
00:05:04.587    16:48:57	-- setup/common.sh@31 -- # IFS=': '
00:05:04.587    16:48:57	-- setup/common.sh@31 -- # read -r var val _
00:05:04.587    16:48:57	-- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:05:04.587    16:48:57	-- setup/common.sh@32 -- # continue
00:05:04.587    16:48:57	-- setup/common.sh@31 -- # IFS=': '
00:05:04.587    16:48:57	-- setup/common.sh@31 -- # read -r var val _
00:05:04.587    16:48:57	-- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:05:04.587    16:48:57	-- setup/common.sh@32 -- # continue
00:05:04.587    16:48:57	-- setup/common.sh@31 -- # IFS=': '
00:05:04.588    16:48:57	-- setup/common.sh@31 -- # read -r var val _
00:05:04.588    16:48:57	-- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:05:04.588    16:48:57	-- setup/common.sh@32 -- # continue
00:05:04.588    16:48:57	-- setup/common.sh@31 -- # IFS=': '
00:05:04.588    16:48:57	-- setup/common.sh@31 -- # read -r var val _
00:05:04.588    16:48:57	-- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:05:04.588    16:48:57	-- setup/common.sh@32 -- # continue
00:05:04.588    16:48:57	-- setup/common.sh@31 -- # IFS=': '
00:05:04.588    16:48:57	-- setup/common.sh@31 -- # read -r var val _
00:05:04.588    16:48:57	-- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:05:04.588    16:48:57	-- setup/common.sh@32 -- # continue
00:05:04.588    16:48:57	-- setup/common.sh@31 -- # IFS=': '
00:05:04.588    16:48:57	-- setup/common.sh@31 -- # read -r var val _
00:05:04.588    16:48:57	-- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:05:04.588    16:48:57	-- setup/common.sh@32 -- # continue
00:05:04.588    16:48:57	-- setup/common.sh@31 -- # IFS=': '
00:05:04.588    16:48:57	-- setup/common.sh@31 -- # read -r var val _
00:05:04.588    16:48:57	-- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:05:04.588    16:48:57	-- setup/common.sh@32 -- # continue
00:05:04.588    16:48:57	-- setup/common.sh@31 -- # IFS=': '
00:05:04.588    16:48:57	-- setup/common.sh@31 -- # read -r var val _
00:05:04.588    16:48:57	-- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:05:04.588    16:48:57	-- setup/common.sh@32 -- # continue
00:05:04.588    16:48:57	-- setup/common.sh@31 -- # IFS=': '
00:05:04.588    16:48:57	-- setup/common.sh@31 -- # read -r var val _
00:05:04.588    16:48:57	-- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:05:04.588    16:48:57	-- setup/common.sh@32 -- # continue
00:05:04.588    16:48:57	-- setup/common.sh@31 -- # IFS=': '
00:05:04.588    16:48:57	-- setup/common.sh@31 -- # read -r var val _
00:05:04.588    16:48:57	-- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:05:04.588    16:48:57	-- setup/common.sh@32 -- # continue
00:05:04.588    16:48:57	-- setup/common.sh@31 -- # IFS=': '
00:05:04.588    16:48:57	-- setup/common.sh@31 -- # read -r var val _
00:05:04.588    16:48:57	-- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:05:04.588    16:48:57	-- setup/common.sh@32 -- # continue
00:05:04.588    16:48:57	-- setup/common.sh@31 -- # IFS=': '
00:05:04.588    16:48:57	-- setup/common.sh@31 -- # read -r var val _
00:05:04.588    16:48:57	-- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:05:04.588    16:48:57	-- setup/common.sh@32 -- # continue
00:05:04.588    16:48:57	-- setup/common.sh@31 -- # IFS=': '
00:05:04.588    16:48:57	-- setup/common.sh@31 -- # read -r var val _
00:05:04.588    16:48:57	-- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:05:04.588    16:48:57	-- setup/common.sh@32 -- # continue
00:05:04.588    16:48:57	-- setup/common.sh@31 -- # IFS=': '
00:05:04.588    16:48:57	-- setup/common.sh@31 -- # read -r var val _
00:05:04.588    16:48:57	-- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:05:04.588    16:48:57	-- setup/common.sh@32 -- # continue
00:05:04.588    16:48:57	-- setup/common.sh@31 -- # IFS=': '
00:05:04.588    16:48:57	-- setup/common.sh@31 -- # read -r var val _
00:05:04.588    16:48:57	-- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:05:04.588    16:48:57	-- setup/common.sh@32 -- # continue
00:05:04.588    16:48:57	-- setup/common.sh@31 -- # IFS=': '
00:05:04.588    16:48:57	-- setup/common.sh@31 -- # read -r var val _
00:05:04.588    16:48:57	-- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:05:04.588    16:48:57	-- setup/common.sh@32 -- # continue
00:05:04.588    16:48:57	-- setup/common.sh@31 -- # IFS=': '
00:05:04.588    16:48:57	-- setup/common.sh@31 -- # read -r var val _
00:05:04.588    16:48:57	-- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:05:04.588    16:48:57	-- setup/common.sh@32 -- # continue
00:05:04.588    16:48:57	-- setup/common.sh@31 -- # IFS=': '
00:05:04.588    16:48:57	-- setup/common.sh@31 -- # read -r var val _
00:05:04.588    16:48:57	-- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:05:04.588    16:48:57	-- setup/common.sh@32 -- # continue
00:05:04.588    16:48:57	-- setup/common.sh@31 -- # IFS=': '
00:05:04.588    16:48:57	-- setup/common.sh@31 -- # read -r var val _
00:05:04.588    16:48:57	-- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:05:04.588    16:48:57	-- setup/common.sh@32 -- # continue
00:05:04.588    16:48:57	-- setup/common.sh@31 -- # IFS=': '
00:05:04.588    16:48:57	-- setup/common.sh@31 -- # read -r var val _
00:05:04.588    16:48:57	-- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:05:04.588    16:48:57	-- setup/common.sh@32 -- # continue
00:05:04.588    16:48:57	-- setup/common.sh@31 -- # IFS=': '
00:05:04.588    16:48:57	-- setup/common.sh@31 -- # read -r var val _
00:05:04.588    16:48:57	-- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:05:04.588    16:48:57	-- setup/common.sh@32 -- # continue
00:05:04.588    16:48:57	-- setup/common.sh@31 -- # IFS=': '
00:05:04.588    16:48:57	-- setup/common.sh@31 -- # read -r var val _
00:05:04.588    16:48:57	-- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:05:04.588    16:48:57	-- setup/common.sh@32 -- # continue
00:05:04.588    16:48:57	-- setup/common.sh@31 -- # IFS=': '
00:05:04.588    16:48:57	-- setup/common.sh@31 -- # read -r var val _
00:05:04.588    16:48:57	-- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:05:04.588    16:48:57	-- setup/common.sh@32 -- # continue
00:05:04.588    16:48:57	-- setup/common.sh@31 -- # IFS=': '
00:05:04.588    16:48:57	-- setup/common.sh@31 -- # read -r var val _
00:05:04.588    16:48:57	-- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:05:04.588    16:48:57	-- setup/common.sh@33 -- # echo 2048
00:05:04.588    16:48:57	-- setup/common.sh@33 -- # return 0
00:05:04.588   16:48:57	-- setup/hugepages.sh@16 -- # default_hugepages=2048
00:05:04.588   16:48:57	-- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
00:05:04.588   16:48:57	-- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages
00:05:04.588   16:48:57	-- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC
00:05:04.588   16:48:57	-- setup/hugepages.sh@22 -- # unset -v HUGEMEM
00:05:04.588   16:48:57	-- setup/hugepages.sh@23 -- # unset -v HUGENODE
00:05:04.588   16:48:57	-- setup/hugepages.sh@24 -- # unset -v NRHUGE
00:05:04.588   16:48:57	-- setup/hugepages.sh@207 -- # get_nodes
00:05:04.588   16:48:57	-- setup/hugepages.sh@27 -- # local node
00:05:04.588   16:48:57	-- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:04.847   16:48:57	-- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048
00:05:04.847   16:48:57	-- setup/hugepages.sh@32 -- # no_nodes=1
00:05:04.847   16:48:57	-- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:05:04.847   16:48:57	-- setup/hugepages.sh@208 -- # clear_hp
00:05:04.847   16:48:57	-- setup/hugepages.sh@37 -- # local node hp
00:05:04.847   16:48:57	-- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:05:04.847   16:48:57	-- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:05:04.847   16:48:57	-- setup/hugepages.sh@41 -- # echo 0
00:05:04.847   16:48:57	-- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:05:04.847   16:48:57	-- setup/hugepages.sh@41 -- # echo 0
00:05:04.847   16:48:57	-- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes
00:05:04.847   16:48:57	-- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes
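Before the test starts, the script unsets HUGE_EVEN_ALLOC, HUGEMEM, HUGENODE and NRHUGE so setup.sh later runs with its defaults, then get_nodes enumerates NUMA nodes under /sys/devices/system/node/ and clear_hp zeroes every per-node nr_hugepages file (the two "echo 0" lines above, one per supported page size on node 0) before exporting CLEAR_HUGE=yes. A sketch of that cleanup, reconstructed from hugepages.sh@27-45 in the trace; the value stored in nodes_sys appears only in expanded form (2048) in the trace, so reading it from sysfs here is an assumption. Writing these files requires root:

    #!/usr/bin/env bash
    shopt -s extglob nullglob
    declare -A nodes_sys
    for node in /sys/devices/system/node/node+([0-9]); do
        # Trace shows only the expanded value 2048; the exact source is an assumption.
        nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
    done

    clear_hp() {
        local node hp
        for node in "${!nodes_sys[@]}"; do
            for hp in /sys/devices/system/node/node"$node"/hugepages/hugepages-*; do
                echo 0 > "$hp/nr_hugepages"   # the "echo 0" lines in the trace
            done
        done
        export CLEAR_HUGE=yes
    }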
00:05:04.847   16:48:57	-- setup/hugepages.sh@210 -- # run_test default_setup default_setup
00:05:04.847   16:48:57	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:05:04.847   16:48:57	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:05:04.847   16:48:57	-- common/autotest_common.sh@10 -- # set +x
00:05:04.847  ************************************
00:05:04.847  START TEST default_setup
00:05:04.847  ************************************
00:05:04.847   16:48:57	-- common/autotest_common.sh@1114 -- # default_setup
00:05:04.847   16:48:57	-- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0
00:05:04.847   16:48:57	-- setup/hugepages.sh@49 -- # local size=2097152
00:05:04.847   16:48:57	-- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:05:04.847   16:48:57	-- setup/hugepages.sh@51 -- # shift
00:05:04.847   16:48:57	-- setup/hugepages.sh@52 -- # node_ids=('0')
00:05:04.847   16:48:57	-- setup/hugepages.sh@52 -- # local node_ids
00:05:04.847   16:48:57	-- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:05:04.847   16:48:57	-- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:05:04.847   16:48:57	-- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:05:04.847   16:48:57	-- setup/hugepages.sh@62 -- # user_nodes=('0')
00:05:04.847   16:48:57	-- setup/hugepages.sh@62 -- # local user_nodes
00:05:04.847   16:48:57	-- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:05:04.847   16:48:57	-- setup/hugepages.sh@65 -- # local _no_nodes=1
00:05:04.847   16:48:57	-- setup/hugepages.sh@67 -- # nodes_test=()
00:05:04.847   16:48:57	-- setup/hugepages.sh@67 -- # local -g nodes_test
00:05:04.847   16:48:57	-- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:05:04.847   16:48:57	-- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:05:04.847   16:48:57	-- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:05:04.847   16:48:57	-- setup/hugepages.sh@73 -- # return 0
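get_test_nr_hugepages 2097152 0 converts a requested size into a page count: with default_hugepages=2048 (kB, from the Hugepagesize probe above), 2097152 / 2048 = 1024 pages, all assigned to node 0, which matches nr_hugepages=1024 at hugepages.sh@57 and nodes_test[0]=1024 at @71. A minimal sketch of that arithmetic, consistent with the traced values (variable names loosely follow the trace):

    #!/usr/bin/env bash
    default_hugepages=2048          # kB, probed from /proc/meminfo above
    size=2097152                    # kB requested (2 GiB)
    node_ids=(0)                    # node list passed after the size

    (( size >= default_hugepages )) || { echo "size too small" >&2; exit 1; }
    nr_hugepages=$(( size / default_hugepages ))   # 2097152 / 2048 = 1024

    declare -A nodes_test
    for node in "${node_ids[@]}"; do
        nodes_test[$node]=$nr_hugepages
    done
    echo "nr_hugepages=$nr_hugepages on node(s): ${!nodes_test[*]}"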
00:05:04.847   16:48:57	-- setup/hugepages.sh@137 -- # setup output
00:05:04.847   16:48:57	-- setup/common.sh@9 -- # [[ output == output ]]
00:05:04.847   16:48:57	-- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:05:05.105  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev
00:05:05.364  0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic
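The two setup.sh lines above show the virtio-blk disk at 0000:00:03.0 being skipped because it backs mounted filesystems (mount@vda:vda1, mount@vda:vda15), while the emulated NVMe controller at 0000:00:06.0 is unbound from the kernel nvme driver and handed to uio_pci_generic for userspace access. Run by hand, the same step looks roughly like this; CLEAR_HUGE=yes comes from clear_hp above and NRHUGE/HUGEMEM/HUGENODE were deliberately unset, and exact behaviour depends on the SPDK revision:

    # Hugepage + device setup as the test drives it (run as root).
    cd /home/vagrant/spdk_repo/spdk
    sudo CLEAR_HUGE=yes ./scripts/setup.sh      # bind devices, allocate hugepages
    ./scripts/setup.sh status                   # inspect current bindings
    sudo ./scripts/setup.sh reset               # return devices to kernel drivers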
00:05:06.301   16:48:58	-- setup/hugepages.sh@138 -- # verify_nr_hugepages
00:05:06.301   16:48:58	-- setup/hugepages.sh@89 -- # local node
00:05:06.301   16:48:58	-- setup/hugepages.sh@90 -- # local sorted_t
00:05:06.301   16:48:58	-- setup/hugepages.sh@91 -- # local sorted_s
00:05:06.301   16:48:58	-- setup/hugepages.sh@92 -- # local surp
00:05:06.301   16:48:58	-- setup/hugepages.sh@93 -- # local resv
00:05:06.301   16:48:58	-- setup/hugepages.sh@94 -- # local anon
00:05:06.301   16:48:58	-- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:05:06.301    16:48:58	-- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:06.301    16:48:58	-- setup/common.sh@17 -- # local get=AnonHugePages
00:05:06.301    16:48:58	-- setup/common.sh@18 -- # local node=
00:05:06.301    16:48:58	-- setup/common.sh@19 -- # local var val
00:05:06.301    16:48:58	-- setup/common.sh@20 -- # local mem_f mem
00:05:06.301    16:48:58	-- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:06.301    16:48:58	-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:06.301    16:48:58	-- setup/common.sh@25 -- # [[ -n '' ]]
00:05:06.301    16:48:58	-- setup/common.sh@28 -- # mapfile -t mem
00:05:06.301    16:48:58	-- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:06.301    16:48:58	-- setup/common.sh@31 -- # IFS=': '
00:05:06.301    16:48:59	-- setup/common.sh@31 -- # read -r var val _
00:05:06.301     16:48:59	-- setup/common.sh@16 -- # printf '%s\n' 'MemTotal:       12242972 kB' 'MemFree:         4212888 kB' 'MemAvailable:    9488928 kB' 'Buffers:           39848 kB' 'Cached:          5335116 kB' 'SwapCached:            0 kB' 'Active:          1375448 kB' 'Inactive:        4130568 kB' 'Active(anon):       1060 kB' 'Inactive(anon):   141652 kB' 'Active(file):    1374388 kB' 'Inactive(file):  3988916 kB' 'Unevictable:       29168 kB' 'Mlocked:           27632 kB' 'SwapTotal:             0 kB' 'SwapFree:              0 kB' 'Dirty:               404 kB' 'Writeback:             0 kB' 'AnonPages:        160328 kB' 'Mapped:            68012 kB' 'Shmem:              2596 kB' 'KReclaimable:     234040 kB' 'Slab:             302256 kB' 'SReclaimable:     234040 kB' 'SUnreclaim:        68216 kB' 'KernelStack:        4432 kB' 'PageTables:         3616 kB' 'NFS_Unstable:          0 kB' 'Bounce:                0 kB' 'WritebackTmp:          0 kB' 'CommitLimit:     5072908 kB' 'Committed_AS:     506076 kB' 'VmallocTotal:   34359738367 kB' 'VmallocUsed:       19612 kB' 'VmallocChunk:          0 kB' 'Percpu:             8256 kB' 'HardwareCorrupted:     0 kB' 'AnonHugePages:         0 kB' 'ShmemHugePages:        0 kB' 'ShmemPmdMapped:        0 kB' 'FileHugePages:         0 kB' 'FilePmdMapped:         0 kB' 'HugePages_Total:    1024' 'HugePages_Free:     1024' 'HugePages_Rsvd:        0' 'HugePages_Surp:        0' 'Hugepagesize:       2048 kB' 'Hugetlb:         2097152 kB' 'DirectMap4k:      141164 kB' 'DirectMap2M:     4052992 kB' 'DirectMap1G:    10485760 kB'
00:05:06.301    16:48:59	-- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:06.301    16:48:59	-- setup/common.sh@32 -- # continue
00:05:06.301    16:48:59	-- setup/common.sh@31 -- # IFS=': '
00:05:06.301    16:48:59	-- setup/common.sh@31 -- # read -r var val _
00:05:06.301    16:48:59	-- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:06.301    16:48:59	-- setup/common.sh@32 -- # continue
00:05:06.301    16:48:59	-- setup/common.sh@31 -- # IFS=': '
00:05:06.301    16:48:59	-- setup/common.sh@31 -- # read -r var val _
00:05:06.301    16:48:59	-- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:06.301    16:48:59	-- setup/common.sh@32 -- # continue
00:05:06.301    16:48:59	-- setup/common.sh@31 -- # IFS=': '
00:05:06.301    16:48:59	-- setup/common.sh@31 -- # read -r var val _
00:05:06.301    16:48:59	-- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:06.301    16:48:59	-- setup/common.sh@32 -- # continue
00:05:06.302    16:48:59	-- setup/common.sh@31 -- # IFS=': '
00:05:06.302    16:48:59	-- setup/common.sh@31 -- # read -r var val _
00:05:06.302    16:48:59	-- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:06.302    16:48:59	-- setup/common.sh@32 -- # continue
00:05:06.302    16:48:59	-- setup/common.sh@31 -- # IFS=': '
00:05:06.302    16:48:59	-- setup/common.sh@31 -- # read -r var val _
00:05:06.302    16:48:59	-- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:06.302    16:48:59	-- setup/common.sh@32 -- # continue
00:05:06.302    16:48:59	-- setup/common.sh@31 -- # IFS=': '
00:05:06.302    16:48:59	-- setup/common.sh@31 -- # read -r var val _
00:05:06.302    16:48:59	-- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:06.302    16:48:59	-- setup/common.sh@32 -- # continue
00:05:06.302    16:48:59	-- setup/common.sh@31 -- # IFS=': '
00:05:06.302    16:48:59	-- setup/common.sh@31 -- # read -r var val _
00:05:06.302    16:48:59	-- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:06.302    16:48:59	-- setup/common.sh@32 -- # continue
00:05:06.302    16:48:59	-- setup/common.sh@31 -- # IFS=': '
00:05:06.302    16:48:59	-- setup/common.sh@31 -- # read -r var val _
00:05:06.302    16:48:59	-- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:06.302    16:48:59	-- setup/common.sh@32 -- # continue
00:05:06.302    16:48:59	-- setup/common.sh@31 -- # IFS=': '
00:05:06.302    16:48:59	-- setup/common.sh@31 -- # read -r var val _
00:05:06.302    16:48:59	-- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:06.302    16:48:59	-- setup/common.sh@32 -- # continue
00:05:06.302    16:48:59	-- setup/common.sh@31 -- # IFS=': '
00:05:06.302    16:48:59	-- setup/common.sh@31 -- # read -r var val _
00:05:06.302    16:48:59	-- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:06.302    16:48:59	-- setup/common.sh@32 -- # continue
00:05:06.302    16:48:59	-- setup/common.sh@31 -- # IFS=': '
00:05:06.302    16:48:59	-- setup/common.sh@31 -- # read -r var val _
00:05:06.302    16:48:59	-- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:06.302    16:48:59	-- setup/common.sh@32 -- # continue
00:05:06.302    16:48:59	-- setup/common.sh@31 -- # IFS=': '
00:05:06.302    16:48:59	-- setup/common.sh@31 -- # read -r var val _
00:05:06.302    16:48:59	-- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:06.302    16:48:59	-- setup/common.sh@32 -- # continue
00:05:06.302    16:48:59	-- setup/common.sh@31 -- # IFS=': '
00:05:06.302    16:48:59	-- setup/common.sh@31 -- # read -r var val _
00:05:06.302    16:48:59	-- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:06.302    16:48:59	-- setup/common.sh@32 -- # continue
00:05:06.302    16:48:59	-- setup/common.sh@31 -- # IFS=': '
00:05:06.302    16:48:59	-- setup/common.sh@31 -- # read -r var val _
00:05:06.302    16:48:59	-- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:06.302    16:48:59	-- setup/common.sh@32 -- # continue
00:05:06.302    16:48:59	-- setup/common.sh@31 -- # IFS=': '
00:05:06.302    16:48:59	-- setup/common.sh@31 -- # read -r var val _
00:05:06.302    16:48:59	-- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:06.302    16:48:59	-- setup/common.sh@32 -- # continue
00:05:06.302    16:48:59	-- setup/common.sh@31 -- # IFS=': '
00:05:06.302    16:48:59	-- setup/common.sh@31 -- # read -r var val _
00:05:06.302    16:48:59	-- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:06.302    16:48:59	-- setup/common.sh@32 -- # continue
00:05:06.302    16:48:59	-- setup/common.sh@31 -- # IFS=': '
00:05:06.302    16:48:59	-- setup/common.sh@31 -- # read -r var val _
00:05:06.302    16:48:59	-- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:06.302    16:48:59	-- setup/common.sh@32 -- # continue
00:05:06.302    16:48:59	-- setup/common.sh@31 -- # IFS=': '
00:05:06.302    16:48:59	-- setup/common.sh@31 -- # read -r var val _
00:05:06.302    16:48:59	-- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:06.302    16:48:59	-- setup/common.sh@32 -- # continue
00:05:06.302    16:48:59	-- setup/common.sh@31 -- # IFS=': '
00:05:06.302    16:48:59	-- setup/common.sh@31 -- # read -r var val _
00:05:06.302    16:48:59	-- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:06.302    16:48:59	-- setup/common.sh@32 -- # continue
00:05:06.302    16:48:59	-- setup/common.sh@31 -- # IFS=': '
00:05:06.302    16:48:59	-- setup/common.sh@31 -- # read -r var val _
00:05:06.302    16:48:59	-- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:06.302    16:48:59	-- setup/common.sh@32 -- # continue
00:05:06.302    16:48:59	-- setup/common.sh@31 -- # IFS=': '
00:05:06.302    16:48:59	-- setup/common.sh@31 -- # read -r var val _
00:05:06.302    16:48:59	-- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:06.302    16:48:59	-- setup/common.sh@32 -- # continue
00:05:06.302    16:48:59	-- setup/common.sh@31 -- # IFS=': '
00:05:06.302    16:48:59	-- setup/common.sh@31 -- # read -r var val _
00:05:06.302    16:48:59	-- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:06.302    16:48:59	-- setup/common.sh@32 -- # continue
00:05:06.302    16:48:59	-- setup/common.sh@31 -- # IFS=': '
00:05:06.302    16:48:59	-- setup/common.sh@31 -- # read -r var val _
00:05:06.302    16:48:59	-- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:06.302    16:48:59	-- setup/common.sh@32 -- # continue
00:05:06.302    16:48:59	-- setup/common.sh@31 -- # IFS=': '
00:05:06.302    16:48:59	-- setup/common.sh@31 -- # read -r var val _
00:05:06.302    16:48:59	-- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:06.302    16:48:59	-- setup/common.sh@32 -- # continue
00:05:06.302    16:48:59	-- setup/common.sh@31 -- # IFS=': '
00:05:06.302    16:48:59	-- setup/common.sh@31 -- # read -r var val _
00:05:06.302    16:48:59	-- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:06.302    16:48:59	-- setup/common.sh@32 -- # continue
00:05:06.302    16:48:59	-- setup/common.sh@31 -- # IFS=': '
00:05:06.302    16:48:59	-- setup/common.sh@31 -- # read -r var val _
00:05:06.302    16:48:59	-- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:06.302    16:48:59	-- setup/common.sh@32 -- # continue
00:05:06.302    16:48:59	-- setup/common.sh@31 -- # IFS=': '
00:05:06.302    16:48:59	-- setup/common.sh@31 -- # read -r var val _
00:05:06.302    16:48:59	-- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:06.302    16:48:59	-- setup/common.sh@32 -- # continue
00:05:06.302    16:48:59	-- setup/common.sh@31 -- # IFS=': '
00:05:06.302    16:48:59	-- setup/common.sh@31 -- # read -r var val _
00:05:06.302    16:48:59	-- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:06.302    16:48:59	-- setup/common.sh@32 -- # continue
00:05:06.302    16:48:59	-- setup/common.sh@31 -- # IFS=': '
00:05:06.302    16:48:59	-- setup/common.sh@31 -- # read -r var val _
00:05:06.302    16:48:59	-- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:06.302    16:48:59	-- setup/common.sh@32 -- # continue
00:05:06.302    16:48:59	-- setup/common.sh@31 -- # IFS=': '
00:05:06.302    16:48:59	-- setup/common.sh@31 -- # read -r var val _
00:05:06.302    16:48:59	-- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:06.302    16:48:59	-- setup/common.sh@32 -- # continue
00:05:06.302    16:48:59	-- setup/common.sh@31 -- # IFS=': '
00:05:06.302    16:48:59	-- setup/common.sh@31 -- # read -r var val _
00:05:06.302    16:48:59	-- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:06.302    16:48:59	-- setup/common.sh@32 -- # continue
00:05:06.302    16:48:59	-- setup/common.sh@31 -- # IFS=': '
00:05:06.302    16:48:59	-- setup/common.sh@31 -- # read -r var val _
00:05:06.302    16:48:59	-- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:06.302    16:48:59	-- setup/common.sh@32 -- # continue
00:05:06.302    16:48:59	-- setup/common.sh@31 -- # IFS=': '
00:05:06.302    16:48:59	-- setup/common.sh@31 -- # read -r var val _
00:05:06.302    16:48:59	-- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:06.302    16:48:59	-- setup/common.sh@32 -- # continue
00:05:06.302    16:48:59	-- setup/common.sh@31 -- # IFS=': '
00:05:06.302    16:48:59	-- setup/common.sh@31 -- # read -r var val _
00:05:06.302    16:48:59	-- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:06.302    16:48:59	-- setup/common.sh@32 -- # continue
00:05:06.302    16:48:59	-- setup/common.sh@31 -- # IFS=': '
00:05:06.302    16:48:59	-- setup/common.sh@31 -- # read -r var val _
00:05:06.302    16:48:59	-- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:06.302    16:48:59	-- setup/common.sh@32 -- # continue
00:05:06.302    16:48:59	-- setup/common.sh@31 -- # IFS=': '
00:05:06.302    16:48:59	-- setup/common.sh@31 -- # read -r var val _
00:05:06.302    16:48:59	-- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:06.302    16:48:59	-- setup/common.sh@32 -- # continue
00:05:06.302    16:48:59	-- setup/common.sh@31 -- # IFS=': '
00:05:06.302    16:48:59	-- setup/common.sh@31 -- # read -r var val _
00:05:06.302    16:48:59	-- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:06.302    16:48:59	-- setup/common.sh@33 -- # echo 0
00:05:06.302    16:48:59	-- setup/common.sh@33 -- # return 0
00:05:06.302   16:48:59	-- setup/hugepages.sh@97 -- # anon=0
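verify_nr_hugepages repeats the same get_meminfo scan three times, AnonHugePages above and then HugePages_Surp and HugePages_Rsvd below, so each block of roughly 190 trace lines reduces to a single number (anon=0, surp=0, resv=0). For reference, the same three values read directly; this grep-style shortcut is not what the script itself does:

    awk '/^(AnonHugePages|HugePages_Surp|HugePages_Rsvd):/ { print $1, $2 }' /proc/meminfo
    # AnonHugePages: 0   -> anon=0
    # HugePages_Surp: 0  -> surp=0
    # HugePages_Rsvd: 0  -> resv=0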
00:05:06.302    16:48:59	-- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:06.302    16:48:59	-- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:06.302    16:48:59	-- setup/common.sh@18 -- # local node=
00:05:06.302    16:48:59	-- setup/common.sh@19 -- # local var val
00:05:06.302    16:48:59	-- setup/common.sh@20 -- # local mem_f mem
00:05:06.302    16:48:59	-- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:06.302    16:48:59	-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:06.302    16:48:59	-- setup/common.sh@25 -- # [[ -n '' ]]
00:05:06.302    16:48:59	-- setup/common.sh@28 -- # mapfile -t mem
00:05:06.302    16:48:59	-- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:06.302    16:48:59	-- setup/common.sh@31 -- # IFS=': '
00:05:06.302    16:48:59	-- setup/common.sh@31 -- # read -r var val _
00:05:06.303     16:48:59	-- setup/common.sh@16 -- # printf '%s\n' 'MemTotal:       12242972 kB' 'MemFree:         4212888 kB' 'MemAvailable:    9488928 kB' 'Buffers:           39848 kB' 'Cached:          5335116 kB' 'SwapCached:            0 kB' 'Active:          1375448 kB' 'Inactive:        4130352 kB' 'Active(anon):       1060 kB' 'Inactive(anon):   141436 kB' 'Active(file):    1374388 kB' 'Inactive(file):  3988916 kB' 'Unevictable:       29168 kB' 'Mlocked:           27632 kB' 'SwapTotal:             0 kB' 'SwapFree:              0 kB' 'Dirty:               404 kB' 'Writeback:             0 kB' 'AnonPages:        160148 kB' 'Mapped:            68012 kB' 'Shmem:              2596 kB' 'KReclaimable:     234040 kB' 'Slab:             302256 kB' 'SReclaimable:     234040 kB' 'SUnreclaim:        68216 kB' 'KernelStack:        4496 kB' 'PageTables:         3788 kB' 'NFS_Unstable:          0 kB' 'Bounce:                0 kB' 'WritebackTmp:          0 kB' 'CommitLimit:     5072908 kB' 'Committed_AS:     506076 kB' 'VmallocTotal:   34359738367 kB' 'VmallocUsed:       19612 kB' 'VmallocChunk:          0 kB' 'Percpu:             8256 kB' 'HardwareCorrupted:     0 kB' 'AnonHugePages:         0 kB' 'ShmemHugePages:        0 kB' 'ShmemPmdMapped:        0 kB' 'FileHugePages:         0 kB' 'FilePmdMapped:         0 kB' 'HugePages_Total:    1024' 'HugePages_Free:     1024' 'HugePages_Rsvd:        0' 'HugePages_Surp:        0' 'Hugepagesize:       2048 kB' 'Hugetlb:         2097152 kB' 'DirectMap4k:      141164 kB' 'DirectMap2M:     4052992 kB' 'DirectMap1G:    10485760 kB'
00:05:06.303    16:48:59	-- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:06.303    16:48:59	-- setup/common.sh@32 -- # continue
00:05:06.303    16:48:59	-- setup/common.sh@31 -- # IFS=': '
00:05:06.303    16:48:59	-- setup/common.sh@31 -- # read -r var val _
00:05:06.303    16:48:59	-- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:06.303    16:48:59	-- setup/common.sh@32 -- # continue
00:05:06.303    16:48:59	-- setup/common.sh@31 -- # IFS=': '
00:05:06.303    16:48:59	-- setup/common.sh@31 -- # read -r var val _
00:05:06.303    16:48:59	-- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:06.303    16:48:59	-- setup/common.sh@32 -- # continue
00:05:06.303    16:48:59	-- setup/common.sh@31 -- # IFS=': '
00:05:06.303    16:48:59	-- setup/common.sh@31 -- # read -r var val _
00:05:06.303    16:48:59	-- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:06.303    16:48:59	-- setup/common.sh@32 -- # continue
00:05:06.303    16:48:59	-- setup/common.sh@31 -- # IFS=': '
00:05:06.303    16:48:59	-- setup/common.sh@31 -- # read -r var val _
00:05:06.303    16:48:59	-- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:06.303    16:48:59	-- setup/common.sh@32 -- # continue
00:05:06.303    16:48:59	-- setup/common.sh@31 -- # IFS=': '
00:05:06.303    16:48:59	-- setup/common.sh@31 -- # read -r var val _
00:05:06.303    16:48:59	-- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:06.303    16:48:59	-- setup/common.sh@32 -- # continue
00:05:06.303    16:48:59	-- setup/common.sh@31 -- # IFS=': '
00:05:06.303    16:48:59	-- setup/common.sh@31 -- # read -r var val _
00:05:06.303    16:48:59	-- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:06.303    16:48:59	-- setup/common.sh@32 -- # continue
00:05:06.303    16:48:59	-- setup/common.sh@31 -- # IFS=': '
00:05:06.303    16:48:59	-- setup/common.sh@31 -- # read -r var val _
00:05:06.303    16:48:59	-- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:06.303    16:48:59	-- setup/common.sh@32 -- # continue
00:05:06.303    16:48:59	-- setup/common.sh@31 -- # IFS=': '
00:05:06.303    16:48:59	-- setup/common.sh@31 -- # read -r var val _
00:05:06.303    16:48:59	-- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:06.303    16:48:59	-- setup/common.sh@32 -- # continue
00:05:06.303    16:48:59	-- setup/common.sh@31 -- # IFS=': '
00:05:06.303    16:48:59	-- setup/common.sh@31 -- # read -r var val _
00:05:06.303    16:48:59	-- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:06.303    16:48:59	-- setup/common.sh@32 -- # continue
00:05:06.303    16:48:59	-- setup/common.sh@31 -- # IFS=': '
00:05:06.303    16:48:59	-- setup/common.sh@31 -- # read -r var val _
00:05:06.303    16:48:59	-- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:06.303    16:48:59	-- setup/common.sh@32 -- # continue
00:05:06.303    16:48:59	-- setup/common.sh@31 -- # IFS=': '
00:05:06.303    16:48:59	-- setup/common.sh@31 -- # read -r var val _
00:05:06.303    16:48:59	-- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:06.303    16:48:59	-- setup/common.sh@32 -- # continue
00:05:06.303    16:48:59	-- setup/common.sh@31 -- # IFS=': '
00:05:06.303    16:48:59	-- setup/common.sh@31 -- # read -r var val _
00:05:06.303    16:48:59	-- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:06.303    16:48:59	-- setup/common.sh@32 -- # continue
00:05:06.303    16:48:59	-- setup/common.sh@31 -- # IFS=': '
00:05:06.303    16:48:59	-- setup/common.sh@31 -- # read -r var val _
00:05:06.303    16:48:59	-- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:06.303    16:48:59	-- setup/common.sh@32 -- # continue
00:05:06.303    16:48:59	-- setup/common.sh@31 -- # IFS=': '
00:05:06.303    16:48:59	-- setup/common.sh@31 -- # read -r var val _
00:05:06.303    16:48:59	-- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:06.303    16:48:59	-- setup/common.sh@32 -- # continue
00:05:06.303    16:48:59	-- setup/common.sh@31 -- # IFS=': '
00:05:06.303    16:48:59	-- setup/common.sh@31 -- # read -r var val _
00:05:06.303    16:48:59	-- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:06.303    16:48:59	-- setup/common.sh@32 -- # continue
00:05:06.303    16:48:59	-- setup/common.sh@31 -- # IFS=': '
00:05:06.303    16:48:59	-- setup/common.sh@31 -- # read -r var val _
00:05:06.303    16:48:59	-- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:06.303    16:48:59	-- setup/common.sh@32 -- # continue
00:05:06.303    16:48:59	-- setup/common.sh@31 -- # IFS=': '
00:05:06.303    16:48:59	-- setup/common.sh@31 -- # read -r var val _
00:05:06.303    16:48:59	-- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:06.303    16:48:59	-- setup/common.sh@32 -- # continue
00:05:06.303    16:48:59	-- setup/common.sh@31 -- # IFS=': '
00:05:06.303    16:48:59	-- setup/common.sh@31 -- # read -r var val _
00:05:06.303    16:48:59	-- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:06.303    16:48:59	-- setup/common.sh@32 -- # continue
00:05:06.303    16:48:59	-- setup/common.sh@31 -- # IFS=': '
00:05:06.303    16:48:59	-- setup/common.sh@31 -- # read -r var val _
00:05:06.303    16:48:59	-- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:06.303    16:48:59	-- setup/common.sh@32 -- # continue
00:05:06.303    16:48:59	-- setup/common.sh@31 -- # IFS=': '
00:05:06.303    16:48:59	-- setup/common.sh@31 -- # read -r var val _
00:05:06.303    16:48:59	-- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:06.303    16:48:59	-- setup/common.sh@32 -- # continue
00:05:06.303    16:48:59	-- setup/common.sh@31 -- # IFS=': '
00:05:06.303    16:48:59	-- setup/common.sh@31 -- # read -r var val _
00:05:06.303    16:48:59	-- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:06.303    16:48:59	-- setup/common.sh@32 -- # continue
00:05:06.303    16:48:59	-- setup/common.sh@31 -- # IFS=': '
00:05:06.303    16:48:59	-- setup/common.sh@31 -- # read -r var val _
00:05:06.303    16:48:59	-- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:06.303    16:48:59	-- setup/common.sh@32 -- # continue
00:05:06.303    16:48:59	-- setup/common.sh@31 -- # IFS=': '
00:05:06.303    16:48:59	-- setup/common.sh@31 -- # read -r var val _
00:05:06.303    16:48:59	-- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:06.303    16:48:59	-- setup/common.sh@32 -- # continue
00:05:06.303    16:48:59	-- setup/common.sh@31 -- # IFS=': '
00:05:06.303    16:48:59	-- setup/common.sh@31 -- # read -r var val _
00:05:06.303    16:48:59	-- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:06.303    16:48:59	-- setup/common.sh@32 -- # continue
00:05:06.303    16:48:59	-- setup/common.sh@31 -- # IFS=': '
00:05:06.303    16:48:59	-- setup/common.sh@31 -- # read -r var val _
00:05:06.303    16:48:59	-- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:06.303    16:48:59	-- setup/common.sh@32 -- # continue
00:05:06.303    16:48:59	-- setup/common.sh@31 -- # IFS=': '
00:05:06.303    16:48:59	-- setup/common.sh@31 -- # read -r var val _
00:05:06.303    16:48:59	-- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:06.303    16:48:59	-- setup/common.sh@32 -- # continue
00:05:06.303    16:48:59	-- setup/common.sh@31 -- # IFS=': '
00:05:06.303    16:48:59	-- setup/common.sh@31 -- # read -r var val _
00:05:06.303    16:48:59	-- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:06.303    16:48:59	-- setup/common.sh@32 -- # continue
00:05:06.303    16:48:59	-- setup/common.sh@31 -- # IFS=': '
00:05:06.303    16:48:59	-- setup/common.sh@31 -- # read -r var val _
00:05:06.303    16:48:59	-- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:06.303    16:48:59	-- setup/common.sh@32 -- # continue
00:05:06.303    16:48:59	-- setup/common.sh@31 -- # IFS=': '
00:05:06.303    16:48:59	-- setup/common.sh@31 -- # read -r var val _
00:05:06.303    16:48:59	-- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:06.303    16:48:59	-- setup/common.sh@32 -- # continue
00:05:06.303    16:48:59	-- setup/common.sh@31 -- # IFS=': '
00:05:06.303    16:48:59	-- setup/common.sh@31 -- # read -r var val _
00:05:06.303    16:48:59	-- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:06.303    16:48:59	-- setup/common.sh@32 -- # continue
00:05:06.303    16:48:59	-- setup/common.sh@31 -- # IFS=': '
00:05:06.303    16:48:59	-- setup/common.sh@31 -- # read -r var val _
00:05:06.303    16:48:59	-- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:06.303    16:48:59	-- setup/common.sh@32 -- # continue
00:05:06.303    16:48:59	-- setup/common.sh@31 -- # IFS=': '
00:05:06.303    16:48:59	-- setup/common.sh@31 -- # read -r var val _
00:05:06.303    16:48:59	-- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:06.303    16:48:59	-- setup/common.sh@32 -- # continue
00:05:06.303    16:48:59	-- setup/common.sh@31 -- # IFS=': '
00:05:06.303    16:48:59	-- setup/common.sh@31 -- # read -r var val _
00:05:06.303    16:48:59	-- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:06.303    16:48:59	-- setup/common.sh@32 -- # continue
00:05:06.303    16:48:59	-- setup/common.sh@31 -- # IFS=': '
00:05:06.303    16:48:59	-- setup/common.sh@31 -- # read -r var val _
00:05:06.303    16:48:59	-- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:06.303    16:48:59	-- setup/common.sh@32 -- # continue
00:05:06.303    16:48:59	-- setup/common.sh@31 -- # IFS=': '
00:05:06.303    16:48:59	-- setup/common.sh@31 -- # read -r var val _
00:05:06.303    16:48:59	-- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:06.303    16:48:59	-- setup/common.sh@32 -- # continue
00:05:06.303    16:48:59	-- setup/common.sh@31 -- # IFS=': '
00:05:06.303    16:48:59	-- setup/common.sh@31 -- # read -r var val _
00:05:06.303    16:48:59	-- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:06.303    16:48:59	-- setup/common.sh@32 -- # continue
00:05:06.303    16:48:59	-- setup/common.sh@31 -- # IFS=': '
00:05:06.303    16:48:59	-- setup/common.sh@31 -- # read -r var val _
00:05:06.303    16:48:59	-- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:06.303    16:48:59	-- setup/common.sh@32 -- # continue
00:05:06.303    16:48:59	-- setup/common.sh@31 -- # IFS=': '
00:05:06.303    16:48:59	-- setup/common.sh@31 -- # read -r var val _
00:05:06.303    16:48:59	-- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:06.303    16:48:59	-- setup/common.sh@32 -- # continue
00:05:06.303    16:48:59	-- setup/common.sh@31 -- # IFS=': '
00:05:06.303    16:48:59	-- setup/common.sh@31 -- # read -r var val _
00:05:06.303    16:48:59	-- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:06.303    16:48:59	-- setup/common.sh@32 -- # continue
00:05:06.303    16:48:59	-- setup/common.sh@31 -- # IFS=': '
00:05:06.303    16:48:59	-- setup/common.sh@31 -- # read -r var val _
00:05:06.304    16:48:59	-- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:06.304    16:48:59	-- setup/common.sh@32 -- # continue
00:05:06.304    16:48:59	-- setup/common.sh@31 -- # IFS=': '
00:05:06.304    16:48:59	-- setup/common.sh@31 -- # read -r var val _
00:05:06.304    16:48:59	-- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:06.304    16:48:59	-- setup/common.sh@32 -- # continue
00:05:06.304    16:48:59	-- setup/common.sh@31 -- # IFS=': '
00:05:06.304    16:48:59	-- setup/common.sh@31 -- # read -r var val _
00:05:06.304    16:48:59	-- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:06.304    16:48:59	-- setup/common.sh@32 -- # continue
00:05:06.304    16:48:59	-- setup/common.sh@31 -- # IFS=': '
00:05:06.304    16:48:59	-- setup/common.sh@31 -- # read -r var val _
00:05:06.304    16:48:59	-- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:06.304    16:48:59	-- setup/common.sh@32 -- # continue
00:05:06.304    16:48:59	-- setup/common.sh@31 -- # IFS=': '
00:05:06.304    16:48:59	-- setup/common.sh@31 -- # read -r var val _
00:05:06.304    16:48:59	-- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:06.304    16:48:59	-- setup/common.sh@32 -- # continue
00:05:06.304    16:48:59	-- setup/common.sh@31 -- # IFS=': '
00:05:06.304    16:48:59	-- setup/common.sh@31 -- # read -r var val _
00:05:06.304    16:48:59	-- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:06.304    16:48:59	-- setup/common.sh@33 -- # echo 0
00:05:06.304    16:48:59	-- setup/common.sh@33 -- # return 0
00:05:06.304   16:48:59	-- setup/hugepages.sh@99 -- # surp=0
00:05:06.304    16:48:59	-- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:06.304    16:48:59	-- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:05:06.304    16:48:59	-- setup/common.sh@18 -- # local node=
00:05:06.304    16:48:59	-- setup/common.sh@19 -- # local var val
00:05:06.304    16:48:59	-- setup/common.sh@20 -- # local mem_f mem
00:05:06.304    16:48:59	-- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:06.304    16:48:59	-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:06.304    16:48:59	-- setup/common.sh@25 -- # [[ -n '' ]]
00:05:06.304    16:48:59	-- setup/common.sh@28 -- # mapfile -t mem
00:05:06.304    16:48:59	-- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:06.304    16:48:59	-- setup/common.sh@31 -- # IFS=': '
00:05:06.304    16:48:59	-- setup/common.sh@31 -- # read -r var val _
00:05:06.304     16:48:59	-- setup/common.sh@16 -- # printf '%s\n' 'MemTotal:       12242972 kB' 'MemFree:         4213896 kB' 'MemAvailable:    9489936 kB' 'Buffers:           39848 kB' 'Cached:          5335116 kB' 'SwapCached:            0 kB' 'Active:          1375448 kB' 'Inactive:        4130368 kB' 'Active(anon):       1060 kB' 'Inactive(anon):   141452 kB' 'Active(file):    1374388 kB' 'Inactive(file):  3988916 kB' 'Unevictable:       29168 kB' 'Mlocked:           27632 kB' 'SwapTotal:             0 kB' 'SwapFree:              0 kB' 'Dirty:               404 kB' 'Writeback:             0 kB' 'AnonPages:        160124 kB' 'Mapped:            68012 kB' 'Shmem:              2596 kB' 'KReclaimable:     234040 kB' 'Slab:             302256 kB' 'SReclaimable:     234040 kB' 'SUnreclaim:        68216 kB' 'KernelStack:        4400 kB' 'PageTables:         3536 kB' 'NFS_Unstable:          0 kB' 'Bounce:                0 kB' 'WritebackTmp:          0 kB' 'CommitLimit:     5072908 kB' 'Committed_AS:     506076 kB' 'VmallocTotal:   34359738367 kB' 'VmallocUsed:       19612 kB' 'VmallocChunk:          0 kB' 'Percpu:             8256 kB' 'HardwareCorrupted:     0 kB' 'AnonHugePages:         0 kB' 'ShmemHugePages:        0 kB' 'ShmemPmdMapped:        0 kB' 'FileHugePages:         0 kB' 'FilePmdMapped:         0 kB' 'HugePages_Total:    1024' 'HugePages_Free:     1024' 'HugePages_Rsvd:        0' 'HugePages_Surp:        0' 'Hugepagesize:       2048 kB' 'Hugetlb:         2097152 kB' 'DirectMap4k:      141164 kB' 'DirectMap2M:     4052992 kB' 'DirectMap1G:    10485760 kB'
00:05:06.304    16:48:59	-- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:06.304    16:48:59	-- setup/common.sh@32 -- # continue
00:05:06.304    16:48:59	-- setup/common.sh@31 -- # IFS=': '
00:05:06.304    16:48:59	-- setup/common.sh@31 -- # read -r var val _
00:05:06.304    16:48:59	-- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:06.304    16:48:59	-- setup/common.sh@32 -- # continue
00:05:06.304    16:48:59	-- setup/common.sh@31 -- # IFS=': '
00:05:06.304    16:48:59	-- setup/common.sh@31 -- # read -r var val _
00:05:06.304    16:48:59	-- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:06.304    16:48:59	-- setup/common.sh@32 -- # continue
00:05:06.304    16:48:59	-- setup/common.sh@31 -- # IFS=': '
00:05:06.304    16:48:59	-- setup/common.sh@31 -- # read -r var val _
00:05:06.304    16:48:59	-- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:06.304    16:48:59	-- setup/common.sh@32 -- # continue
00:05:06.304    16:48:59	-- setup/common.sh@31 -- # IFS=': '
00:05:06.304    16:48:59	-- setup/common.sh@31 -- # read -r var val _
00:05:06.304    16:48:59	-- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:06.304    16:48:59	-- setup/common.sh@32 -- # continue
00:05:06.304    16:48:59	-- setup/common.sh@31 -- # IFS=': '
00:05:06.304    16:48:59	-- setup/common.sh@31 -- # read -r var val _
00:05:06.304    16:48:59	-- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:06.304    16:48:59	-- setup/common.sh@32 -- # continue
00:05:06.304    16:48:59	-- setup/common.sh@31 -- # IFS=': '
00:05:06.304    16:48:59	-- setup/common.sh@31 -- # read -r var val _
00:05:06.304    16:48:59	-- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:06.304    16:48:59	-- setup/common.sh@32 -- # continue
00:05:06.304    16:48:59	-- setup/common.sh@31 -- # IFS=': '
00:05:06.304    16:48:59	-- setup/common.sh@31 -- # read -r var val _
00:05:06.304    16:48:59	-- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:06.304    16:48:59	-- setup/common.sh@32 -- # continue
00:05:06.304    16:48:59	-- setup/common.sh@31 -- # IFS=': '
00:05:06.304    16:48:59	-- setup/common.sh@31 -- # read -r var val _
00:05:06.304    16:48:59	-- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:06.304    16:48:59	-- setup/common.sh@32 -- # continue
00:05:06.304    16:48:59	-- setup/common.sh@31 -- # IFS=': '
00:05:06.304    16:48:59	-- setup/common.sh@31 -- # read -r var val _
00:05:06.304    16:48:59	-- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:06.304    16:48:59	-- setup/common.sh@32 -- # continue
00:05:06.304    16:48:59	-- setup/common.sh@31 -- # IFS=': '
00:05:06.304    16:48:59	-- setup/common.sh@31 -- # read -r var val _
00:05:06.304    16:48:59	-- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:06.304    16:48:59	-- setup/common.sh@32 -- # continue
00:05:06.304    16:48:59	-- setup/common.sh@31 -- # IFS=': '
00:05:06.304    16:48:59	-- setup/common.sh@31 -- # read -r var val _
00:05:06.304    16:48:59	-- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:06.304    16:48:59	-- setup/common.sh@32 -- # continue
00:05:06.304    16:48:59	-- setup/common.sh@31 -- # IFS=': '
00:05:06.304    16:48:59	-- setup/common.sh@31 -- # read -r var val _
00:05:06.304    16:48:59	-- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:06.304    16:48:59	-- setup/common.sh@32 -- # continue
00:05:06.304    16:48:59	-- setup/common.sh@31 -- # IFS=': '
00:05:06.304    16:48:59	-- setup/common.sh@31 -- # read -r var val _
00:05:06.304    16:48:59	-- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:06.304    16:48:59	-- setup/common.sh@32 -- # continue
00:05:06.304    16:48:59	-- setup/common.sh@31 -- # IFS=': '
00:05:06.304    16:48:59	-- setup/common.sh@31 -- # read -r var val _
00:05:06.304    16:48:59	-- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:06.304    16:48:59	-- setup/common.sh@32 -- # continue
00:05:06.304    16:48:59	-- setup/common.sh@31 -- # IFS=': '
00:05:06.304    16:48:59	-- setup/common.sh@31 -- # read -r var val _
00:05:06.304    16:48:59	-- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:06.304    16:48:59	-- setup/common.sh@32 -- # continue
00:05:06.304    16:48:59	-- setup/common.sh@31 -- # IFS=': '
00:05:06.304    16:48:59	-- setup/common.sh@31 -- # read -r var val _
00:05:06.304    16:48:59	-- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:06.304    16:48:59	-- setup/common.sh@32 -- # continue
00:05:06.304    16:48:59	-- setup/common.sh@31 -- # IFS=': '
00:05:06.304    16:48:59	-- setup/common.sh@31 -- # read -r var val _
00:05:06.304    16:48:59	-- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:06.304    16:48:59	-- setup/common.sh@32 -- # continue
00:05:06.304    16:48:59	-- setup/common.sh@31 -- # IFS=': '
00:05:06.304    16:48:59	-- setup/common.sh@31 -- # read -r var val _
00:05:06.304    16:48:59	-- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:06.304    16:48:59	-- setup/common.sh@32 -- # continue
00:05:06.304    16:48:59	-- setup/common.sh@31 -- # IFS=': '
00:05:06.304    16:48:59	-- setup/common.sh@31 -- # read -r var val _
00:05:06.304    16:48:59	-- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:06.304    16:48:59	-- setup/common.sh@32 -- # continue
00:05:06.304    16:48:59	-- setup/common.sh@31 -- # IFS=': '
00:05:06.304    16:48:59	-- setup/common.sh@31 -- # read -r var val _
00:05:06.304    16:48:59	-- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:06.304    16:48:59	-- setup/common.sh@32 -- # continue
00:05:06.304    16:48:59	-- setup/common.sh@31 -- # IFS=': '
00:05:06.304    16:48:59	-- setup/common.sh@31 -- # read -r var val _
00:05:06.304    16:48:59	-- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:06.304    16:48:59	-- setup/common.sh@32 -- # continue
00:05:06.304    16:48:59	-- setup/common.sh@31 -- # IFS=': '
00:05:06.304    16:48:59	-- setup/common.sh@31 -- # read -r var val _
00:05:06.304    16:48:59	-- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:06.304    16:48:59	-- setup/common.sh@32 -- # continue
00:05:06.304    16:48:59	-- setup/common.sh@31 -- # IFS=': '
00:05:06.304    16:48:59	-- setup/common.sh@31 -- # read -r var val _
00:05:06.304    16:48:59	-- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:06.304    16:48:59	-- setup/common.sh@32 -- # continue
00:05:06.304    16:48:59	-- setup/common.sh@31 -- # IFS=': '
00:05:06.304    16:48:59	-- setup/common.sh@31 -- # read -r var val _
00:05:06.304    16:48:59	-- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:06.304    16:48:59	-- setup/common.sh@32 -- # continue
00:05:06.305  ... [identical compare/continue xtrace repeats for each remaining /proc/meminfo field, KernelStack through HugePages_Free; none matches HugePages_Rsvd] ...
00:05:06.305    16:48:59	-- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:06.305    16:48:59	-- setup/common.sh@33 -- # echo 0
00:05:06.305    16:48:59	-- setup/common.sh@33 -- # return 0
00:05:06.305   16:48:59	-- setup/hugepages.sh@100 -- # resv=0
00:05:06.305  nr_hugepages=1024
00:05:06.305   16:48:59	-- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:05:06.305  resv_hugepages=0
00:05:06.305   16:48:59	-- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:05:06.305  surplus_hugepages=0
00:05:06.305   16:48:59	-- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:05:06.305  anon_hugepages=0
00:05:06.305   16:48:59	-- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:05:06.305   16:48:59	-- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:06.305   16:48:59	-- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
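The get_meminfo calls traced above are doing a keyed lookup over /proc/meminfo: split each line on ': ', compare the key against the requested field, and echo the value on a match. A minimal standalone sketch of the same idea (the function name and the direct file read are illustrative; the real setup/common.sh buffers the file into an array with mapfile first):

    #!/usr/bin/env bash
    # Sketch: look up one field in /proc/meminfo the way the traced loop
    # does -- split on ': ', compare the key, print the value on a match.
    get_meminfo_sketch() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            if [[ $var == "$get" ]]; then
                echo "$val"
                return 0
            fi
        done < /proc/meminfo
        return 1   # field not present
    }

    get_meminfo_sketch HugePages_Rsvd    # 0 in the run above
    get_meminfo_sketch HugePages_Total   # 1024 in the run above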
00:05:06.305    16:48:59	-- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:05:06.305    16:48:59	-- setup/common.sh@17 -- # local get=HugePages_Total
00:05:06.305    16:48:59	-- setup/common.sh@18 -- # local node=
00:05:06.305    16:48:59	-- setup/common.sh@19 -- # local var val
00:05:06.305    16:48:59	-- setup/common.sh@20 -- # local mem_f mem
00:05:06.305    16:48:59	-- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:06.305    16:48:59	-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:06.305    16:48:59	-- setup/common.sh@25 -- # [[ -n '' ]]
00:05:06.305    16:48:59	-- setup/common.sh@28 -- # mapfile -t mem
00:05:06.305    16:48:59	-- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:06.305    16:48:59	-- setup/common.sh@31 -- # IFS=': '
00:05:06.305    16:48:59	-- setup/common.sh@31 -- # read -r var val _
00:05:06.305     16:48:59	-- setup/common.sh@16 -- # printf '%s\n' 'MemTotal:       12242972 kB' 'MemFree:         4213392 kB' 'MemAvailable:    9489432 kB' 'Buffers:           39848 kB' 'Cached:          5335116 kB' 'SwapCached:            0 kB' 'Active:          1375448 kB' 'Inactive:        4130368 kB' 'Active(anon):       1060 kB' 'Inactive(anon):   141452 kB' 'Active(file):    1374388 kB' 'Inactive(file):  3988916 kB' 'Unevictable:       29168 kB' 'Mlocked:           27632 kB' 'SwapTotal:             0 kB' 'SwapFree:              0 kB' 'Dirty:               404 kB' 'Writeback:             0 kB' 'AnonPages:        160124 kB' 'Mapped:            68012 kB' 'Shmem:              2596 kB' 'KReclaimable:     234040 kB' 'Slab:             302256 kB' 'SReclaimable:     234040 kB' 'SUnreclaim:        68216 kB' 'KernelStack:        4468 kB' 'PageTables:         3536 kB' 'NFS_Unstable:          0 kB' 'Bounce:                0 kB' 'WritebackTmp:          0 kB' 'CommitLimit:     5072908 kB' 'Committed_AS:     506076 kB' 'VmallocTotal:   34359738367 kB' 'VmallocUsed:       19644 kB' 'VmallocChunk:          0 kB' 'Percpu:             8256 kB' 'HardwareCorrupted:     0 kB' 'AnonHugePages:         0 kB' 'ShmemHugePages:        0 kB' 'ShmemPmdMapped:        0 kB' 'FileHugePages:         0 kB' 'FilePmdMapped:         0 kB' 'HugePages_Total:    1024' 'HugePages_Free:     1024' 'HugePages_Rsvd:        0' 'HugePages_Surp:        0' 'Hugepagesize:       2048 kB' 'Hugetlb:         2097152 kB' 'DirectMap4k:      141164 kB' 'DirectMap2M:     4052992 kB' 'DirectMap1G:    10485760 kB'
00:05:06.305    16:48:59	-- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:06.305    16:48:59	-- setup/common.sh@32 -- # continue
00:05:06.306  ... [identical compare/continue xtrace repeats for each /proc/meminfo field, MemFree through FilePmdMapped; none matches HugePages_Total] ...
00:05:06.306    16:48:59	-- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:06.306    16:48:59	-- setup/common.sh@33 -- # echo 1024
00:05:06.306    16:48:59	-- setup/common.sh@33 -- # return 0
00:05:06.306   16:48:59	-- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:06.306   16:48:59	-- setup/hugepages.sh@112 -- # get_nodes
00:05:06.306   16:48:59	-- setup/hugepages.sh@27 -- # local node
00:05:06.306   16:48:59	-- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:06.306   16:48:59	-- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:05:06.306   16:48:59	-- setup/hugepages.sh@32 -- # no_nodes=1
00:05:06.306   16:48:59	-- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
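get_nodes, traced just above, discovers the NUMA layout from sysfs; no_nodes=1 because this VM exposes a single node. A sketch of that enumeration, assuming the standard 2048 kB hugepage subdirectory (the trace itself seeds nodes_sys with the already-known total rather than re-reading nr_hugepages):

    #!/usr/bin/env bash
    shopt -s extglob nullglob
    # Sketch: enumerate NUMA nodes and record each node's 2 MiB hugepage
    # total, mirroring the get_nodes loop traced above.
    declare -A nodes_sys
    for node in /sys/devices/system/node/node+([0-9]); do
        # ${node##*node} strips the path prefix, leaving the numeric node id
        nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
    done
    no_nodes=${#nodes_sys[@]}
    echo "no_nodes=$no_nodes"   # 1 on this single-node VM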
00:05:06.306   16:48:59	-- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:06.306   16:48:59	-- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:05:06.306    16:48:59	-- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:05:06.306    16:48:59	-- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:06.306    16:48:59	-- setup/common.sh@18 -- # local node=0
00:05:06.306    16:48:59	-- setup/common.sh@19 -- # local var val
00:05:06.306    16:48:59	-- setup/common.sh@20 -- # local mem_f mem
00:05:06.306    16:48:59	-- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:06.306    16:48:59	-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:05:06.306    16:48:59	-- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:05:06.306    16:48:59	-- setup/common.sh@28 -- # mapfile -t mem
00:05:06.306    16:48:59	-- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:06.306    16:48:59	-- setup/common.sh@31 -- # IFS=': '
00:05:06.307     16:48:59	-- setup/common.sh@16 -- # printf '%s\n' 'MemTotal:       12242972 kB' 'MemFree:         4212888 kB' 'MemUsed:         8030084 kB' 'SwapCached:            0 kB' 'Active:          1375440 kB' 'Inactive:        4130460 kB' 'Active(anon):       1052 kB' 'Inactive(anon):   141544 kB' 'Active(file):    1374388 kB' 'Inactive(file):  3988916 kB' 'Unevictable:       29168 kB' 'Mlocked:           27632 kB' 'Dirty:               404 kB' 'Writeback:             0 kB' 'FilePages:       5374964 kB' 'Mapped:            67984 kB' 'AnonPages:        159932 kB' 'Shmem:              2596 kB' 'KernelStack:        4468 kB' 'PageTables:         3780 kB' 'NFS_Unstable:          0 kB' 'Bounce:                0 kB' 'WritebackTmp:          0 kB' 'KReclaimable:     234040 kB' 'Slab:             302256 kB' 'SReclaimable:     234040 kB' 'SUnreclaim:        68216 kB' 'AnonHugePages:         0 kB' 'ShmemHugePages:        0 kB' 'ShmemPmdMapped:        0 kB' 'FileHugePages:        0 kB' 'FilePmdMapped:        0 kB' 'HugePages_Total:  1024' 'HugePages_Free:   1024' 'HugePages_Surp:      0'
00:05:06.307    16:48:59	-- setup/common.sh@31 -- # read -r var val _
00:05:06.307    16:48:59	-- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:06.307    16:48:59	-- setup/common.sh@32 -- # continue
00:05:06.307  ... [identical compare/continue xtrace repeats for each node0 meminfo field, MemFree through HugePages_Free; none matches HugePages_Surp] ...
00:05:06.307    16:48:59	-- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:06.307    16:48:59	-- setup/common.sh@33 -- # echo 0
00:05:06.307    16:48:59	-- setup/common.sh@33 -- # return 0
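The surplus figure just returned came from /sys/devices/system/node/node0/meminfo rather than /proc/meminfo; every line there carries a "Node <id> " prefix, which is what the mem=("${mem[@]#Node +([0-9]) }") expansion in the trace strips before the key/value parse. A minimal sketch of the same per-node read (the helper name is illustrative; the sysfs path is the standard kernel one):

    #!/usr/bin/env bash
    shopt -s extglob
    # Sketch: read one field from a node-local meminfo, stripping the
    # "Node <id> " prefix so the same key/value parse works as for
    # /proc/meminfo.
    node_meminfo_sketch() {
        local node=$1 get=$2 line var val _
        while read -r line; do
            line=${line#Node +([0-9]) }           # drop "Node 0 " prefix
            IFS=': ' read -r var val _ <<<"$line"
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < "/sys/devices/system/node/node${node}/meminfo"
        return 1
    }

    surp=$(node_meminfo_sketch 0 HugePages_Surp)
    echo "node0 surplus: ${surp:-n/a}"            # 0 in the run above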
00:05:06.307   16:48:59	-- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:06.307   16:48:59	-- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:06.307   16:48:59	-- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:06.307   16:48:59	-- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:06.307  node0=1024 expecting 1024
00:05:06.307   16:48:59	-- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:05:06.307   16:48:59	-- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:05:06.307  
00:05:06.307  real	0m1.649s
00:05:06.307  user	0m0.367s
00:05:06.307  sys	0m1.311s
00:05:06.307   16:48:59	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:05:06.307   16:48:59	-- common/autotest_common.sh@10 -- # set +x
00:05:06.307  ************************************
00:05:06.307  END TEST default_setup
00:05:06.308  ************************************
00:05:06.565   16:48:59	-- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc
00:05:06.565   16:48:59	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:05:06.565   16:48:59	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:05:06.565   16:48:59	-- common/autotest_common.sh@10 -- # set +x
00:05:06.565  ************************************
00:05:06.565  START TEST per_node_1G_alloc
00:05:06.565  ************************************
00:05:06.565   16:48:59	-- common/autotest_common.sh@1114 -- # per_node_1G_alloc
00:05:06.565   16:48:59	-- setup/hugepages.sh@143 -- # local IFS=,
00:05:06.565   16:48:59	-- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0
00:05:06.565   16:48:59	-- setup/hugepages.sh@49 -- # local size=1048576
00:05:06.565   16:48:59	-- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:05:06.565   16:48:59	-- setup/hugepages.sh@51 -- # shift
00:05:06.565   16:48:59	-- setup/hugepages.sh@52 -- # node_ids=('0')
00:05:06.565   16:48:59	-- setup/hugepages.sh@52 -- # local node_ids
00:05:06.565   16:48:59	-- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:05:06.565   16:48:59	-- setup/hugepages.sh@57 -- # nr_hugepages=512
00:05:06.565   16:48:59	-- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:05:06.565   16:48:59	-- setup/hugepages.sh@62 -- # user_nodes=('0')
00:05:06.565   16:48:59	-- setup/hugepages.sh@62 -- # local user_nodes
00:05:06.565   16:48:59	-- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:05:06.565   16:48:59	-- setup/hugepages.sh@65 -- # local _no_nodes=1
00:05:06.565   16:48:59	-- setup/hugepages.sh@67 -- # nodes_test=()
00:05:06.565   16:48:59	-- setup/hugepages.sh@67 -- # local -g nodes_test
00:05:06.565   16:48:59	-- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:05:06.565   16:48:59	-- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:05:06.565   16:48:59	-- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:05:06.565   16:48:59	-- setup/hugepages.sh@73 -- # return 0
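get_test_nr_hugepages, traced above, converts the requested per-node size (size=1048576 kB, i.e. 1 GiB) into a count of default-sized hugepages: with Hugepagesize: 2048 kB that is 1048576 / 2048 = 512, hence nr_hugepages=512. The same arithmetic as a standalone sketch (variable names are illustrative):

    #!/usr/bin/env bash
    # Sketch: convert a requested allocation size in kB into a number of
    # default-sized hugepages, as the trace does for 1 GiB -> 512 pages.
    size_kb=1048576                                                # 1 GiB per node
    default_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)  # 2048 here
    (( size_kb >= default_kb )) || { echo "size below one hugepage" >&2; exit 1; }
    nr_hugepages=$(( size_kb / default_kb ))
    echo "nr_hugepages=$nr_hugepages"                              # 512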
00:05:06.565   16:48:59	-- setup/hugepages.sh@146 -- # NRHUGE=512
00:05:06.565   16:48:59	-- setup/hugepages.sh@146 -- # HUGENODE=0
00:05:06.565   16:48:59	-- setup/hugepages.sh@146 -- # setup output
00:05:06.565   16:48:59	-- setup/common.sh@9 -- # [[ output == output ]]
00:05:06.565   16:48:59	-- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:05:06.823  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev
00:05:06.823  0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
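setup.sh runs here with NRHUGE=512 HUGENODE=0, i.e. reserve 512 hugepages on node 0 only. Outside the SPDK script, the equivalent effect comes down to a write to the standard per-node sysfs knob; a sketch of that, not the script's actual code:

    #!/usr/bin/env bash
    # Sketch: reserve NRHUGE 2 MiB hugepages on a single NUMA node, the
    # effect requested by "NRHUGE=512 HUGENODE=0 setup.sh" above.
    NRHUGE=${NRHUGE:-512}
    HUGENODE=${HUGENODE:-0}
    sysfs=/sys/devices/system/node/node${HUGENODE}/hugepages/hugepages-2048kB
    echo "$NRHUGE" > "$sysfs/nr_hugepages"        # needs root
    actual=$(< "$sysfs/nr_hugepages")
    echo "node${HUGENODE}: requested $NRHUGE, got $actual"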
00:05:07.083   16:48:59	-- setup/hugepages.sh@147 -- # nr_hugepages=512
00:05:07.083   16:48:59	-- setup/hugepages.sh@147 -- # verify_nr_hugepages
00:05:07.083   16:48:59	-- setup/hugepages.sh@89 -- # local node
00:05:07.083   16:48:59	-- setup/hugepages.sh@90 -- # local sorted_t
00:05:07.083   16:48:59	-- setup/hugepages.sh@91 -- # local sorted_s
00:05:07.083   16:48:59	-- setup/hugepages.sh@92 -- # local surp
00:05:07.083   16:48:59	-- setup/hugepages.sh@93 -- # local resv
00:05:07.083   16:48:59	-- setup/hugepages.sh@94 -- # local anon
00:05:07.083   16:48:59	-- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
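The bracket test above checks the contents of /sys/kernel/mm/transparent_hugepage/enabled ("always [madvise] never" on this VM), where the active mode is the bracketed token; only when THP is not pinned to [never] does the script go on to sample AnonHugePages. A sketch of extracting that mode:

    #!/usr/bin/env bash
    # Sketch: report the active transparent-hugepage mode, which the
    # bracketed-token test in the trace above is probing.
    thp=$(< /sys/kernel/mm/transparent_hugepage/enabled)  # e.g. "always [madvise] never"
    mode=${thp#*\[}; mode=${mode%%\]*}                    # extract the [bracketed] token
    echo "THP mode: $mode"
    if [[ $thp != *"[never]"* ]]; then
        echo "THP enabled -> AnonHugePages gets sampled as well"
    fi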
00:05:07.083    16:48:59	-- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:07.083    16:48:59	-- setup/common.sh@17 -- # local get=AnonHugePages
00:05:07.083    16:48:59	-- setup/common.sh@18 -- # local node=
00:05:07.083    16:48:59	-- setup/common.sh@19 -- # local var val
00:05:07.083    16:48:59	-- setup/common.sh@20 -- # local mem_f mem
00:05:07.083    16:48:59	-- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:07.083    16:48:59	-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:07.083    16:48:59	-- setup/common.sh@25 -- # [[ -n '' ]]
00:05:07.083    16:48:59	-- setup/common.sh@28 -- # mapfile -t mem
00:05:07.083    16:48:59	-- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:07.083    16:48:59	-- setup/common.sh@31 -- # IFS=': '
00:05:07.083    16:48:59	-- setup/common.sh@31 -- # read -r var val _
00:05:07.083     16:48:59	-- setup/common.sh@16 -- # printf '%s\n' 'MemTotal:       12242972 kB' 'MemFree:         5258808 kB' 'MemAvailable:   10534856 kB' 'Buffers:           39856 kB' 'Cached:          5335116 kB' 'SwapCached:            0 kB' 'Active:          1375460 kB' 'Inactive:        4131068 kB' 'Active(anon):       1052 kB' 'Inactive(anon):   142164 kB' 'Active(file):    1374408 kB' 'Inactive(file):  3988904 kB' 'Unevictable:       29168 kB' 'Mlocked:           27632 kB' 'SwapTotal:             0 kB' 'SwapFree:              0 kB' 'Dirty:               696 kB' 'Writeback:             0 kB' 'AnonPages:        160656 kB' 'Mapped:            68324 kB' 'Shmem:              2596 kB' 'KReclaimable:     234040 kB' 'Slab:             301936 kB' 'SReclaimable:     234040 kB' 'SUnreclaim:        67896 kB' 'KernelStack:        4588 kB' 'PageTables:         3820 kB' 'NFS_Unstable:          0 kB' 'Bounce:                0 kB' 'WritebackTmp:          0 kB' 'CommitLimit:     5597196 kB' 'Committed_AS:     506052 kB' 'VmallocTotal:   34359738367 kB' 'VmallocUsed:       19660 kB' 'VmallocChunk:          0 kB' 'Percpu:             8256 kB' 'HardwareCorrupted:     0 kB' 'AnonHugePages:         0 kB' 'ShmemHugePages:        0 kB' 'ShmemPmdMapped:        0 kB' 'FileHugePages:         0 kB' 'FilePmdMapped:         0 kB' 'HugePages_Total:     512' 'HugePages_Free:      512' 'HugePages_Rsvd:        0' 'HugePages_Surp:        0' 'Hugepagesize:       2048 kB' 'Hugetlb:         1048576 kB' 'DirectMap4k:      141164 kB' 'DirectMap2M:     4052992 kB' 'DirectMap1G:    10485760 kB'
00:05:07.083    16:48:59	-- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:07.083    16:48:59	-- setup/common.sh@32 -- # continue
00:05:07.084  ... [identical compare/continue xtrace repeats for each /proc/meminfo field, MemFree through HardwareCorrupted; none matches AnonHugePages] ...
00:05:07.084    16:48:59	-- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:07.084    16:48:59	-- setup/common.sh@33 -- # echo 0
00:05:07.084    16:48:59	-- setup/common.sh@33 -- # return 0
00:05:07.084   16:48:59	-- setup/hugepages.sh@97 -- # anon=0
00:05:07.084    16:48:59	-- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:07.084    16:48:59	-- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:07.084    16:48:59	-- setup/common.sh@18 -- # local node=
00:05:07.084    16:48:59	-- setup/common.sh@19 -- # local var val
00:05:07.084    16:48:59	-- setup/common.sh@20 -- # local mem_f mem
00:05:07.084    16:48:59	-- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:07.084    16:48:59	-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:07.084    16:48:59	-- setup/common.sh@25 -- # [[ -n '' ]]
00:05:07.084    16:48:59	-- setup/common.sh@28 -- # mapfile -t mem
00:05:07.084    16:48:59	-- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:07.084    16:48:59	-- setup/common.sh@31 -- # IFS=': '
00:05:07.084     16:48:59	-- setup/common.sh@16 -- # printf '%s\n' 'MemTotal:       12242972 kB' 'MemFree:         5258304 kB' 'MemAvailable:   10534352 kB' 'Buffers:           39856 kB' 'Cached:          5335116 kB' 'SwapCached:            0 kB' 'Active:          1375460 kB' 'Inactive:        4130924 kB' 'Active(anon):       1052 kB' 'Inactive(anon):   142020 kB' 'Active(file):    1374408 kB' 'Inactive(file):  3988904 kB' 'Unevictable:       29168 kB' 'Mlocked:           27632 kB' 'SwapTotal:             0 kB' 'SwapFree:              0 kB' 'Dirty:               696 kB' 'Writeback:             0 kB' 'AnonPages:        160472 kB' 'Mapped:            68324 kB' 'Shmem:              2596 kB' 'KReclaimable:     234040 kB' 'Slab:             301968 kB' 'SReclaimable:     234040 kB' 'SUnreclaim:        67928 kB' 'KernelStack:        4568 kB' 'PageTables:         3980 kB' 'NFS_Unstable:          0 kB' 'Bounce:                0 kB' 'WritebackTmp:          0 kB' 'CommitLimit:     5597196 kB' 'Committed_AS:     506052 kB' 'VmallocTotal:   34359738367 kB' 'VmallocUsed:       19660 kB' 'VmallocChunk:          0 kB' 'Percpu:             8256 kB' 'HardwareCorrupted:     0 kB' 'AnonHugePages:         0 kB' 'ShmemHugePages:        0 kB' 'ShmemPmdMapped:        0 kB' 'FileHugePages:         0 kB' 'FilePmdMapped:         0 kB' 'HugePages_Total:     512' 'HugePages_Free:      512' 'HugePages_Rsvd:        0' 'HugePages_Surp:        0' 'Hugepagesize:       2048 kB' 'Hugetlb:         1048576 kB' 'DirectMap4k:      141164 kB' 'DirectMap2M:     4052992 kB' 'DirectMap1G:    10485760 kB'
00:05:07.084    16:48:59	-- setup/common.sh@31 -- # read -r var val _
00:05:07.084    16:48:59	-- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:07.084    16:48:59	-- setup/common.sh@32 -- # continue
00:05:07.085  ... [identical compare/continue xtrace repeats for each /proc/meminfo field, MemFree through Committed_AS; none matches HugePages_Surp] ...
00:05:07.085    16:48:59	-- setup/common.sh@31 -- # IFS=': '
00:05:07.085    16:48:59	-- setup/common.sh@31 -- # read -r var val _
00:05:07.085    16:48:59	-- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:07.085    16:48:59	-- setup/common.sh@32 -- # continue
00:05:07.085    16:48:59	-- setup/common.sh@31 -- # IFS=': '
00:05:07.085    16:48:59	-- setup/common.sh@31 -- # read -r var val _
00:05:07.085    16:48:59	-- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:07.085    16:48:59	-- setup/common.sh@32 -- # continue
00:05:07.085    16:48:59	-- setup/common.sh@31 -- # IFS=': '
00:05:07.085    16:48:59	-- setup/common.sh@31 -- # read -r var val _
00:05:07.085    16:48:59	-- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:07.085    16:48:59	-- setup/common.sh@32 -- # continue
00:05:07.085    16:48:59	-- setup/common.sh@31 -- # IFS=': '
00:05:07.085    16:48:59	-- setup/common.sh@31 -- # read -r var val _
00:05:07.085    16:48:59	-- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:07.085    16:48:59	-- setup/common.sh@32 -- # continue
00:05:07.085    16:48:59	-- setup/common.sh@31 -- # IFS=': '
00:05:07.085    16:48:59	-- setup/common.sh@31 -- # read -r var val _
00:05:07.085    16:48:59	-- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:07.085    16:48:59	-- setup/common.sh@32 -- # continue
00:05:07.085    16:48:59	-- setup/common.sh@31 -- # IFS=': '
00:05:07.085    16:48:59	-- setup/common.sh@31 -- # read -r var val _
00:05:07.085    16:48:59	-- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:07.085    16:48:59	-- setup/common.sh@32 -- # continue
00:05:07.085    16:48:59	-- setup/common.sh@31 -- # IFS=': '
00:05:07.085    16:48:59	-- setup/common.sh@31 -- # read -r var val _
00:05:07.085    16:48:59	-- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:07.085    16:48:59	-- setup/common.sh@32 -- # continue
00:05:07.085    16:48:59	-- setup/common.sh@31 -- # IFS=': '
00:05:07.085    16:48:59	-- setup/common.sh@31 -- # read -r var val _
00:05:07.085    16:48:59	-- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:07.085    16:48:59	-- setup/common.sh@32 -- # continue
00:05:07.085    16:48:59	-- setup/common.sh@31 -- # IFS=': '
00:05:07.085    16:48:59	-- setup/common.sh@31 -- # read -r var val _
00:05:07.085    16:48:59	-- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:07.085    16:48:59	-- setup/common.sh@32 -- # continue
00:05:07.085    16:48:59	-- setup/common.sh@31 -- # IFS=': '
00:05:07.085    16:48:59	-- setup/common.sh@31 -- # read -r var val _
00:05:07.085    16:48:59	-- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:07.085    16:48:59	-- setup/common.sh@32 -- # continue
00:05:07.085    16:48:59	-- setup/common.sh@31 -- # IFS=': '
00:05:07.085    16:48:59	-- setup/common.sh@31 -- # read -r var val _
00:05:07.085    16:48:59	-- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:07.085    16:48:59	-- setup/common.sh@32 -- # continue
00:05:07.085    16:48:59	-- setup/common.sh@31 -- # IFS=': '
00:05:07.085    16:48:59	-- setup/common.sh@31 -- # read -r var val _
00:05:07.085    16:48:59	-- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:07.085    16:48:59	-- setup/common.sh@32 -- # continue
00:05:07.085    16:48:59	-- setup/common.sh@31 -- # IFS=': '
00:05:07.085    16:48:59	-- setup/common.sh@31 -- # read -r var val _
00:05:07.085    16:48:59	-- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:07.085    16:48:59	-- setup/common.sh@32 -- # continue
00:05:07.085    16:48:59	-- setup/common.sh@31 -- # IFS=': '
00:05:07.085    16:48:59	-- setup/common.sh@31 -- # read -r var val _
00:05:07.085    16:48:59	-- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:07.085    16:48:59	-- setup/common.sh@33 -- # echo 0
00:05:07.085    16:48:59	-- setup/common.sh@33 -- # return 0
00:05:07.085   16:48:59	-- setup/hugepages.sh@99 -- # surp=0
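Editor's note — for readers following the trace: every @31/@32/@33 line above is one small parser in setup/common.sh that scans a meminfo snapshot key by key. A minimal sketch of it, reconstructed from this xtrace; option handling and error paths in the real script may differ, so treat anything not shown in the log as a guess:

    # Reconstructed get_meminfo (setup/common.sh @16-@33 as traced above).
    shopt -s extglob                       # needed for the "Node N " strip below
    get_meminfo() {
        local get=$1 node=$2
        local var val
        local mem_f=/proc/meminfo mem
        # A per-node query reads the node's sysfs meminfo instead (@23-@24).
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"                    # @28
        mem=("${mem[@]#Node +([0-9]) }")             # @29: drop "Node 0 " prefix
        local line
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"   # @31
            [[ $var == "$get" ]] || continue         # @32: the skipped keys above
            echo "$val"                              # @33: the "echo 0" above
            return 0
        done
        return 1
    }

Called as get_meminfo HugePages_Surp it returns 0 on this host, which is what hugepages.sh@99 stored into surp just above.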
00:05:07.085    16:48:59	-- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:07.085    16:48:59	-- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:05:07.085    16:48:59	-- setup/common.sh@18 -- # local node=
00:05:07.085    16:48:59	-- setup/common.sh@19 -- # local var val
00:05:07.085    16:48:59	-- setup/common.sh@20 -- # local mem_f mem
00:05:07.085    16:48:59	-- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:07.085    16:48:59	-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:07.085    16:48:59	-- setup/common.sh@25 -- # [[ -n '' ]]
00:05:07.085    16:48:59	-- setup/common.sh@28 -- # mapfile -t mem
00:05:07.085    16:48:59	-- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:07.085    16:48:59	-- setup/common.sh@31 -- # IFS=': '
00:05:07.085    16:48:59	-- setup/common.sh@31 -- # read -r var val _
00:05:07.085     16:48:59	-- setup/common.sh@16 -- # printf '%s\n' 'MemTotal:       12242972 kB' 'MemFree:         5258304 kB' 'MemAvailable:   10534352 kB' 'Buffers:           39856 kB' 'Cached:          5335116 kB' 'SwapCached:            0 kB' 'Active:          1375452 kB' 'Inactive:        4130588 kB' 'Active(anon):       1044 kB' 'Inactive(anon):   141684 kB' 'Active(file):    1374408 kB' 'Inactive(file):  3988904 kB' 'Unevictable:       29168 kB' 'Mlocked:           27632 kB' 'SwapTotal:             0 kB' 'SwapFree:              0 kB' 'Dirty:               696 kB' 'Writeback:             0 kB' 'AnonPages:        160152 kB' 'Mapped:            68296 kB' 'Shmem:              2596 kB' 'KReclaimable:     234040 kB' 'Slab:             302072 kB' 'SReclaimable:     234040 kB' 'SUnreclaim:        68032 kB' 'KernelStack:        4444 kB' 'PageTables:         3824 kB' 'NFS_Unstable:          0 kB' 'Bounce:                0 kB' 'WritebackTmp:          0 kB' 'CommitLimit:     5597196 kB' 'Committed_AS:     506052 kB' 'VmallocTotal:   34359738367 kB' 'VmallocUsed:       19676 kB' 'VmallocChunk:          0 kB' 'Percpu:             8256 kB' 'HardwareCorrupted:     0 kB' 'AnonHugePages:         0 kB' 'ShmemHugePages:        0 kB' 'ShmemPmdMapped:        0 kB' 'FileHugePages:         0 kB' 'FilePmdMapped:         0 kB' 'HugePages_Total:     512' 'HugePages_Free:      512' 'HugePages_Rsvd:        0' 'HugePages_Surp:        0' 'Hugepagesize:       2048 kB' 'Hugetlb:         1048576 kB' 'DirectMap4k:      141164 kB' 'DirectMap2M:     4052992 kB' 'DirectMap1G:    10485760 kB'
00:05:07.085    16:48:59	-- setup/common.sh@31-32 -- # (scan condensed: same per-key pass as above, now for HugePages_Rsvd — every key from MemTotal through HugePages_Free is tested and skipped before the match below; pipeline timestamps advance from 00:05:07.085 to 00:05:07.347 during the scan)
00:05:07.347    16:48:59	-- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:07.347    16:48:59	-- setup/common.sh@33 -- # echo 0
00:05:07.347    16:48:59	-- setup/common.sh@33 -- # return 0
00:05:07.347   16:48:59	-- setup/hugepages.sh@100 -- # resv=0
00:05:07.347  nr_hugepages=512
00:05:07.347   16:48:59	-- setup/hugepages.sh@102 -- # echo nr_hugepages=512
00:05:07.347  resv_hugepages=0
00:05:07.347   16:48:59	-- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:05:07.347  surplus_hugepages=0
00:05:07.347   16:48:59	-- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:05:07.347  anon_hugepages=0
00:05:07.347   16:48:59	-- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:05:07.347   16:48:59	-- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv ))
00:05:07.347   16:48:59	-- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages ))
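Editor's note — taken together, the three lookups in this block implement one sanity identity: the kernel's global HugePages_Total must equal the requested nr_hugepages plus any surplus and reserved pages. Condensed into plain shell (variable names as in the trace; the 512 and 0 literals are this run's values):

    surp=$(get_meminfo HugePages_Surp)    # 0 here   (hugepages.sh@99)
    resv=$(get_meminfo HugePages_Rsvd)    # 0 here   (hugepages.sh@100)
    total=$(get_meminfo HugePages_Total)  # 512 here (hugepages.sh@110)
    (( total == nr_hugepages + surp + resv ))   # 512 == 512 + 0 + 0
    (( total == nr_hugepages ))                 # the stricter @109 check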
00:05:07.347    16:48:59	-- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:05:07.347    16:48:59	-- setup/common.sh@17 -- # local get=HugePages_Total
00:05:07.347    16:48:59	-- setup/common.sh@18 -- # local node=
00:05:07.347    16:48:59	-- setup/common.sh@19 -- # local var val
00:05:07.347    16:48:59	-- setup/common.sh@20 -- # local mem_f mem
00:05:07.347    16:48:59	-- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:07.347    16:48:59	-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:07.347    16:48:59	-- setup/common.sh@25 -- # [[ -n '' ]]
00:05:07.347    16:48:59	-- setup/common.sh@28 -- # mapfile -t mem
00:05:07.347    16:48:59	-- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:07.347    16:48:59	-- setup/common.sh@31 -- # IFS=': '
00:05:07.347     16:48:59	-- setup/common.sh@16 -- # printf '%s\n' 'MemTotal:       12242972 kB' 'MemFree:         5258304 kB' 'MemAvailable:   10534352 kB' 'Buffers:           39856 kB' 'Cached:          5335116 kB' 'SwapCached:            0 kB' 'Active:          1375452 kB' 'Inactive:        4130308 kB' 'Active(anon):       1044 kB' 'Inactive(anon):   141404 kB' 'Active(file):    1374408 kB' 'Inactive(file):  3988904 kB' 'Unevictable:       29168 kB' 'Mlocked:           27632 kB' 'SwapTotal:             0 kB' 'SwapFree:              0 kB' 'Dirty:               696 kB' 'Writeback:             0 kB' 'AnonPages:        160124 kB' 'Mapped:            68296 kB' 'Shmem:              2596 kB' 'KReclaimable:     234040 kB' 'Slab:             302072 kB' 'SReclaimable:     234040 kB' 'SUnreclaim:        68032 kB' 'KernelStack:        4496 kB' 'PageTables:         3788 kB' 'NFS_Unstable:          0 kB' 'Bounce:                0 kB' 'WritebackTmp:          0 kB' 'CommitLimit:     5597196 kB' 'Committed_AS:     506052 kB' 'VmallocTotal:   34359738367 kB' 'VmallocUsed:       19676 kB' 'VmallocChunk:          0 kB' 'Percpu:             8256 kB' 'HardwareCorrupted:     0 kB' 'AnonHugePages:         0 kB' 'ShmemHugePages:        0 kB' 'ShmemPmdMapped:        0 kB' 'FileHugePages:         0 kB' 'FilePmdMapped:         0 kB' 'HugePages_Total:     512' 'HugePages_Free:      512' 'HugePages_Rsvd:        0' 'HugePages_Surp:        0' 'Hugepagesize:       2048 kB' 'Hugetlb:         1048576 kB' 'DirectMap4k:      141164 kB' 'DirectMap2M:     4052992 kB' 'DirectMap1G:    10485760 kB'
00:05:07.347    16:48:59	-- setup/common.sh@31 -- # read -r var val _
00:05:07.347    16:48:59	-- setup/common.sh@31-32 -- # (scan condensed: per-key pass for HugePages_Total — every key from MemTotal through FilePmdMapped is tested and skipped before the match below)
00:05:07.349    16:48:59	-- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:07.349    16:48:59	-- setup/common.sh@33 -- # echo 512
00:05:07.349    16:48:59	-- setup/common.sh@33 -- # return 0
00:05:07.349   16:48:59	-- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv ))
00:05:07.349   16:48:59	-- setup/hugepages.sh@112 -- # get_nodes
00:05:07.349   16:48:59	-- setup/hugepages.sh@27 -- # local node
00:05:07.349   16:48:59	-- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:07.349   16:48:59	-- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:05:07.349   16:48:59	-- setup/hugepages.sh@32 -- # no_nodes=1
00:05:07.349   16:48:59	-- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:05:07.349   16:48:59	-- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:07.349   16:48:59	-- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
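Editor's note — get_nodes (hugepages.sh@27-@33) discovers the machine's NUMA topology from sysfs, and the @115-@117 loop then folds each node's reserved and surplus pages into the expected per-node count. A sketch; where the per-node 512 stored at @30 comes from is not visible in this excerpt, so the sysfs nr_hugepages read below is an assumption:

    shopt -s extglob nullglob
    declare -a nodes_sys nodes_test
    for node in /sys/devices/system/node/node+([0-9]); do    # @29
        # @30 stores 512 per node; one plausible source:
        nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
    done
    no_nodes=${#nodes_sys[@]}                      # @32: 1 on this VM
    for node in "${!nodes_test[@]}"; do            # @115
        (( nodes_test[node] += resv ))             # @116: reserved pages count too
        surp=$(get_meminfo HugePages_Surp "$node")
        (( nodes_test[node] += surp ))             # @117: surplus pages count too
    done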
00:05:07.349    16:48:59	-- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:05:07.349    16:48:59	-- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:07.349    16:48:59	-- setup/common.sh@18 -- # local node=0
00:05:07.349    16:48:59	-- setup/common.sh@19 -- # local var val
00:05:07.349    16:48:59	-- setup/common.sh@20 -- # local mem_f mem
00:05:07.349    16:48:59	-- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:07.349    16:48:59	-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:05:07.349    16:48:59	-- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:05:07.349    16:48:59	-- setup/common.sh@28 -- # mapfile -t mem
00:05:07.349    16:48:59	-- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:07.349    16:48:59	-- setup/common.sh@31 -- # IFS=': '
00:05:07.349     16:48:59	-- setup/common.sh@16 -- # printf '%s\n' 'MemTotal:       12242972 kB' 'MemFree:         5258304 kB' 'MemUsed:         6984668 kB' 'SwapCached:            0 kB' 'Active:          1375452 kB' 'Inactive:        4130308 kB' 'Active(anon):       1044 kB' 'Inactive(anon):   141404 kB' 'Active(file):    1374408 kB' 'Inactive(file):  3988904 kB' 'Unevictable:       29168 kB' 'Mlocked:           27632 kB' 'Dirty:               696 kB' 'Writeback:             0 kB' 'FilePages:       5374972 kB' 'Mapped:            68296 kB' 'AnonPages:        160124 kB' 'Shmem:              2596 kB' 'KernelStack:        4496 kB' 'PageTables:         3788 kB' 'NFS_Unstable:          0 kB' 'Bounce:                0 kB' 'WritebackTmp:          0 kB' 'KReclaimable:     234040 kB' 'Slab:             302072 kB' 'SReclaimable:     234040 kB' 'SUnreclaim:        68032 kB' 'AnonHugePages:         0 kB' 'ShmemHugePages:        0 kB' 'ShmemPmdMapped:        0 kB' 'FileHugePages:        0 kB' 'FilePmdMapped:        0 kB' 'HugePages_Total:   512' 'HugePages_Free:    512' 'HugePages_Surp:      0'
00:05:07.349    16:48:59	-- setup/common.sh@31 -- # read -r var val _
00:05:07.349    16:48:59	-- setup/common.sh@31-32 -- # (scan condensed: per-key pass over node0's sysfs meminfo for HugePages_Surp — MemTotal, MemFree, MemUsed, and the remaining node-level keys through HugePages_Free are tested and skipped; test-side timestamps roll over from 16:48:59 to 16:49:00 partway through)
00:05:07.350    16:49:00	-- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:07.350    16:49:00	-- setup/common.sh@33 -- # echo 0
00:05:07.350    16:49:00	-- setup/common.sh@33 -- # return 0
00:05:07.350   16:49:00	-- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:07.350   16:49:00	-- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:07.350   16:49:00	-- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:07.350   16:49:00	-- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:07.350  node0=512 expecting 512
00:05:07.350   16:49:00	-- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:05:07.350   16:49:00	-- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
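Editor's note — the sorted_t/sorted_s assignments at @127 are a compact shell idiom: each count is stored as an array key, so two per-node distributions agree exactly when the two arrays end up with the same key set, which is what the final "512 == 512" comparison at @130 checks. A sketch:

    declare -a sorted_t sorted_s
    for node in "${!nodes_test[@]}"; do          # @126
        sorted_t[nodes_test[node]]=1             # expected count as a key
        sorted_s[nodes_sys[node]]=1              # observed count as a key
        echo "node$node=${nodes_sys[node]} expecting ${nodes_test[node]}"
    done
    [[ ${!sorted_t[*]} == "${!sorted_s[*]}" ]]   # @130: "512 == 512" here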
00:05:07.350  
00:05:07.350  real	0m0.817s
00:05:07.350  user	0m0.315s
00:05:07.350  sys	0m0.550s
00:05:07.350   16:49:00	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:05:07.350   16:49:00	-- common/autotest_common.sh@10 -- # set +x
00:05:07.350  ************************************
00:05:07.350  END TEST per_node_1G_alloc
00:05:07.350  ************************************
00:05:07.350   16:49:00	-- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc
00:05:07.350   16:49:00	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:05:07.350   16:49:00	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:05:07.350   16:49:00	-- common/autotest_common.sh@10 -- # set +x
00:05:07.350  ************************************
00:05:07.350  START TEST even_2G_alloc
00:05:07.350  ************************************
00:05:07.350   16:49:00	-- common/autotest_common.sh@1114 -- # even_2G_alloc
00:05:07.350   16:49:00	-- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152
00:05:07.350   16:49:00	-- setup/hugepages.sh@49 -- # local size=2097152
00:05:07.350   16:49:00	-- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:05:07.350   16:49:00	-- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:05:07.350   16:49:00	-- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:05:07.350   16:49:00	-- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:05:07.350   16:49:00	-- setup/hugepages.sh@62 -- # user_nodes=()
00:05:07.350   16:49:00	-- setup/hugepages.sh@62 -- # local user_nodes
00:05:07.350   16:49:00	-- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:05:07.350   16:49:00	-- setup/hugepages.sh@65 -- # local _no_nodes=1
00:05:07.350   16:49:00	-- setup/hugepages.sh@67 -- # nodes_test=()
00:05:07.350   16:49:00	-- setup/hugepages.sh@67 -- # local -g nodes_test
00:05:07.350   16:49:00	-- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:05:07.350   16:49:00	-- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:05:07.350   16:49:00	-- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:05:07.350   16:49:00	-- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024
00:05:07.350   16:49:00	-- setup/hugepages.sh@83 -- # : 0
00:05:07.350   16:49:00	-- setup/hugepages.sh@84 -- # : 0
00:05:07.350   16:49:00	-- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
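Editor's note — get_test_nr_hugepages (hugepages.sh@49-@84) turns a kB budget into a page count and spreads it across nodes: 2097152 kB at the 2048 kB page size seen in this run's meminfo gives the nr_hugepages=1024 at @57. A sketch, assuming the even split the @81-@84 loop performs when the user node list is empty:

    get_test_nr_hugepages() {
        local size=$1                        # 2097152 kB for even_2G_alloc
        local default_hugepages=2048         # kB; Hugepagesize in this run
        (( size >= default_hugepages )) || return 1   # @55 guard
        nr_hugepages=$(( size / default_hugepages ))  # 2097152/2048 = 1024
        local _no_nodes=1 node               # one NUMA node on this VM
        for (( node = 0; node < _no_nodes; node++ )); do
            nodes_test[node]=$(( nr_hugepages / _no_nodes ))  # all 1024 to node0
        done
    }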
00:05:07.350   16:49:00	-- setup/hugepages.sh@153 -- # NRHUGE=1024
00:05:07.350   16:49:00	-- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes
00:05:07.350   16:49:00	-- setup/hugepages.sh@153 -- # setup output
00:05:07.350   16:49:00	-- setup/common.sh@9 -- # [[ output == output ]]
00:05:07.350   16:49:00	-- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:05:07.917  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev
00:05:07.917  0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:08.177   16:49:00	-- setup/hugepages.sh@154 -- # verify_nr_hugepages
00:05:08.177   16:49:00	-- setup/hugepages.sh@89 -- # local node
00:05:08.177   16:49:00	-- setup/hugepages.sh@90 -- # local sorted_t
00:05:08.177   16:49:00	-- setup/hugepages.sh@91 -- # local sorted_s
00:05:08.177   16:49:00	-- setup/hugepages.sh@92 -- # local surp
00:05:08.177   16:49:00	-- setup/hugepages.sh@93 -- # local resv
00:05:08.177   16:49:00	-- setup/hugepages.sh@94 -- # local anon
00:05:08.177   16:49:00	-- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
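Editor's note — the @96 test gates the anonymous-hugepage baseline on the kernel's transparent-hugepage mode: "always [madvise] never" is the kernel's selector string, and the AnonHugePages lookup below only matters when "[never]" is not the selected mode. A sketch (the sysfs path is an assumption; the log shows only the already-expanded string):

    thp=$(< /sys/kernel/mm/transparent_hugepage/enabled)  # "always [madvise] never"
    if [[ $thp != *"[never]"* ]]; then                    # @96
        anon=$(get_meminfo AnonHugePages)                 # the @97 call below
    fi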
00:05:08.177    16:49:00	-- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:08.177    16:49:01	-- setup/common.sh@17 -- # local get=AnonHugePages
00:05:08.177    16:49:01	-- setup/common.sh@18 -- # local node=
00:05:08.177    16:49:01	-- setup/common.sh@19 -- # local var val
00:05:08.177    16:49:01	-- setup/common.sh@20 -- # local mem_f mem
00:05:08.177    16:49:01	-- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:08.177    16:49:01	-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:08.177    16:49:01	-- setup/common.sh@25 -- # [[ -n '' ]]
00:05:08.177    16:49:01	-- setup/common.sh@28 -- # mapfile -t mem
00:05:08.177    16:49:01	-- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:08.177    16:49:01	-- setup/common.sh@31 -- # IFS=': '
00:05:08.177    16:49:01	-- setup/common.sh@31 -- # read -r var val _
00:05:08.177     16:49:01	-- setup/common.sh@16 -- # printf '%s\n' 'MemTotal:       12242972 kB' 'MemFree:         4209152 kB' 'MemAvailable:    9485200 kB' 'Buffers:           39856 kB' 'Cached:          5335120 kB' 'SwapCached:            0 kB' 'Active:          1375480 kB' 'Inactive:        4130624 kB' 'Active(anon):       1060 kB' 'Inactive(anon):   141732 kB' 'Active(file):    1374420 kB' 'Inactive(file):  3988892 kB' 'Unevictable:       29168 kB' 'Mlocked:           27632 kB' 'SwapTotal:             0 kB' 'SwapFree:              0 kB' 'Dirty:               696 kB' 'Writeback:             0 kB' 'AnonPages:        160308 kB' 'Mapped:            68260 kB' 'Shmem:              2596 kB' 'KReclaimable:     234040 kB' 'Slab:             302420 kB' 'SReclaimable:     234040 kB' 'SUnreclaim:        68380 kB' 'KernelStack:        4500 kB' 'PageTables:         3624 kB' 'NFS_Unstable:          0 kB' 'Bounce:                0 kB' 'WritebackTmp:          0 kB' 'CommitLimit:     5072908 kB' 'Committed_AS:     506184 kB' 'VmallocTotal:   34359738367 kB' 'VmallocUsed:       19644 kB' 'VmallocChunk:          0 kB' 'Percpu:             8256 kB' 'HardwareCorrupted:     0 kB' 'AnonHugePages:         0 kB' 'ShmemHugePages:        0 kB' 'ShmemPmdMapped:        0 kB' 'FileHugePages:         0 kB' 'FilePmdMapped:         0 kB' 'HugePages_Total:    1024' 'HugePages_Free:     1024' 'HugePages_Rsvd:        0' 'HugePages_Surp:        0' 'Hugepagesize:       2048 kB' 'Hugetlb:         2097152 kB' 'DirectMap4k:      141164 kB' 'DirectMap2M:     4052992 kB' 'DirectMap1G:    10485760 kB'
[... xtrace elided: the read loop tests and skips every /proc/meminfo key from MemTotal through HardwareCorrupted before matching AnonHugePages ...]
00:05:08.178    16:49:01	-- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:08.178    16:49:01	-- setup/common.sh@33 -- # echo 0
00:05:08.178    16:49:01	-- setup/common.sh@33 -- # return 0
00:05:08.178   16:49:01	-- setup/hugepages.sh@97 -- # anon=0
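
The scan that just returned anon=0 is the get_meminfo pattern: read /proc/meminfo (or a node's meminfo), strip any "Node <n> " prefix, and emit the value of the first matching key. A condensed sketch reconstructed from the xtrace follows; the real setup/common.sh may differ in details.

  #!/usr/bin/env bash
  shopt -s extglob   # needed for the +([0-9]) prefix pattern below
  get_meminfo() {
      local get=$1 node=$2
      local mem_f=/proc/meminfo
      local var val _ line
      # Per-node meminfo lines carry a "Node <n> " prefix that gets stripped.
      [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
          mem_f=/sys/devices/system/node/node$node/meminfo
      local -a mem
      mapfile -t mem < "$mem_f"
      mem=("${mem[@]#Node +([0-9]) }")
      for line in "${mem[@]}"; do
          IFS=': ' read -r var val _ <<< "$line"
          [[ $var == "$get" ]] || continue
          echo "${val:-0}"
          return 0
      done
      echo 0
  }

  get_meminfo AnonHugePages    # -> 0 on this host
  get_meminfo HugePages_Total  # -> 1024 once setup.sh has reserved pages
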
00:05:08.178    16:49:01	-- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:08.178    16:49:01	-- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:08.178    16:49:01	-- setup/common.sh@18 -- # local node=
00:05:08.178    16:49:01	-- setup/common.sh@19 -- # local var val
00:05:08.178    16:49:01	-- setup/common.sh@20 -- # local mem_f mem
00:05:08.178    16:49:01	-- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:08.178    16:49:01	-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:08.178    16:49:01	-- setup/common.sh@25 -- # [[ -n '' ]]
00:05:08.178    16:49:01	-- setup/common.sh@28 -- # mapfile -t mem
00:05:08.178    16:49:01	-- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:08.178    16:49:01	-- setup/common.sh@31 -- # IFS=': '
00:05:08.178     16:49:01	-- setup/common.sh@16 -- # printf '%s\n' 'MemTotal:       12242972 kB' 'MemFree:         4209152 kB' 'MemAvailable:    9485200 kB' 'Buffers:           39856 kB' 'Cached:          5335120 kB' 'SwapCached:            0 kB' 'Active:          1375480 kB' 'Inactive:        4130884 kB' 'Active(anon):       1060 kB' 'Inactive(anon):   141992 kB' 'Active(file):    1374420 kB' 'Inactive(file):  3988892 kB' 'Unevictable:       29168 kB' 'Mlocked:           27632 kB' 'SwapTotal:             0 kB' 'SwapFree:              0 kB' 'Dirty:               696 kB' 'Writeback:             0 kB' 'AnonPages:        160308 kB' 'Mapped:            68260 kB' 'Shmem:              2596 kB' 'KReclaimable:     234040 kB' 'Slab:             302420 kB' 'SReclaimable:     234040 kB' 'SUnreclaim:        68380 kB' 'KernelStack:        4500 kB' 'PageTables:         3624 kB' 'NFS_Unstable:          0 kB' 'Bounce:                0 kB' 'WritebackTmp:          0 kB' 'CommitLimit:     5072908 kB' 'Committed_AS:     506184 kB' 'VmallocTotal:   34359738367 kB' 'VmallocUsed:       19644 kB' 'VmallocChunk:          0 kB' 'Percpu:             8256 kB' 'HardwareCorrupted:     0 kB' 'AnonHugePages:         0 kB' 'ShmemHugePages:        0 kB' 'ShmemPmdMapped:        0 kB' 'FileHugePages:         0 kB' 'FilePmdMapped:         0 kB' 'HugePages_Total:    1024' 'HugePages_Free:     1024' 'HugePages_Rsvd:        0' 'HugePages_Surp:        0' 'Hugepagesize:       2048 kB' 'Hugetlb:         2097152 kB' 'DirectMap4k:      141164 kB' 'DirectMap2M:     4052992 kB' 'DirectMap1G:    10485760 kB'
00:05:08.178    16:49:01	-- setup/common.sh@31 -- # read -r var val _
[... xtrace elided: the scan skips MemTotal through HugePages_Rsvd before matching HugePages_Surp ...]
00:05:08.441    16:49:01	-- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:08.441    16:49:01	-- setup/common.sh@33 -- # echo 0
00:05:08.441    16:49:01	-- setup/common.sh@33 -- # return 0
00:05:08.441   16:49:01	-- setup/hugepages.sh@99 -- # surp=0
00:05:08.441    16:49:01	-- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:08.441    16:49:01	-- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:05:08.441    16:49:01	-- setup/common.sh@18 -- # local node=
00:05:08.441    16:49:01	-- setup/common.sh@19 -- # local var val
00:05:08.441    16:49:01	-- setup/common.sh@20 -- # local mem_f mem
00:05:08.441    16:49:01	-- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:08.441    16:49:01	-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:08.441    16:49:01	-- setup/common.sh@25 -- # [[ -n '' ]]
00:05:08.441    16:49:01	-- setup/common.sh@28 -- # mapfile -t mem
00:05:08.441    16:49:01	-- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:08.441    16:49:01	-- setup/common.sh@31 -- # IFS=': '
00:05:08.441    16:49:01	-- setup/common.sh@31 -- # read -r var val _
00:05:08.441     16:49:01	-- setup/common.sh@16 -- # printf '%s\n' 'MemTotal:       12242972 kB' 'MemFree:         4209152 kB' 'MemAvailable:    9485200 kB' 'Buffers:           39856 kB' 'Cached:          5335120 kB' 'SwapCached:            0 kB' 'Active:          1375480 kB' 'Inactive:        4130588 kB' 'Active(anon):       1060 kB' 'Inactive(anon):   141696 kB' 'Active(file):    1374420 kB' 'Inactive(file):  3988892 kB' 'Unevictable:       29168 kB' 'Mlocked:           27632 kB' 'SwapTotal:             0 kB' 'SwapFree:              0 kB' 'Dirty:               696 kB' 'Writeback:             0 kB' 'AnonPages:        160056 kB' 'Mapped:            68220 kB' 'Shmem:              2596 kB' 'KReclaimable:     234040 kB' 'Slab:             302308 kB' 'SReclaimable:     234040 kB' 'SUnreclaim:        68268 kB' 'KernelStack:        4436 kB' 'PageTables:         3480 kB' 'NFS_Unstable:          0 kB' 'Bounce:                0 kB' 'WritebackTmp:          0 kB' 'CommitLimit:     5072908 kB' 'Committed_AS:     506184 kB' 'VmallocTotal:   34359738367 kB' 'VmallocUsed:       19660 kB' 'VmallocChunk:          0 kB' 'Percpu:             8256 kB' 'HardwareCorrupted:     0 kB' 'AnonHugePages:         0 kB' 'ShmemHugePages:        0 kB' 'ShmemPmdMapped:        0 kB' 'FileHugePages:         0 kB' 'FilePmdMapped:         0 kB' 'HugePages_Total:    1024' 'HugePages_Free:     1024' 'HugePages_Rsvd:        0' 'HugePages_Surp:        0' 'Hugepagesize:       2048 kB' 'Hugetlb:         2097152 kB' 'DirectMap4k:      141164 kB' 'DirectMap2M:     4052992 kB' 'DirectMap1G:    10485760 kB'
[... xtrace elided: the scan skips MemTotal through HugePages_Free before matching HugePages_Rsvd ...]
00:05:08.442    16:49:01	-- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:08.442    16:49:01	-- setup/common.sh@33 -- # echo 0
00:05:08.442    16:49:01	-- setup/common.sh@33 -- # return 0
00:05:08.442   16:49:01	-- setup/hugepages.sh@100 -- # resv=0
00:05:08.442  nr_hugepages=1024
00:05:08.442   16:49:01	-- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:05:08.442  resv_hugepages=0
00:05:08.442   16:49:01	-- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:05:08.442  surplus_hugepages=0
00:05:08.442   16:49:01	-- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:05:08.442  anon_hugepages=0
00:05:08.442   16:49:01	-- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:05:08.442   16:49:01	-- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:08.442   16:49:01	-- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
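
The two arithmetic checks above enforce the hugepage accounting invariant: the kernel's HugePages_Total must equal the requested count plus surplus plus reserved pages. A sketch of that check, with names taken from the xtrace and get_meminfo as sketched earlier:

  nr_hugepages=1024 surp=0 resv=0
  total=$(get_meminfo HugePages_Total)                 # 1024 in this run
  if ! (( total == nr_hugepages + surp + resv )); then
      echo "hugepage accounting mismatch: $total != $(( nr_hugepages + surp + resv ))" >&2
      exit 1
  fi
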
00:05:08.442    16:49:01	-- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:05:08.442    16:49:01	-- setup/common.sh@17 -- # local get=HugePages_Total
00:05:08.442    16:49:01	-- setup/common.sh@18 -- # local node=
00:05:08.442    16:49:01	-- setup/common.sh@19 -- # local var val
00:05:08.442    16:49:01	-- setup/common.sh@20 -- # local mem_f mem
00:05:08.442    16:49:01	-- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:08.442    16:49:01	-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:08.442    16:49:01	-- setup/common.sh@25 -- # [[ -n '' ]]
00:05:08.442    16:49:01	-- setup/common.sh@28 -- # mapfile -t mem
00:05:08.442    16:49:01	-- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:08.442    16:49:01	-- setup/common.sh@31 -- # IFS=': '
00:05:08.442     16:49:01	-- setup/common.sh@16 -- # printf '%s\n' 'MemTotal:       12242972 kB' 'MemFree:         4209152 kB' 'MemAvailable:    9485200 kB' 'Buffers:           39856 kB' 'Cached:          5335120 kB' 'SwapCached:            0 kB' 'Active:          1375480 kB' 'Inactive:        4130640 kB' 'Active(anon):       1060 kB' 'Inactive(anon):   141748 kB' 'Active(file):    1374420 kB' 'Inactive(file):  3988892 kB' 'Unevictable:       29168 kB' 'Mlocked:           27632 kB' 'SwapTotal:             0 kB' 'SwapFree:              0 kB' 'Dirty:               696 kB' 'Writeback:             0 kB' 'AnonPages:        160316 kB' 'Mapped:            68220 kB' 'Shmem:              2596 kB' 'KReclaimable:     234040 kB' 'Slab:             302308 kB' 'SReclaimable:     234040 kB' 'SUnreclaim:        68268 kB' 'KernelStack:        4456 kB' 'PageTables:         3608 kB' 'NFS_Unstable:          0 kB' 'Bounce:                0 kB' 'WritebackTmp:          0 kB' 'CommitLimit:     5072908 kB' 'Committed_AS:     506184 kB' 'VmallocTotal:   34359738367 kB' 'VmallocUsed:       19660 kB' 'VmallocChunk:          0 kB' 'Percpu:             8256 kB' 'HardwareCorrupted:     0 kB' 'AnonHugePages:         0 kB' 'ShmemHugePages:        0 kB' 'ShmemPmdMapped:        0 kB' 'FileHugePages:         0 kB' 'FilePmdMapped:         0 kB' 'HugePages_Total:    1024' 'HugePages_Free:     1024' 'HugePages_Rsvd:        0' 'HugePages_Surp:        0' 'Hugepagesize:       2048 kB' 'Hugetlb:         2097152 kB' 'DirectMap4k:      141164 kB' 'DirectMap2M:     4052992 kB' 'DirectMap1G:    10485760 kB'
00:05:08.442    16:49:01	-- setup/common.sh@31 -- # read -r var val _
[... xtrace elided: the scan skips MemTotal through FilePmdMapped before matching HugePages_Total ...]
00:05:08.443    16:49:01	-- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:08.443    16:49:01	-- setup/common.sh@33 -- # echo 1024
00:05:08.443    16:49:01	-- setup/common.sh@33 -- # return 0
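The loop traced above is setup/common.sh's get_meminfo scanning /proc/meminfo one "var: val" pair at a time with IFS=': '; every field other than the requested one takes the continue branch, and the first match is echoed before the function returns. A self-contained sketch of the same lookup, using an illustrative function name rather than the real helper:

    get_meminfo_field() {                       # illustrative name, not the SPDK helper
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue    # one test plus "continue" per field, as traced
            echo "$val"
            return 0
        done < /proc/meminfo
        return 1                                # field not present
    }
    get_meminfo_field HugePages_Total           # prints 1024 on this runner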
00:05:08.443   16:49:01	-- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:08.443   16:49:01	-- setup/hugepages.sh@112 -- # get_nodes
00:05:08.443   16:49:01	-- setup/hugepages.sh@27 -- # local node
00:05:08.443   16:49:01	-- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:08.443   16:49:01	-- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:05:08.443   16:49:01	-- setup/hugepages.sh@32 -- # no_nodes=1
00:05:08.443   16:49:01	-- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
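get_nodes above leans on the extglob pattern node+([0-9]) to enumerate NUMA nodes under sysfs, recovering each index by stripping the path prefix with ${node##*node}. A small sketch of that enumeration; reading each node's 2 MB hugepage count from sysfs is an assumption about where the per-node 1024 comes from, not something the trace itself shows:

    shopt -s extglob                 # required for the +([0-9]) glob
    nodes_sys=()
    for node in /sys/devices/system/node/node+([0-9]); do
        # ${node##*node} drops everything through the last "node", leaving the index
        nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
    done
    echo "no_nodes=${#nodes_sys[@]}"   # 1 on this single-node VM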
00:05:08.443   16:49:01	-- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:08.443   16:49:01	-- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:05:08.443    16:49:01	-- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:05:08.443    16:49:01	-- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:08.443    16:49:01	-- setup/common.sh@18 -- # local node=0
00:05:08.443    16:49:01	-- setup/common.sh@19 -- # local var val
00:05:08.443    16:49:01	-- setup/common.sh@20 -- # local mem_f mem
00:05:08.443    16:49:01	-- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:08.443    16:49:01	-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:05:08.443    16:49:01	-- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:05:08.443    16:49:01	-- setup/common.sh@28 -- # mapfile -t mem
00:05:08.443    16:49:01	-- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:08.443    16:49:01	-- setup/common.sh@31 -- # IFS=': '
00:05:08.443    16:49:01	-- setup/common.sh@31 -- # read -r var val _
00:05:08.443     16:49:01	-- setup/common.sh@16 -- # printf '%s\n' 'MemTotal:       12242972 kB' 'MemFree:         4209152 kB' 'MemUsed:         8033820 kB' 'SwapCached:            0 kB' 'Active:          1375480 kB' 'Inactive:        4130380 kB' 'Active(anon):       1060 kB' 'Inactive(anon):   141488 kB' 'Active(file):    1374420 kB' 'Inactive(file):  3988892 kB' 'Unevictable:       29168 kB' 'Mlocked:           27632 kB' 'Dirty:               696 kB' 'Writeback:             0 kB' 'FilePages:       5374976 kB' 'Mapped:            68220 kB' 'AnonPages:        160056 kB' 'Shmem:              2596 kB' 'KernelStack:        4456 kB' 'PageTables:         3608 kB' 'NFS_Unstable:          0 kB' 'Bounce:                0 kB' 'WritebackTmp:          0 kB' 'KReclaimable:     234040 kB' 'Slab:             302308 kB' 'SReclaimable:     234040 kB' 'SUnreclaim:        68268 kB' 'AnonHugePages:         0 kB' 'ShmemHugePages:        0 kB' 'ShmemPmdMapped:        0 kB' 'FileHugePages:        0 kB' 'FilePmdMapped:        0 kB' 'HugePages_Total:  1024' 'HugePages_Free:   1024' 'HugePages_Surp:      0'
00:05:08.443    16:49:01	-- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:08.443    16:49:01	-- setup/common.sh@32 -- # continue
00:05:08.443    16:49:01	-- setup/common.sh@31 -- # IFS=': '
00:05:08.443    16:49:01	-- setup/common.sh@31 -- # read -r var val _
00:05:08.443    16:49:01	-- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:08.443    16:49:01	-- setup/common.sh@32 -- # continue
00:05:08.443    16:49:01	-- setup/common.sh@31 -- # IFS=': '
00:05:08.443    16:49:01	-- setup/common.sh@31 -- # read -r var val _
00:05:08.443    16:49:01	-- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:08.443    16:49:01	-- setup/common.sh@32 -- # continue
00:05:08.443    16:49:01	-- setup/common.sh@31 -- # IFS=': '
00:05:08.444    16:49:01	-- setup/common.sh@31 -- # read -r var val _
00:05:08.444    16:49:01	-- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:08.444    16:49:01	-- setup/common.sh@32 -- # continue
00:05:08.444    16:49:01	-- setup/common.sh@31 -- # IFS=': '
00:05:08.444    16:49:01	-- setup/common.sh@31 -- # read -r var val _
00:05:08.444    16:49:01	-- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:08.444    16:49:01	-- setup/common.sh@32 -- # continue
00:05:08.444    16:49:01	-- setup/common.sh@31 -- # IFS=': '
00:05:08.444    16:49:01	-- setup/common.sh@31 -- # read -r var val _
00:05:08.444    16:49:01	-- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:08.444    16:49:01	-- setup/common.sh@32 -- # continue
00:05:08.444    16:49:01	-- setup/common.sh@31 -- # IFS=': '
00:05:08.444    16:49:01	-- setup/common.sh@31 -- # read -r var val _
00:05:08.444    16:49:01	-- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:08.444    16:49:01	-- setup/common.sh@32 -- # continue
00:05:08.444    16:49:01	-- setup/common.sh@31 -- # IFS=': '
00:05:08.444    16:49:01	-- setup/common.sh@31 -- # read -r var val _
00:05:08.444    16:49:01	-- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:08.444    16:49:01	-- setup/common.sh@32 -- # continue
00:05:08.444    16:49:01	-- setup/common.sh@31 -- # IFS=': '
00:05:08.444    16:49:01	-- setup/common.sh@31 -- # read -r var val _
00:05:08.444    16:49:01	-- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:08.444    16:49:01	-- setup/common.sh@32 -- # continue
00:05:08.444    16:49:01	-- setup/common.sh@31 -- # IFS=': '
00:05:08.444    16:49:01	-- setup/common.sh@31 -- # read -r var val _
00:05:08.444    16:49:01	-- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:08.444    16:49:01	-- setup/common.sh@32 -- # continue
00:05:08.444    16:49:01	-- setup/common.sh@31 -- # IFS=': '
00:05:08.444    16:49:01	-- setup/common.sh@31 -- # read -r var val _
00:05:08.444    16:49:01	-- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:08.444    16:49:01	-- setup/common.sh@32 -- # continue
00:05:08.444    16:49:01	-- setup/common.sh@31 -- # IFS=': '
00:05:08.444    16:49:01	-- setup/common.sh@31 -- # read -r var val _
00:05:08.444    16:49:01	-- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:08.444    16:49:01	-- setup/common.sh@32 -- # continue
00:05:08.444    16:49:01	-- setup/common.sh@31 -- # IFS=': '
00:05:08.444    16:49:01	-- setup/common.sh@31 -- # read -r var val _
00:05:08.444    16:49:01	-- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:08.444    16:49:01	-- setup/common.sh@32 -- # continue
00:05:08.444    16:49:01	-- setup/common.sh@31 -- # IFS=': '
00:05:08.444    16:49:01	-- setup/common.sh@31 -- # read -r var val _
00:05:08.444    16:49:01	-- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:08.444    16:49:01	-- setup/common.sh@32 -- # continue
00:05:08.444    16:49:01	-- setup/common.sh@31 -- # IFS=': '
00:05:08.444    16:49:01	-- setup/common.sh@31 -- # read -r var val _
00:05:08.444    16:49:01	-- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:08.444    16:49:01	-- setup/common.sh@32 -- # continue
00:05:08.444    16:49:01	-- setup/common.sh@31 -- # IFS=': '
00:05:08.444    16:49:01	-- setup/common.sh@31 -- # read -r var val _
00:05:08.444    16:49:01	-- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:08.444    16:49:01	-- setup/common.sh@32 -- # continue
00:05:08.444    16:49:01	-- setup/common.sh@31 -- # IFS=': '
00:05:08.444    16:49:01	-- setup/common.sh@31 -- # read -r var val _
00:05:08.444    16:49:01	-- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:08.444    16:49:01	-- setup/common.sh@32 -- # continue
00:05:08.444    16:49:01	-- setup/common.sh@31 -- # IFS=': '
00:05:08.444    16:49:01	-- setup/common.sh@31 -- # read -r var val _
00:05:08.444    16:49:01	-- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:08.444    16:49:01	-- setup/common.sh@32 -- # continue
00:05:08.444    16:49:01	-- setup/common.sh@31 -- # IFS=': '
00:05:08.444    16:49:01	-- setup/common.sh@31 -- # read -r var val _
00:05:08.444    16:49:01	-- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:08.444    16:49:01	-- setup/common.sh@32 -- # continue
00:05:08.444    16:49:01	-- setup/common.sh@31 -- # IFS=': '
00:05:08.444    16:49:01	-- setup/common.sh@31 -- # read -r var val _
00:05:08.444    16:49:01	-- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:08.444    16:49:01	-- setup/common.sh@32 -- # continue
00:05:08.444    16:49:01	-- setup/common.sh@31 -- # IFS=': '
00:05:08.444    16:49:01	-- setup/common.sh@31 -- # read -r var val _
00:05:08.444    16:49:01	-- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:08.444    16:49:01	-- setup/common.sh@32 -- # continue
00:05:08.444    16:49:01	-- setup/common.sh@31 -- # IFS=': '
00:05:08.444    16:49:01	-- setup/common.sh@31 -- # read -r var val _
00:05:08.444    16:49:01	-- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:08.444    16:49:01	-- setup/common.sh@32 -- # continue
00:05:08.444    16:49:01	-- setup/common.sh@31 -- # IFS=': '
00:05:08.444    16:49:01	-- setup/common.sh@31 -- # read -r var val _
00:05:08.444    16:49:01	-- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:08.444    16:49:01	-- setup/common.sh@32 -- # continue
00:05:08.444    16:49:01	-- setup/common.sh@31 -- # IFS=': '
00:05:08.444    16:49:01	-- setup/common.sh@31 -- # read -r var val _
00:05:08.444    16:49:01	-- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:08.444    16:49:01	-- setup/common.sh@32 -- # continue
00:05:08.444    16:49:01	-- setup/common.sh@31 -- # IFS=': '
00:05:08.444    16:49:01	-- setup/common.sh@31 -- # read -r var val _
00:05:08.444    16:49:01	-- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:08.444    16:49:01	-- setup/common.sh@32 -- # continue
00:05:08.444    16:49:01	-- setup/common.sh@31 -- # IFS=': '
00:05:08.444    16:49:01	-- setup/common.sh@31 -- # read -r var val _
00:05:08.444    16:49:01	-- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:08.444    16:49:01	-- setup/common.sh@32 -- # continue
00:05:08.444    16:49:01	-- setup/common.sh@31 -- # IFS=': '
00:05:08.444    16:49:01	-- setup/common.sh@31 -- # read -r var val _
00:05:08.444    16:49:01	-- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:08.444    16:49:01	-- setup/common.sh@32 -- # continue
00:05:08.444    16:49:01	-- setup/common.sh@31 -- # IFS=': '
00:05:08.444    16:49:01	-- setup/common.sh@31 -- # read -r var val _
00:05:08.444    16:49:01	-- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:08.444    16:49:01	-- setup/common.sh@32 -- # continue
00:05:08.444    16:49:01	-- setup/common.sh@31 -- # IFS=': '
00:05:08.444    16:49:01	-- setup/common.sh@31 -- # read -r var val _
00:05:08.444    16:49:01	-- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:08.444    16:49:01	-- setup/common.sh@32 -- # continue
00:05:08.444    16:49:01	-- setup/common.sh@31 -- # IFS=': '
00:05:08.444    16:49:01	-- setup/common.sh@31 -- # read -r var val _
00:05:08.444    16:49:01	-- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:08.444    16:49:01	-- setup/common.sh@32 -- # continue
00:05:08.444    16:49:01	-- setup/common.sh@31 -- # IFS=': '
00:05:08.444    16:49:01	-- setup/common.sh@31 -- # read -r var val _
00:05:08.444    16:49:01	-- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:08.444    16:49:01	-- setup/common.sh@32 -- # continue
00:05:08.444    16:49:01	-- setup/common.sh@31 -- # IFS=': '
00:05:08.444    16:49:01	-- setup/common.sh@31 -- # read -r var val _
00:05:08.444    16:49:01	-- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:08.444    16:49:01	-- setup/common.sh@32 -- # continue
00:05:08.444    16:49:01	-- setup/common.sh@31 -- # IFS=': '
00:05:08.444    16:49:01	-- setup/common.sh@31 -- # read -r var val _
00:05:08.444    16:49:01	-- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:08.444    16:49:01	-- setup/common.sh@32 -- # continue
00:05:08.444    16:49:01	-- setup/common.sh@31 -- # IFS=': '
00:05:08.444    16:49:01	-- setup/common.sh@31 -- # read -r var val _
00:05:08.444    16:49:01	-- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:08.444    16:49:01	-- setup/common.sh@32 -- # continue
00:05:08.444    16:49:01	-- setup/common.sh@31 -- # IFS=': '
00:05:08.444    16:49:01	-- setup/common.sh@31 -- # read -r var val _
00:05:08.444    16:49:01	-- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:08.444    16:49:01	-- setup/common.sh@33 -- # echo 0
00:05:08.444    16:49:01	-- setup/common.sh@33 -- # return 0
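When get_meminfo is called with a node argument, as in the HugePages_Surp 0 lookup just traced, it swaps /proc/meminfo for the per-node file and strips the "Node <n> " prefix those lines carry, so the same var:val split keeps working. The relevant steps (common.sh@22 through @29 above), lifted into a standalone snippet:

    shopt -s extglob
    node=0
    mem_f=/proc/meminfo
    [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # "Node 0 MemTotal: ..." -> "MemTotal: ..."
    printf '%s\n' "${mem[@]}" | head -n 3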
00:05:08.444   16:49:01	-- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:08.444   16:49:01	-- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:08.444   16:49:01	-- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:08.444   16:49:01	-- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:08.444  node0=1024 expecting 1024
00:05:08.444  ************************************
00:05:08.444  END TEST even_2G_alloc
00:05:08.444  ************************************
00:05:08.444   16:49:01	-- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:05:08.444   16:49:01	-- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:05:08.444  
00:05:08.444  real	0m1.047s
00:05:08.444  user	0m0.288s
00:05:08.444  sys	0m0.816s
00:05:08.444   16:49:01	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:05:08.444   16:49:01	-- common/autotest_common.sh@10 -- # set +x
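run_test, invoked below for odd_alloc, is the harness that produces the asterisk START/END banners and the real/user/sys timing bracketing each test; the '[' 2 -le 1 ']' check right after it is its guard against too few arguments. A stripped-down sketch of such a wrapper (the real autotest_common.sh version does more bookkeeping):

    run_test() {
        (( $# >= 2 )) || return 1     # a test name plus the command to run
        local name=$1 stars; shift
        stars=$(printf '*%.0s' {1..36})
        printf '%s\nSTART TEST %s\n%s\n' "$stars" "$name" "$stars"
        time "$@"                     # produces the real/user/sys lines
        printf '%s\nEND TEST %s\n%s\n' "$stars" "$name" "$stars"
    }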
00:05:08.444   16:49:01	-- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc
00:05:08.444   16:49:01	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:05:08.444   16:49:01	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:05:08.444   16:49:01	-- common/autotest_common.sh@10 -- # set +x
00:05:08.444  ************************************
00:05:08.444  START TEST odd_alloc
00:05:08.444  ************************************
00:05:08.444   16:49:01	-- common/autotest_common.sh@1114 -- # odd_alloc
00:05:08.444   16:49:01	-- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176
00:05:08.444   16:49:01	-- setup/hugepages.sh@49 -- # local size=2098176
00:05:08.444   16:49:01	-- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:05:08.444   16:49:01	-- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:05:08.444   16:49:01	-- setup/hugepages.sh@57 -- # nr_hugepages=1025
00:05:08.444   16:49:01	-- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:05:08.444   16:49:01	-- setup/hugepages.sh@62 -- # user_nodes=()
00:05:08.444   16:49:01	-- setup/hugepages.sh@62 -- # local user_nodes
00:05:08.444   16:49:01	-- setup/hugepages.sh@64 -- # local _nr_hugepages=1025
00:05:08.444   16:49:01	-- setup/hugepages.sh@65 -- # local _no_nodes=1
00:05:08.444   16:49:01	-- setup/hugepages.sh@67 -- # nodes_test=()
00:05:08.444   16:49:01	-- setup/hugepages.sh@67 -- # local -g nodes_test
00:05:08.444   16:49:01	-- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:05:08.444   16:49:01	-- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:05:08.444   16:49:01	-- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:05:08.445   16:49:01	-- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025
00:05:08.445   16:49:01	-- setup/hugepages.sh@83 -- # : 0
00:05:08.445   16:49:01	-- setup/hugepages.sh@84 -- # : 0
00:05:08.445   16:49:01	-- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:05:08.445   16:49:01	-- setup/hugepages.sh@160 -- # HUGEMEM=2049
00:05:08.445   16:49:01	-- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes
00:05:08.445   16:49:01	-- setup/hugepages.sh@160 -- # setup output
00:05:08.445   16:49:01	-- setup/common.sh@9 -- # [[ output == output ]]
00:05:08.445   16:49:01	-- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:05:09.012  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev
00:05:09.012  0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
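The odd_alloc sizing above works out as follows: the test requests 2098176 kB, which at the 2048 kB hugepage size reported in the meminfo dumps is half a page more than 1024 pages, so the pool deliberately rounds to an odd 1025; HUGEMEM is the same request expressed in MB. A sketch of the arithmetic, where the round-up rule is an assumption chosen to reproduce the logged values:

    size_kb=2098176                                      # from get_test_nr_hugepages
    hugepage_kb=2048                                     # Hugepagesize in the dumps
    nr=$(( (size_kb + hugepage_kb - 1) / hugepage_kb ))  # 1025
    hugemem_mb=$(( size_kb / 1024 ))                     # 2049, matching HUGEMEM
    echo "nr_hugepages=$nr HUGEMEM=$hugemem_mb"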
00:05:09.952   16:49:02	-- setup/hugepages.sh@161 -- # verify_nr_hugepages
00:05:09.952   16:49:02	-- setup/hugepages.sh@89 -- # local node
00:05:09.952   16:49:02	-- setup/hugepages.sh@90 -- # local sorted_t
00:05:09.952   16:49:02	-- setup/hugepages.sh@91 -- # local sorted_s
00:05:09.952   16:49:02	-- setup/hugepages.sh@92 -- # local surp
00:05:09.952   16:49:02	-- setup/hugepages.sh@93 -- # local resv
00:05:09.952   16:49:02	-- setup/hugepages.sh@94 -- # local anon
00:05:09.952   16:49:02	-- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
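The string compared on the line above, "always [madvise] never", is the content of /sys/kernel/mm/transparent_hugepage/enabled with the active mode bracketed; AnonHugePages is only worth reading because the mode is not [never]. A one-liner that extracts the active mode:

    thp=$(</sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
    [[ $thp =~ \[([a-z]+)\] ]] && echo "active THP mode: ${BASH_REMATCH[1]}"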
00:05:09.952    16:49:02	-- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:09.952    16:49:02	-- setup/common.sh@17 -- # local get=AnonHugePages
00:05:09.952    16:49:02	-- setup/common.sh@18 -- # local node=
00:05:09.952    16:49:02	-- setup/common.sh@19 -- # local var val
00:05:09.952    16:49:02	-- setup/common.sh@20 -- # local mem_f mem
00:05:09.952    16:49:02	-- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:09.952    16:49:02	-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:09.952    16:49:02	-- setup/common.sh@25 -- # [[ -n '' ]]
00:05:09.952    16:49:02	-- setup/common.sh@28 -- # mapfile -t mem
00:05:09.952    16:49:02	-- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:09.952    16:49:02	-- setup/common.sh@31 -- # IFS=': '
00:05:09.952    16:49:02	-- setup/common.sh@31 -- # read -r var val _
00:05:09.952     16:49:02	-- setup/common.sh@16 -- # printf '%s\n' 'MemTotal:       12242972 kB' 'MemFree:         4209492 kB' 'MemAvailable:    9485544 kB' 'Buffers:           39856 kB' 'Cached:          5335120 kB' 'SwapCached:            0 kB' 'Active:          1375496 kB' 'Inactive:        4127652 kB' 'Active(anon):       1064 kB' 'Inactive(anon):   138768 kB' 'Active(file):    1374432 kB' 'Inactive(file):  3988884 kB' 'Unevictable:       29168 kB' 'Mlocked:           27632 kB' 'SwapTotal:             0 kB' 'SwapFree:              0 kB' 'Dirty:               252 kB' 'Writeback:             0 kB' 'AnonPages:        157464 kB' 'Mapped:            67604 kB' 'Shmem:              2596 kB' 'KReclaimable:     234040 kB' 'Slab:             302268 kB' 'SReclaimable:     234040 kB' 'SUnreclaim:        68228 kB' 'KernelStack:        4408 kB' 'PageTables:         3436 kB' 'NFS_Unstable:          0 kB' 'Bounce:                0 kB' 'WritebackTmp:          0 kB' 'CommitLimit:     5071884 kB' 'Committed_AS:     498200 kB' 'VmallocTotal:   34359738367 kB' 'VmallocUsed:       19564 kB' 'VmallocChunk:          0 kB' 'Percpu:             8256 kB' 'HardwareCorrupted:     0 kB' 'AnonHugePages:         0 kB' 'ShmemHugePages:        0 kB' 'ShmemPmdMapped:        0 kB' 'FileHugePages:         0 kB' 'FilePmdMapped:         0 kB' 'HugePages_Total:    1025' 'HugePages_Free:     1025' 'HugePages_Rsvd:        0' 'HugePages_Surp:        0' 'Hugepagesize:       2048 kB' 'Hugetlb:         2099200 kB' 'DirectMap4k:      141164 kB' 'DirectMap2M:     4052992 kB' 'DirectMap1G:    10485760 kB'
00:05:09.952    16:49:02	-- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:09.952    16:49:02	-- setup/common.sh@32 -- # continue
00:05:09.952    16:49:02	-- setup/common.sh@31 -- # IFS=': '
00:05:09.952    16:49:02	-- setup/common.sh@31 -- # read -r var val _
00:05:09.952    16:49:02	-- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:09.952    16:49:02	-- setup/common.sh@32 -- # continue
00:05:09.952    16:49:02	-- setup/common.sh@31 -- # IFS=': '
00:05:09.952    16:49:02	-- setup/common.sh@31 -- # read -r var val _
00:05:09.952    16:49:02	-- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:09.952    16:49:02	-- setup/common.sh@32 -- # continue
00:05:09.952    16:49:02	-- setup/common.sh@31 -- # IFS=': '
00:05:09.952    16:49:02	-- setup/common.sh@31 -- # read -r var val _
00:05:09.952    16:49:02	-- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:09.952    16:49:02	-- setup/common.sh@32 -- # continue
00:05:09.952    16:49:02	-- setup/common.sh@31 -- # IFS=': '
00:05:09.952    16:49:02	-- setup/common.sh@31 -- # read -r var val _
00:05:09.952    16:49:02	-- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:09.952    16:49:02	-- setup/common.sh@32 -- # continue
00:05:09.952    16:49:02	-- setup/common.sh@31 -- # IFS=': '
00:05:09.952    16:49:02	-- setup/common.sh@31 -- # read -r var val _
00:05:09.952    16:49:02	-- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:09.952    16:49:02	-- setup/common.sh@32 -- # continue
00:05:09.952    16:49:02	-- setup/common.sh@31 -- # IFS=': '
00:05:09.952    16:49:02	-- setup/common.sh@31 -- # read -r var val _
00:05:09.952    16:49:02	-- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:09.952    16:49:02	-- setup/common.sh@32 -- # continue
00:05:09.952    16:49:02	-- setup/common.sh@31 -- # IFS=': '
00:05:09.952    16:49:02	-- setup/common.sh@31 -- # read -r var val _
00:05:09.952    16:49:02	-- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:09.952    16:49:02	-- setup/common.sh@32 -- # continue
00:05:09.952    16:49:02	-- setup/common.sh@31 -- # IFS=': '
00:05:09.952    16:49:02	-- setup/common.sh@31 -- # read -r var val _
00:05:09.952    16:49:02	-- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:09.952    16:49:02	-- setup/common.sh@32 -- # continue
00:05:09.952    16:49:02	-- setup/common.sh@31 -- # IFS=': '
00:05:09.952    16:49:02	-- setup/common.sh@31 -- # read -r var val _
00:05:09.952    16:49:02	-- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:09.952    16:49:02	-- setup/common.sh@32 -- # continue
00:05:09.952    16:49:02	-- setup/common.sh@31 -- # IFS=': '
00:05:09.952    16:49:02	-- setup/common.sh@31 -- # read -r var val _
00:05:09.952    16:49:02	-- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:09.952    16:49:02	-- setup/common.sh@32 -- # continue
00:05:09.952    16:49:02	-- setup/common.sh@31 -- # IFS=': '
00:05:09.952    16:49:02	-- setup/common.sh@31 -- # read -r var val _
00:05:09.952    16:49:02	-- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:09.952    16:49:02	-- setup/common.sh@32 -- # continue
00:05:09.952    16:49:02	-- setup/common.sh@31 -- # IFS=': '
00:05:09.952    16:49:02	-- setup/common.sh@31 -- # read -r var val _
00:05:09.952    16:49:02	-- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:09.952    16:49:02	-- setup/common.sh@32 -- # continue
00:05:09.952    16:49:02	-- setup/common.sh@31 -- # IFS=': '
00:05:09.952    16:49:02	-- setup/common.sh@31 -- # read -r var val _
00:05:09.952    16:49:02	-- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:09.952    16:49:02	-- setup/common.sh@32 -- # continue
00:05:09.952    16:49:02	-- setup/common.sh@31 -- # IFS=': '
00:05:09.952    16:49:02	-- setup/common.sh@31 -- # read -r var val _
00:05:09.952    16:49:02	-- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:09.953    16:49:02	-- setup/common.sh@32 -- # continue
00:05:09.953    16:49:02	-- setup/common.sh@31 -- # IFS=': '
00:05:09.953    16:49:02	-- setup/common.sh@31 -- # read -r var val _
00:05:09.953    16:49:02	-- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:09.953    16:49:02	-- setup/common.sh@32 -- # continue
00:05:09.953    16:49:02	-- setup/common.sh@31 -- # IFS=': '
00:05:09.953    16:49:02	-- setup/common.sh@31 -- # read -r var val _
00:05:09.953    16:49:02	-- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:09.953    16:49:02	-- setup/common.sh@32 -- # continue
00:05:09.953    16:49:02	-- setup/common.sh@31 -- # IFS=': '
00:05:09.953    16:49:02	-- setup/common.sh@31 -- # read -r var val _
00:05:09.953    16:49:02	-- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:09.953    16:49:02	-- setup/common.sh@32 -- # continue
00:05:09.953    16:49:02	-- setup/common.sh@31 -- # IFS=': '
00:05:09.953    16:49:02	-- setup/common.sh@31 -- # read -r var val _
00:05:09.953    16:49:02	-- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:09.953    16:49:02	-- setup/common.sh@32 -- # continue
00:05:09.953    16:49:02	-- setup/common.sh@31 -- # IFS=': '
00:05:09.953    16:49:02	-- setup/common.sh@31 -- # read -r var val _
00:05:09.953    16:49:02	-- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:09.953    16:49:02	-- setup/common.sh@32 -- # continue
00:05:09.953    16:49:02	-- setup/common.sh@31 -- # IFS=': '
00:05:09.953    16:49:02	-- setup/common.sh@31 -- # read -r var val _
00:05:09.953    16:49:02	-- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:09.953    16:49:02	-- setup/common.sh@32 -- # continue
00:05:09.953    16:49:02	-- setup/common.sh@31 -- # IFS=': '
00:05:09.953    16:49:02	-- setup/common.sh@31 -- # read -r var val _
00:05:09.953    16:49:02	-- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:09.953    16:49:02	-- setup/common.sh@32 -- # continue
00:05:09.953    16:49:02	-- setup/common.sh@31 -- # IFS=': '
00:05:09.953    16:49:02	-- setup/common.sh@31 -- # read -r var val _
00:05:09.953    16:49:02	-- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:09.953    16:49:02	-- setup/common.sh@32 -- # continue
00:05:09.953    16:49:02	-- setup/common.sh@31 -- # IFS=': '
00:05:09.953    16:49:02	-- setup/common.sh@31 -- # read -r var val _
00:05:09.953    16:49:02	-- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:09.953    16:49:02	-- setup/common.sh@32 -- # continue
00:05:09.953    16:49:02	-- setup/common.sh@31 -- # IFS=': '
00:05:09.953    16:49:02	-- setup/common.sh@31 -- # read -r var val _
00:05:09.953    16:49:02	-- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:09.953    16:49:02	-- setup/common.sh@32 -- # continue
00:05:09.953    16:49:02	-- setup/common.sh@31 -- # IFS=': '
00:05:09.953    16:49:02	-- setup/common.sh@31 -- # read -r var val _
00:05:09.953    16:49:02	-- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:09.953    16:49:02	-- setup/common.sh@32 -- # continue
00:05:09.953    16:49:02	-- setup/common.sh@31 -- # IFS=': '
00:05:09.953    16:49:02	-- setup/common.sh@31 -- # read -r var val _
00:05:09.953    16:49:02	-- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:09.953    16:49:02	-- setup/common.sh@32 -- # continue
00:05:09.953    16:49:02	-- setup/common.sh@31 -- # IFS=': '
00:05:09.953    16:49:02	-- setup/common.sh@31 -- # read -r var val _
00:05:09.953    16:49:02	-- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:09.953    16:49:02	-- setup/common.sh@32 -- # continue
00:05:09.953    16:49:02	-- setup/common.sh@31 -- # IFS=': '
00:05:09.953    16:49:02	-- setup/common.sh@31 -- # read -r var val _
00:05:09.953    16:49:02	-- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:09.953    16:49:02	-- setup/common.sh@32 -- # continue
00:05:09.953    16:49:02	-- setup/common.sh@31 -- # IFS=': '
00:05:09.953    16:49:02	-- setup/common.sh@31 -- # read -r var val _
00:05:09.953    16:49:02	-- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:09.953    16:49:02	-- setup/common.sh@32 -- # continue
00:05:09.953    16:49:02	-- setup/common.sh@31 -- # IFS=': '
00:05:09.953    16:49:02	-- setup/common.sh@31 -- # read -r var val _
00:05:09.953    16:49:02	-- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:09.953    16:49:02	-- setup/common.sh@32 -- # continue
00:05:09.953    16:49:02	-- setup/common.sh@31 -- # IFS=': '
00:05:09.953    16:49:02	-- setup/common.sh@31 -- # read -r var val _
00:05:09.953    16:49:02	-- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:09.953    16:49:02	-- setup/common.sh@32 -- # continue
00:05:09.953    16:49:02	-- setup/common.sh@31 -- # IFS=': '
00:05:09.953    16:49:02	-- setup/common.sh@31 -- # read -r var val _
00:05:09.953    16:49:02	-- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:09.953    16:49:02	-- setup/common.sh@32 -- # continue
00:05:09.953    16:49:02	-- setup/common.sh@31 -- # IFS=': '
00:05:09.953    16:49:02	-- setup/common.sh@31 -- # read -r var val _
00:05:09.953    16:49:02	-- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:09.953    16:49:02	-- setup/common.sh@32 -- # continue
00:05:09.953    16:49:02	-- setup/common.sh@31 -- # IFS=': '
00:05:09.953    16:49:02	-- setup/common.sh@31 -- # read -r var val _
00:05:09.953    16:49:02	-- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:09.953    16:49:02	-- setup/common.sh@32 -- # continue
00:05:09.953    16:49:02	-- setup/common.sh@31 -- # IFS=': '
00:05:09.953    16:49:02	-- setup/common.sh@31 -- # read -r var val _
00:05:09.953    16:49:02	-- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:09.953    16:49:02	-- setup/common.sh@32 -- # continue
00:05:09.953    16:49:02	-- setup/common.sh@31 -- # IFS=': '
00:05:09.953    16:49:02	-- setup/common.sh@31 -- # read -r var val _
00:05:09.953    16:49:02	-- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:09.953    16:49:02	-- setup/common.sh@32 -- # continue
00:05:09.953    16:49:02	-- setup/common.sh@31 -- # IFS=': '
00:05:09.953    16:49:02	-- setup/common.sh@31 -- # read -r var val _
00:05:09.953    16:49:02	-- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:09.953    16:49:02	-- setup/common.sh@33 -- # echo 0
00:05:09.953    16:49:02	-- setup/common.sh@33 -- # return 0
00:05:09.953   16:49:02	-- setup/hugepages.sh@97 -- # anon=0
00:05:09.953    16:49:02	-- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:09.953    16:49:02	-- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:09.953    16:49:02	-- setup/common.sh@18 -- # local node=
00:05:09.953    16:49:02	-- setup/common.sh@19 -- # local var val
00:05:09.953    16:49:02	-- setup/common.sh@20 -- # local mem_f mem
00:05:09.953    16:49:02	-- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:09.953    16:49:02	-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:09.953    16:49:02	-- setup/common.sh@25 -- # [[ -n '' ]]
00:05:09.953    16:49:02	-- setup/common.sh@28 -- # mapfile -t mem
00:05:09.953    16:49:02	-- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:09.953    16:49:02	-- setup/common.sh@31 -- # IFS=': '
00:05:09.953     16:49:02	-- setup/common.sh@16 -- # printf '%s\n' 'MemTotal:       12242972 kB' 'MemFree:         4209744 kB' 'MemAvailable:    9485796 kB' 'Buffers:           39856 kB' 'Cached:          5335120 kB' 'SwapCached:            0 kB' 'Active:          1375488 kB' 'Inactive:        4127740 kB' 'Active(anon):       1056 kB' 'Inactive(anon):   138856 kB' 'Active(file):    1374432 kB' 'Inactive(file):  3988884 kB' 'Unevictable:       29168 kB' 'Mlocked:           27632 kB' 'SwapTotal:             0 kB' 'SwapFree:              0 kB' 'Dirty:               252 kB' 'Writeback:             0 kB' 'AnonPages:        157552 kB' 'Mapped:            67304 kB' 'Shmem:              2596 kB' 'KReclaimable:     234040 kB' 'Slab:             302268 kB' 'SReclaimable:     234040 kB' 'SUnreclaim:        68228 kB' 'KernelStack:        4376 kB' 'PageTables:         3380 kB' 'NFS_Unstable:          0 kB' 'Bounce:                0 kB' 'WritebackTmp:          0 kB' 'CommitLimit:     5071884 kB' 'Committed_AS:     498200 kB' 'VmallocTotal:   34359738367 kB' 'VmallocUsed:       19580 kB' 'VmallocChunk:          0 kB' 'Percpu:             8256 kB' 'HardwareCorrupted:     0 kB' 'AnonHugePages:         0 kB' 'ShmemHugePages:        0 kB' 'ShmemPmdMapped:        0 kB' 'FileHugePages:         0 kB' 'FilePmdMapped:         0 kB' 'HugePages_Total:    1025' 'HugePages_Free:     1025' 'HugePages_Rsvd:        0' 'HugePages_Surp:        0' 'Hugepagesize:       2048 kB' 'Hugetlb:         2099200 kB' 'DirectMap4k:      141164 kB' 'DirectMap2M:     4052992 kB' 'DirectMap1G:    10485760 kB'
00:05:09.953    16:49:02	-- setup/common.sh@31 -- # read -r var val _
00:05:09.953    16:49:02	-- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:09.953    16:49:02	-- setup/common.sh@32 -- # continue
00:05:09.953    16:49:02	-- setup/common.sh@31 -- # IFS=': '
00:05:09.953    16:49:02	-- setup/common.sh@31 -- # read -r var val _
00:05:09.953    16:49:02	-- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:09.953    16:49:02	-- setup/common.sh@32 -- # continue
00:05:09.953    16:49:02	-- setup/common.sh@31 -- # IFS=': '
00:05:09.953    16:49:02	-- setup/common.sh@31 -- # read -r var val _
00:05:09.953    16:49:02	-- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:09.953    16:49:02	-- setup/common.sh@32 -- # continue
00:05:09.953    16:49:02	-- setup/common.sh@31 -- # IFS=': '
00:05:09.953    16:49:02	-- setup/common.sh@31 -- # read -r var val _
00:05:09.953    16:49:02	-- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:09.953    16:49:02	-- setup/common.sh@32 -- # continue
00:05:09.953    16:49:02	-- setup/common.sh@31 -- # IFS=': '
00:05:09.953    16:49:02	-- setup/common.sh@31 -- # read -r var val _
00:05:09.953    16:49:02	-- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:09.953    16:49:02	-- setup/common.sh@32 -- # continue
00:05:09.953    16:49:02	-- setup/common.sh@31 -- # IFS=': '
00:05:09.953    16:49:02	-- setup/common.sh@31 -- # read -r var val _
00:05:09.953    16:49:02	-- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:09.953    16:49:02	-- setup/common.sh@32 -- # continue
00:05:09.953    16:49:02	-- setup/common.sh@31 -- # IFS=': '
00:05:09.953    16:49:02	-- setup/common.sh@31 -- # read -r var val _
00:05:09.953    16:49:02	-- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:09.953    16:49:02	-- setup/common.sh@32 -- # continue
00:05:09.953    16:49:02	-- setup/common.sh@31 -- # IFS=': '
00:05:09.953    16:49:02	-- setup/common.sh@31 -- # read -r var val _
00:05:09.953    16:49:02	-- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:09.953    16:49:02	-- setup/common.sh@32 -- # continue
00:05:09.953    16:49:02	-- setup/common.sh@31 -- # IFS=': '
00:05:09.953    16:49:02	-- setup/common.sh@31 -- # read -r var val _
00:05:09.953    16:49:02	-- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:09.953    16:49:02	-- setup/common.sh@32 -- # continue
00:05:09.953    16:49:02	-- setup/common.sh@31 -- # IFS=': '
00:05:09.953    16:49:02	-- setup/common.sh@31 -- # read -r var val _
00:05:09.953    16:49:02	-- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:09.953    16:49:02	-- setup/common.sh@32 -- # continue
00:05:09.953    16:49:02	-- setup/common.sh@31 -- # IFS=': '
00:05:09.953    16:49:02	-- setup/common.sh@31 -- # read -r var val _
00:05:09.953    16:49:02	-- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:09.954    16:49:02	-- setup/common.sh@32 -- # continue
00:05:09.954    16:49:02	-- setup/common.sh@31 -- # IFS=': '
00:05:09.954    16:49:02	-- setup/common.sh@31 -- # read -r var val _
00:05:09.954    16:49:02	-- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:09.954    16:49:02	-- setup/common.sh@32 -- # continue
00:05:09.954    16:49:02	-- setup/common.sh@31 -- # IFS=': '
00:05:09.954    16:49:02	-- setup/common.sh@31 -- # read -r var val _
00:05:09.954    16:49:02	-- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:09.954    16:49:02	-- setup/common.sh@32 -- # continue
00:05:09.954    16:49:02	-- setup/common.sh@31 -- # IFS=': '
00:05:09.954    16:49:02	-- setup/common.sh@31 -- # read -r var val _
00:05:09.954    16:49:02	-- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:09.954    16:49:02	-- setup/common.sh@32 -- # continue
00:05:09.954    16:49:02	-- setup/common.sh@31 -- # IFS=': '
00:05:09.954    16:49:02	-- setup/common.sh@31 -- # read -r var val _
00:05:09.954    16:49:02	-- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:09.954    16:49:02	-- setup/common.sh@32 -- # continue
00:05:09.954    16:49:02	-- setup/common.sh@31 -- # IFS=': '
00:05:09.954    16:49:02	-- setup/common.sh@31 -- # read -r var val _
00:05:09.954    16:49:02	-- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:09.954    16:49:02	-- setup/common.sh@32 -- # continue
00:05:09.954    16:49:02	-- setup/common.sh@31 -- # IFS=': '
00:05:09.954    16:49:02	-- setup/common.sh@31 -- # read -r var val _
00:05:09.954    16:49:02	-- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:09.954    16:49:02	-- setup/common.sh@32 -- # continue
00:05:09.954    16:49:02	-- setup/common.sh@31 -- # IFS=': '
00:05:09.954    16:49:02	-- setup/common.sh@31 -- # read -r var val _
00:05:09.954    16:49:02	-- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:09.954    16:49:02	-- setup/common.sh@32 -- # continue
00:05:09.954    16:49:02	-- setup/common.sh@31 -- # IFS=': '
00:05:09.954    16:49:02	-- setup/common.sh@31 -- # read -r var val _
00:05:09.954    16:49:02	-- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:09.954    16:49:02	-- setup/common.sh@32 -- # continue
00:05:09.954    16:49:02	-- setup/common.sh@31 -- # IFS=': '
00:05:09.954    16:49:02	-- setup/common.sh@31 -- # read -r var val _
00:05:09.954    16:49:02	-- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:09.954    16:49:02	-- setup/common.sh@32 -- # continue
00:05:09.954    16:49:02	-- setup/common.sh@31 -- # IFS=': '
00:05:09.954    16:49:02	-- setup/common.sh@31 -- # read -r var val _
00:05:09.954    16:49:02	-- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:09.954    16:49:02	-- setup/common.sh@32 -- # continue
00:05:09.954    16:49:02	-- setup/common.sh@31 -- # IFS=': '
00:05:09.954    16:49:02	-- setup/common.sh@31 -- # read -r var val _
00:05:09.954    16:49:02	-- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:09.954    16:49:02	-- setup/common.sh@32 -- # continue
00:05:09.954    16:49:02	-- setup/common.sh@31 -- # IFS=': '
00:05:09.954    16:49:02	-- setup/common.sh@31 -- # read -r var val _
00:05:09.954    16:49:02	-- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:09.954    16:49:02	-- setup/common.sh@32 -- # continue
00:05:09.954    16:49:02	-- setup/common.sh@31 -- # IFS=': '
00:05:09.954    16:49:02	-- setup/common.sh@31 -- # read -r var val _
00:05:09.954    16:49:02	-- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:09.954    16:49:02	-- setup/common.sh@32 -- # continue
00:05:09.954    16:49:02	-- setup/common.sh@31 -- # IFS=': '
00:05:09.954    16:49:02	-- setup/common.sh@31 -- # read -r var val _
00:05:09.954    16:49:02	-- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:09.954    16:49:02	-- setup/common.sh@32 -- # continue
00:05:09.954    16:49:02	-- setup/common.sh@31 -- # IFS=': '
00:05:09.954    16:49:02	-- setup/common.sh@31 -- # read -r var val _
00:05:09.954    16:49:02	-- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:09.954    16:49:02	-- setup/common.sh@32 -- # continue
00:05:09.954    16:49:02	-- setup/common.sh@31 -- # IFS=': '
00:05:09.954    16:49:02	-- setup/common.sh@31 -- # read -r var val _
00:05:09.954    16:49:02	-- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:09.954    16:49:02	-- setup/common.sh@32 -- # continue
00:05:09.954    16:49:02	-- setup/common.sh@31 -- # IFS=': '
00:05:09.954    16:49:02	-- setup/common.sh@31 -- # read -r var val _
00:05:09.954    16:49:02	-- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:09.954    16:49:02	-- setup/common.sh@32 -- # continue
00:05:09.954    16:49:02	-- setup/common.sh@31 -- # IFS=': '
00:05:09.954    16:49:02	-- setup/common.sh@31 -- # read -r var val _
00:05:09.954    16:49:02	-- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:09.954    16:49:02	-- setup/common.sh@32 -- # continue
00:05:09.954    16:49:02	-- setup/common.sh@31 -- # IFS=': '
00:05:09.954    16:49:02	-- setup/common.sh@31 -- # read -r var val _
00:05:09.954    16:49:02	-- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:09.954    16:49:02	-- setup/common.sh@32 -- # continue
00:05:09.954    16:49:02	-- setup/common.sh@31 -- # IFS=': '
00:05:09.954    16:49:02	-- setup/common.sh@31 -- # read -r var val _
00:05:09.954    16:49:02	-- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:09.954    16:49:02	-- setup/common.sh@32 -- # continue
00:05:09.954    16:49:02	-- setup/common.sh@31 -- # IFS=': '
00:05:09.954    16:49:02	-- setup/common.sh@31 -- # read -r var val _
00:05:09.954    16:49:02	-- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:09.954    16:49:02	-- setup/common.sh@32 -- # continue
00:05:09.954    16:49:02	-- setup/common.sh@31 -- # IFS=': '
00:05:09.954    16:49:02	-- setup/common.sh@31 -- # read -r var val _
00:05:09.954    16:49:02	-- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:09.954    16:49:02	-- setup/common.sh@32 -- # continue
00:05:09.954    16:49:02	-- setup/common.sh@31 -- # IFS=': '
00:05:09.954    16:49:02	-- setup/common.sh@31 -- # read -r var val _
00:05:09.954    16:49:02	-- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:09.954    16:49:02	-- setup/common.sh@32 -- # continue
00:05:09.954    16:49:02	-- setup/common.sh@31 -- # IFS=': '
00:05:09.954    16:49:02	-- setup/common.sh@31 -- # read -r var val _
00:05:09.954    16:49:02	-- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:09.954    16:49:02	-- setup/common.sh@32 -- # continue
00:05:09.954    16:49:02	-- setup/common.sh@31 -- # IFS=': '
00:05:09.954    16:49:02	-- setup/common.sh@31 -- # read -r var val _
00:05:09.954    16:49:02	-- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:09.954    16:49:02	-- setup/common.sh@32 -- # continue
00:05:09.954    16:49:02	-- setup/common.sh@31 -- # IFS=': '
00:05:09.954    16:49:02	-- setup/common.sh@31 -- # read -r var val _
00:05:09.954    16:49:02	-- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:09.954    16:49:02	-- setup/common.sh@32 -- # continue
00:05:09.954    16:49:02	-- setup/common.sh@31 -- # IFS=': '
00:05:09.954    16:49:02	-- setup/common.sh@31 -- # read -r var val _
00:05:09.954    16:49:02	-- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:09.954    16:49:02	-- setup/common.sh@32 -- # continue
00:05:09.954    16:49:02	-- setup/common.sh@31 -- # IFS=': '
00:05:09.954    16:49:02	-- setup/common.sh@31 -- # read -r var val _
00:05:09.954    16:49:02	-- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:09.954    16:49:02	-- setup/common.sh@32 -- # continue
00:05:09.954    16:49:02	-- setup/common.sh@31 -- # IFS=': '
00:05:09.954    16:49:02	-- setup/common.sh@31 -- # read -r var val _
00:05:09.954    16:49:02	-- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:09.954    16:49:02	-- setup/common.sh@32 -- # continue
00:05:09.954    16:49:02	-- setup/common.sh@31 -- # IFS=': '
00:05:09.954    16:49:02	-- setup/common.sh@31 -- # read -r var val _
00:05:09.954    16:49:02	-- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:09.954    16:49:02	-- setup/common.sh@32 -- # continue
00:05:09.954    16:49:02	-- setup/common.sh@31 -- # IFS=': '
00:05:09.954    16:49:02	-- setup/common.sh@31 -- # read -r var val _
00:05:09.954    16:49:02	-- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:09.954    16:49:02	-- setup/common.sh@32 -- # continue
00:05:09.954    16:49:02	-- setup/common.sh@31 -- # IFS=': '
00:05:09.954    16:49:02	-- setup/common.sh@31 -- # read -r var val _
00:05:09.954    16:49:02	-- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:09.954    16:49:02	-- setup/common.sh@32 -- # continue
00:05:09.954    16:49:02	-- setup/common.sh@31 -- # IFS=': '
00:05:09.954    16:49:02	-- setup/common.sh@31 -- # read -r var val _
00:05:09.954    16:49:02	-- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:09.954    16:49:02	-- setup/common.sh@32 -- # continue
00:05:09.954    16:49:02	-- setup/common.sh@31 -- # IFS=': '
00:05:09.954    16:49:02	-- setup/common.sh@31 -- # read -r var val _
00:05:09.954    16:49:02	-- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:09.954    16:49:02	-- setup/common.sh@32 -- # continue
00:05:09.954    16:49:02	-- setup/common.sh@31 -- # IFS=': '
00:05:09.954    16:49:02	-- setup/common.sh@31 -- # read -r var val _
00:05:09.954    16:49:02	-- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:09.954    16:49:02	-- setup/common.sh@33 -- # echo 0
00:05:09.954    16:49:02	-- setup/common.sh@33 -- # return 0
00:05:09.954   16:49:02	-- setup/hugepages.sh@99 -- # surp=0
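surp above is the HugePages_Surp value just fetched; together with HugePages_Rsvd (fetched next) it feeds the hugepages.sh@110-style check seen earlier, total == nr_hugepages + surp + resv. The same verification condensed into a few lines:

    total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
    surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)
    resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)
    nr_hugepages=1025   # odd_alloc's requested pool, per the trace above
    (( total == nr_hugepages + surp + resv )) && echo "pool verified: $total pages"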
00:05:09.954    16:49:02	-- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:09.954    16:49:02	-- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:05:09.954    16:49:02	-- setup/common.sh@18 -- # local node=
00:05:09.954    16:49:02	-- setup/common.sh@19 -- # local var val
00:05:09.954    16:49:02	-- setup/common.sh@20 -- # local mem_f mem
00:05:09.954    16:49:02	-- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:09.954    16:49:02	-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:09.954    16:49:02	-- setup/common.sh@25 -- # [[ -n '' ]]
00:05:09.954    16:49:02	-- setup/common.sh@28 -- # mapfile -t mem
00:05:09.954    16:49:02	-- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:09.954    16:49:02	-- setup/common.sh@31 -- # IFS=': '
00:05:09.955     16:49:02	-- setup/common.sh@16 -- # printf '%s\n' 'MemTotal:       12242972 kB' 'MemFree:         4209744 kB' 'MemAvailable:    9485796 kB' 'Buffers:           39856 kB' 'Cached:          5335120 kB' 'SwapCached:            0 kB' 'Active:          1375480 kB' 'Inactive:        4127616 kB' 'Active(anon):       1048 kB' 'Inactive(anon):   138732 kB' 'Active(file):    1374432 kB' 'Inactive(file):  3988884 kB' 'Unevictable:       29168 kB' 'Mlocked:           27632 kB' 'SwapTotal:             0 kB' 'SwapFree:              0 kB' 'Dirty:               252 kB' 'Writeback:             0 kB' 'AnonPages:        157436 kB' 'Mapped:            67272 kB' 'Shmem:              2596 kB' 'KReclaimable:     234040 kB' 'Slab:             302324 kB' 'SReclaimable:     234040 kB' 'SUnreclaim:        68284 kB' 'KernelStack:        4384 kB' 'PageTables:         3460 kB' 'NFS_Unstable:          0 kB' 'Bounce:                0 kB' 'WritebackTmp:          0 kB' 'CommitLimit:     5071884 kB' 'Committed_AS:     498200 kB' 'VmallocTotal:   34359738367 kB' 'VmallocUsed:       19580 kB' 'VmallocChunk:          0 kB' 'Percpu:             8256 kB' 'HardwareCorrupted:     0 kB' 'AnonHugePages:         0 kB' 'ShmemHugePages:        0 kB' 'ShmemPmdMapped:        0 kB' 'FileHugePages:         0 kB' 'FilePmdMapped:         0 kB' 'HugePages_Total:    1025' 'HugePages_Free:     1025' 'HugePages_Rsvd:        0' 'HugePages_Surp:        0' 'Hugepagesize:       2048 kB' 'Hugetlb:         2099200 kB' 'DirectMap4k:      141164 kB' 'DirectMap2M:     4052992 kB' 'DirectMap1G:    10485760 kB'
00:05:09.955    16:49:02	-- setup/common.sh@31 -- # read -r var val _
00:05:09.955    16:49:02	-- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:09.955    16:49:02	-- setup/common.sh@32 -- # continue
00:05:09.955    16:49:02	-- setup/common.sh@31 -- # IFS=': '
00:05:09.955    16:49:02	-- setup/common.sh@31 -- # read -r var val _
00:05:09.955    16:49:02	-- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:09.955    16:49:02	-- setup/common.sh@32 -- # continue
00:05:09.955    16:49:02	-- setup/common.sh@31 -- # IFS=': '
00:05:09.955    16:49:02	-- setup/common.sh@31 -- # read -r var val _
00:05:09.955    16:49:02	-- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:09.955    16:49:02	-- setup/common.sh@32 -- # continue
00:05:09.955    16:49:02	-- setup/common.sh@31 -- # IFS=': '
00:05:09.955    16:49:02	-- setup/common.sh@31 -- # read -r var val _
00:05:09.955    16:49:02	-- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:09.955    16:49:02	-- setup/common.sh@32 -- # continue
00:05:09.955    16:49:02	-- setup/common.sh@31 -- # IFS=': '
00:05:09.955    16:49:02	-- setup/common.sh@31 -- # read -r var val _
00:05:09.955    16:49:02	-- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:09.955    16:49:02	-- setup/common.sh@32 -- # continue
00:05:09.955    16:49:02	-- setup/common.sh@31 -- # IFS=': '
00:05:09.955    16:49:02	-- setup/common.sh@31 -- # read -r var val _
00:05:09.955    16:49:02	-- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:09.955    16:49:02	-- setup/common.sh@32 -- # continue
00:05:09.955    16:49:02	-- setup/common.sh@31 -- # IFS=': '
00:05:09.955    16:49:02	-- setup/common.sh@31 -- # read -r var val _
00:05:09.955    16:49:02	-- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:09.955    16:49:02	-- setup/common.sh@32 -- # continue
00:05:09.955    16:49:02	-- setup/common.sh@31 -- # IFS=': '
00:05:09.955    16:49:02	-- setup/common.sh@31 -- # read -r var val _
00:05:09.955    16:49:02	-- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:09.955    16:49:02	-- setup/common.sh@32 -- # continue
00:05:09.955    16:49:02	-- setup/common.sh@31 -- # IFS=': '
00:05:09.955    16:49:02	-- setup/common.sh@31 -- # read -r var val _
00:05:09.955    16:49:02	-- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:09.955    16:49:02	-- setup/common.sh@32 -- # continue
00:05:09.955    16:49:02	-- setup/common.sh@31 -- # IFS=': '
00:05:09.955    16:49:02	-- setup/common.sh@31 -- # read -r var val _
00:05:09.955    16:49:02	-- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:09.955    16:49:02	-- setup/common.sh@32 -- # continue
00:05:09.955    16:49:02	-- setup/common.sh@31 -- # IFS=': '
00:05:09.955    16:49:02	-- setup/common.sh@31 -- # read -r var val _
00:05:09.955    16:49:02	-- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:09.955    16:49:02	-- setup/common.sh@32 -- # continue
00:05:09.955    16:49:02	-- setup/common.sh@31 -- # IFS=': '
00:05:09.955    16:49:02	-- setup/common.sh@31 -- # read -r var val _
00:05:09.955    16:49:02	-- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:09.955    16:49:02	-- setup/common.sh@32 -- # continue
00:05:09.955    16:49:02	-- setup/common.sh@31 -- # IFS=': '
00:05:09.955    16:49:02	-- setup/common.sh@31 -- # read -r var val _
00:05:09.955    16:49:02	-- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:09.955    16:49:02	-- setup/common.sh@32 -- # continue
00:05:09.955    16:49:02	-- setup/common.sh@31 -- # IFS=': '
00:05:09.955    16:49:02	-- setup/common.sh@31 -- # read -r var val _
00:05:09.955    16:49:02	-- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:09.955    16:49:02	-- setup/common.sh@32 -- # continue
00:05:09.955    16:49:02	-- setup/common.sh@31 -- # IFS=': '
00:05:09.955    16:49:02	-- setup/common.sh@31 -- # read -r var val _
00:05:09.955    16:49:02	-- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:09.955    16:49:02	-- setup/common.sh@32 -- # continue
00:05:09.955    16:49:02	-- setup/common.sh@31 -- # IFS=': '
00:05:09.955    16:49:02	-- setup/common.sh@31 -- # read -r var val _
00:05:09.955    16:49:02	-- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:09.955    16:49:02	-- setup/common.sh@32 -- # continue
00:05:09.955    16:49:02	-- setup/common.sh@31 -- # IFS=': '
00:05:09.955    16:49:02	-- setup/common.sh@31 -- # read -r var val _
00:05:09.955    16:49:02	-- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:09.955    16:49:02	-- setup/common.sh@32 -- # continue
00:05:09.955    16:49:02	-- setup/common.sh@31 -- # IFS=': '
00:05:09.955    16:49:02	-- setup/common.sh@31 -- # read -r var val _
00:05:09.955    16:49:02	-- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:09.955    16:49:02	-- setup/common.sh@32 -- # continue
00:05:09.955    16:49:02	-- setup/common.sh@31 -- # IFS=': '
00:05:09.955    16:49:02	-- setup/common.sh@31 -- # read -r var val _
00:05:09.955    16:49:02	-- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:09.955    16:49:02	-- setup/common.sh@32 -- # continue
00:05:09.955    16:49:02	-- setup/common.sh@31 -- # IFS=': '
00:05:09.955    16:49:02	-- setup/common.sh@31 -- # read -r var val _
00:05:09.955    16:49:02	-- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:09.955    16:49:02	-- setup/common.sh@32 -- # continue
00:05:09.955    16:49:02	-- setup/common.sh@31 -- # IFS=': '
00:05:09.955    16:49:02	-- setup/common.sh@31 -- # read -r var val _
00:05:09.955    16:49:02	-- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:09.955    16:49:02	-- setup/common.sh@32 -- # continue
00:05:09.955    16:49:02	-- setup/common.sh@31 -- # IFS=': '
00:05:09.955    16:49:02	-- setup/common.sh@31 -- # read -r var val _
00:05:09.955    16:49:02	-- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:09.955    16:49:02	-- setup/common.sh@32 -- # continue
00:05:09.955    16:49:02	-- setup/common.sh@31 -- # IFS=': '
00:05:09.955    16:49:02	-- setup/common.sh@31 -- # read -r var val _
00:05:09.955    16:49:02	-- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:09.955    16:49:02	-- setup/common.sh@32 -- # continue
00:05:09.955    16:49:02	-- setup/common.sh@31 -- # IFS=': '
00:05:09.955    16:49:02	-- setup/common.sh@31 -- # read -r var val _
00:05:09.955    16:49:02	-- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:09.955    16:49:02	-- setup/common.sh@32 -- # continue
00:05:09.955    16:49:02	-- setup/common.sh@31 -- # IFS=': '
00:05:09.955    16:49:02	-- setup/common.sh@31 -- # read -r var val _
00:05:09.955    16:49:02	-- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:09.955    16:49:02	-- setup/common.sh@32 -- # continue
00:05:09.955    16:49:02	-- setup/common.sh@31 -- # IFS=': '
00:05:09.955    16:49:02	-- setup/common.sh@31 -- # read -r var val _
00:05:09.955    16:49:02	-- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:09.955    16:49:02	-- setup/common.sh@32 -- # continue
00:05:09.955    16:49:02	-- setup/common.sh@31 -- # IFS=': '
00:05:09.955    16:49:02	-- setup/common.sh@31 -- # read -r var val _
00:05:09.955    16:49:02	-- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:09.955    16:49:02	-- setup/common.sh@32 -- # continue
00:05:09.955    16:49:02	-- setup/common.sh@31 -- # IFS=': '
00:05:09.955    16:49:02	-- setup/common.sh@31 -- # read -r var val _
00:05:09.955    16:49:02	-- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:09.955    16:49:02	-- setup/common.sh@32 -- # continue
00:05:09.955    16:49:02	-- setup/common.sh@31 -- # IFS=': '
00:05:09.955    16:49:02	-- setup/common.sh@31 -- # read -r var val _
00:05:09.955    16:49:02	-- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:09.955    16:49:02	-- setup/common.sh@32 -- # continue
00:05:09.955    16:49:02	-- setup/common.sh@31 -- # IFS=': '
00:05:09.955    16:49:02	-- setup/common.sh@31 -- # read -r var val _
00:05:09.955    16:49:02	-- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:09.955    16:49:02	-- setup/common.sh@32 -- # continue
00:05:09.955    16:49:02	-- setup/common.sh@31 -- # IFS=': '
00:05:09.955    16:49:02	-- setup/common.sh@31 -- # read -r var val _
00:05:09.955    16:49:02	-- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:09.955    16:49:02	-- setup/common.sh@32 -- # continue
00:05:09.955    16:49:02	-- setup/common.sh@31 -- # IFS=': '
00:05:09.955    16:49:02	-- setup/common.sh@31 -- # read -r var val _
00:05:09.955    16:49:02	-- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:09.955    16:49:02	-- setup/common.sh@32 -- # continue
00:05:09.955    16:49:02	-- setup/common.sh@31 -- # IFS=': '
00:05:09.955    16:49:02	-- setup/common.sh@31 -- # read -r var val _
00:05:09.955    16:49:02	-- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:09.955    16:49:02	-- setup/common.sh@32 -- # continue
00:05:09.955    16:49:02	-- setup/common.sh@31 -- # IFS=': '
00:05:09.955    16:49:02	-- setup/common.sh@31 -- # read -r var val _
00:05:09.955    16:49:02	-- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:09.955    16:49:02	-- setup/common.sh@32 -- # continue
00:05:09.955    16:49:02	-- setup/common.sh@31 -- # IFS=': '
00:05:09.955    16:49:02	-- setup/common.sh@31 -- # read -r var val _
00:05:09.955    16:49:02	-- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:09.955    16:49:02	-- setup/common.sh@32 -- # continue
00:05:09.955    16:49:02	-- setup/common.sh@31 -- # IFS=': '
00:05:09.955    16:49:02	-- setup/common.sh@31 -- # read -r var val _
00:05:09.955    16:49:02	-- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:09.955    16:49:02	-- setup/common.sh@32 -- # continue
00:05:09.955    16:49:02	-- setup/common.sh@31 -- # IFS=': '
00:05:09.955    16:49:02	-- setup/common.sh@31 -- # read -r var val _
00:05:09.955    16:49:02	-- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:09.955    16:49:02	-- setup/common.sh@32 -- # continue
00:05:09.955    16:49:02	-- setup/common.sh@31 -- # IFS=': '
00:05:09.955    16:49:02	-- setup/common.sh@31 -- # read -r var val _
00:05:09.955    16:49:02	-- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:09.955    16:49:02	-- setup/common.sh@32 -- # continue
00:05:09.955    16:49:02	-- setup/common.sh@31 -- # IFS=': '
00:05:09.955    16:49:02	-- setup/common.sh@31 -- # read -r var val _
00:05:09.955    16:49:02	-- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:09.955    16:49:02	-- setup/common.sh@32 -- # continue
00:05:09.956    16:49:02	-- setup/common.sh@31 -- # IFS=': '
00:05:09.956    16:49:02	-- setup/common.sh@31 -- # read -r var val _
00:05:09.956    16:49:02	-- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:09.956    16:49:02	-- setup/common.sh@32 -- # continue
00:05:09.956    16:49:02	-- setup/common.sh@31 -- # IFS=': '
00:05:09.956    16:49:02	-- setup/common.sh@31 -- # read -r var val _
00:05:09.956    16:49:02	-- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:09.956    16:49:02	-- setup/common.sh@32 -- # continue
00:05:09.956    16:49:02	-- setup/common.sh@31 -- # IFS=': '
00:05:09.956    16:49:02	-- setup/common.sh@31 -- # read -r var val _
00:05:09.956    16:49:02	-- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:09.956    16:49:02	-- setup/common.sh@32 -- # continue
00:05:09.956    16:49:02	-- setup/common.sh@31 -- # IFS=': '
00:05:09.956    16:49:02	-- setup/common.sh@31 -- # read -r var val _
00:05:09.956    16:49:02	-- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:09.956    16:49:02	-- setup/common.sh@32 -- # continue
00:05:09.956    16:49:02	-- setup/common.sh@31 -- # IFS=': '
00:05:09.956    16:49:02	-- setup/common.sh@31 -- # read -r var val _
00:05:09.956    16:49:02	-- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:09.956    16:49:02	-- setup/common.sh@32 -- # continue
00:05:09.956    16:49:02	-- setup/common.sh@31 -- # IFS=': '
00:05:09.956    16:49:02	-- setup/common.sh@31 -- # read -r var val _
00:05:09.956    16:49:02	-- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:09.956    16:49:02	-- setup/common.sh@33 -- # echo 0
00:05:09.956    16:49:02	-- setup/common.sh@33 -- # return 0
00:05:09.956  nr_hugepages=1025
00:05:09.956  resv_hugepages=0
00:05:09.956  surplus_hugepages=0
00:05:09.956  anon_hugepages=0
00:05:09.956   16:49:02	-- setup/hugepages.sh@100 -- # resv=0
00:05:09.956   16:49:02	-- setup/hugepages.sh@102 -- # echo nr_hugepages=1025
00:05:09.956   16:49:02	-- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:05:09.956   16:49:02	-- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:05:09.956   16:49:02	-- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:05:09.956   16:49:02	-- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv ))
00:05:09.956   16:49:02	-- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages ))
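[editor's note] The echoed nr_hugepages/resv/surplus/anon values feed a consistency check before the per-node breakdown is trusted. A short sketch of that accounting step at hugepages.sh@107-@109, reusing the get_meminfo sketch above (reconstructed, not verbatim):

# The kernel's HugePages_Total must equal the requested pages plus any
# surplus and reserved pages; values below match this run of the trace.
nr_hugepages=1025 surp=0 resv=0
total=$(get_meminfo HugePages_Total)
(( total == nr_hugepages + surp + resv )) || exit 1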
00:05:09.956    16:49:02	-- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:05:09.956    16:49:02	-- setup/common.sh@17 -- # local get=HugePages_Total
00:05:09.956    16:49:02	-- setup/common.sh@18 -- # local node=
00:05:09.956    16:49:02	-- setup/common.sh@19 -- # local var val
00:05:09.956    16:49:02	-- setup/common.sh@20 -- # local mem_f mem
00:05:09.956    16:49:02	-- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:09.956    16:49:02	-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:09.956    16:49:02	-- setup/common.sh@25 -- # [[ -n '' ]]
00:05:09.956    16:49:02	-- setup/common.sh@28 -- # mapfile -t mem
00:05:09.956    16:49:02	-- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:09.956    16:49:02	-- setup/common.sh@31 -- # IFS=': '
00:05:09.956    16:49:02	-- setup/common.sh@31 -- # read -r var val _
00:05:09.956     16:49:02	-- setup/common.sh@16 -- # printf '%s\n' 'MemTotal:       12242972 kB' 'MemFree:         4209744 kB' 'MemAvailable:    9485796 kB' 'Buffers:           39856 kB' 'Cached:          5335120 kB' 'SwapCached:            0 kB' 'Active:          1375480 kB' 'Inactive:        4127644 kB' 'Active(anon):       1048 kB' 'Inactive(anon):   138760 kB' 'Active(file):    1374432 kB' 'Inactive(file):  3988884 kB' 'Unevictable:       29168 kB' 'Mlocked:           27632 kB' 'SwapTotal:             0 kB' 'SwapFree:              0 kB' 'Dirty:               252 kB' 'Writeback:             0 kB' 'AnonPages:        157160 kB' 'Mapped:            67272 kB' 'Shmem:              2596 kB' 'KReclaimable:     234040 kB' 'Slab:             302324 kB' 'SReclaimable:     234040 kB' 'SUnreclaim:        68284 kB' 'KernelStack:        4368 kB' 'PageTables:         3416 kB' 'NFS_Unstable:          0 kB' 'Bounce:                0 kB' 'WritebackTmp:          0 kB' 'CommitLimit:     5071884 kB' 'Committed_AS:     498200 kB' 'VmallocTotal:   34359738367 kB' 'VmallocUsed:       19612 kB' 'VmallocChunk:          0 kB' 'Percpu:             8256 kB' 'HardwareCorrupted:     0 kB' 'AnonHugePages:         0 kB' 'ShmemHugePages:        0 kB' 'ShmemPmdMapped:        0 kB' 'FileHugePages:         0 kB' 'FilePmdMapped:         0 kB' 'HugePages_Total:    1025' 'HugePages_Free:     1025' 'HugePages_Rsvd:        0' 'HugePages_Surp:        0' 'Hugepagesize:       2048 kB' 'Hugetlb:         2099200 kB' 'DirectMap4k:      141164 kB' 'DirectMap2M:     4052992 kB' 'DirectMap1G:    10485760 kB'
00:05:09.956    16:49:02	-- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:09.956    16:49:02	-- setup/common.sh@32 -- # continue
00:05:09.956    16:49:02	-- setup/common.sh@31 -- # IFS=': '
00:05:09.956    16:49:02	-- setup/common.sh@31 -- # read -r var val _
00:05:09.956    16:49:02	-- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:09.956    16:49:02	-- setup/common.sh@32 -- # continue
00:05:09.956    16:49:02	-- setup/common.sh@31 -- # IFS=': '
00:05:09.956    16:49:02	-- setup/common.sh@31 -- # read -r var val _
00:05:09.956    16:49:02	-- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:09.956    16:49:02	-- setup/common.sh@32 -- # continue
00:05:09.956    16:49:02	-- setup/common.sh@31 -- # IFS=': '
00:05:09.956    16:49:02	-- setup/common.sh@31 -- # read -r var val _
00:05:09.956    16:49:02	-- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:09.956    16:49:02	-- setup/common.sh@32 -- # continue
00:05:09.956    16:49:02	-- setup/common.sh@31 -- # IFS=': '
00:05:09.956    16:49:02	-- setup/common.sh@31 -- # read -r var val _
00:05:09.956    16:49:02	-- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:09.956    16:49:02	-- setup/common.sh@32 -- # continue
00:05:09.956    16:49:02	-- setup/common.sh@31 -- # IFS=': '
00:05:09.956    16:49:02	-- setup/common.sh@31 -- # read -r var val _
00:05:09.956    16:49:02	-- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:09.956    16:49:02	-- setup/common.sh@32 -- # continue
00:05:09.956    16:49:02	-- setup/common.sh@31 -- # IFS=': '
00:05:09.956    16:49:02	-- setup/common.sh@31 -- # read -r var val _
00:05:09.956    16:49:02	-- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:09.956    16:49:02	-- setup/common.sh@32 -- # continue
00:05:09.956    16:49:02	-- setup/common.sh@31 -- # IFS=': '
00:05:09.956    16:49:02	-- setup/common.sh@31 -- # read -r var val _
00:05:09.956    16:49:02	-- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:09.956    16:49:02	-- setup/common.sh@32 -- # continue
00:05:09.956    16:49:02	-- setup/common.sh@31 -- # IFS=': '
00:05:09.956    16:49:02	-- setup/common.sh@31 -- # read -r var val _
00:05:09.956    16:49:02	-- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:09.956    16:49:02	-- setup/common.sh@32 -- # continue
00:05:09.956    16:49:02	-- setup/common.sh@31 -- # IFS=': '
00:05:09.956    16:49:02	-- setup/common.sh@31 -- # read -r var val _
00:05:09.956    16:49:02	-- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:09.956    16:49:02	-- setup/common.sh@32 -- # continue
00:05:09.956    16:49:02	-- setup/common.sh@31 -- # IFS=': '
00:05:09.956    16:49:02	-- setup/common.sh@31 -- # read -r var val _
00:05:09.956    16:49:02	-- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:09.956    16:49:02	-- setup/common.sh@32 -- # continue
00:05:09.956    16:49:02	-- setup/common.sh@31 -- # IFS=': '
00:05:09.956    16:49:02	-- setup/common.sh@31 -- # read -r var val _
00:05:09.956    16:49:02	-- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:09.956    16:49:02	-- setup/common.sh@32 -- # continue
00:05:09.956    16:49:02	-- setup/common.sh@31 -- # IFS=': '
00:05:09.956    16:49:02	-- setup/common.sh@31 -- # read -r var val _
00:05:09.956    16:49:02	-- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:09.956    16:49:02	-- setup/common.sh@32 -- # continue
00:05:09.956    16:49:02	-- setup/common.sh@31 -- # IFS=': '
00:05:09.956    16:49:02	-- setup/common.sh@31 -- # read -r var val _
00:05:09.956    16:49:02	-- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:09.956    16:49:02	-- setup/common.sh@32 -- # continue
00:05:09.956    16:49:02	-- setup/common.sh@31 -- # IFS=': '
00:05:09.956    16:49:02	-- setup/common.sh@31 -- # read -r var val _
00:05:09.956    16:49:02	-- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:09.956    16:49:02	-- setup/common.sh@32 -- # continue
00:05:09.956    16:49:02	-- setup/common.sh@31 -- # IFS=': '
00:05:09.956    16:49:02	-- setup/common.sh@31 -- # read -r var val _
00:05:09.956    16:49:02	-- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:09.956    16:49:02	-- setup/common.sh@32 -- # continue
00:05:09.956    16:49:02	-- setup/common.sh@31 -- # IFS=': '
00:05:09.956    16:49:02	-- setup/common.sh@31 -- # read -r var val _
00:05:09.956    16:49:02	-- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:09.956    16:49:02	-- setup/common.sh@32 -- # continue
00:05:09.956    16:49:02	-- setup/common.sh@31 -- # IFS=': '
00:05:09.956    16:49:02	-- setup/common.sh@31 -- # read -r var val _
00:05:09.956    16:49:02	-- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:09.956    16:49:02	-- setup/common.sh@32 -- # continue
00:05:09.956    16:49:02	-- setup/common.sh@31 -- # IFS=': '
00:05:09.956    16:49:02	-- setup/common.sh@31 -- # read -r var val _
00:05:09.956    16:49:02	-- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:09.956    16:49:02	-- setup/common.sh@32 -- # continue
00:05:09.956    16:49:02	-- setup/common.sh@31 -- # IFS=': '
00:05:09.956    16:49:02	-- setup/common.sh@31 -- # read -r var val _
00:05:09.956    16:49:02	-- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:09.956    16:49:02	-- setup/common.sh@32 -- # continue
00:05:09.956    16:49:02	-- setup/common.sh@31 -- # IFS=': '
00:05:09.956    16:49:02	-- setup/common.sh@31 -- # read -r var val _
00:05:09.956    16:49:02	-- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:09.956    16:49:02	-- setup/common.sh@32 -- # continue
00:05:09.956    16:49:02	-- setup/common.sh@31 -- # IFS=': '
00:05:09.956    16:49:02	-- setup/common.sh@31 -- # read -r var val _
00:05:09.956    16:49:02	-- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:09.956    16:49:02	-- setup/common.sh@32 -- # continue
00:05:09.956    16:49:02	-- setup/common.sh@31 -- # IFS=': '
00:05:09.956    16:49:02	-- setup/common.sh@31 -- # read -r var val _
00:05:09.956    16:49:02	-- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:09.956    16:49:02	-- setup/common.sh@32 -- # continue
00:05:09.956    16:49:02	-- setup/common.sh@31 -- # IFS=': '
00:05:09.956    16:49:02	-- setup/common.sh@31 -- # read -r var val _
00:05:09.956    16:49:02	-- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:09.956    16:49:02	-- setup/common.sh@32 -- # continue
00:05:09.956    16:49:02	-- setup/common.sh@31 -- # IFS=': '
00:05:09.956    16:49:02	-- setup/common.sh@31 -- # read -r var val _
00:05:09.956    16:49:02	-- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:09.956    16:49:02	-- setup/common.sh@32 -- # continue
00:05:09.956    16:49:02	-- setup/common.sh@31 -- # IFS=': '
00:05:09.956    16:49:02	-- setup/common.sh@31 -- # read -r var val _
00:05:09.957    16:49:02	-- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:09.957    16:49:02	-- setup/common.sh@32 -- # continue
00:05:09.957    16:49:02	-- setup/common.sh@31 -- # IFS=': '
00:05:09.957    16:49:02	-- setup/common.sh@31 -- # read -r var val _
00:05:09.957    16:49:02	-- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:09.957    16:49:02	-- setup/common.sh@32 -- # continue
00:05:09.957    16:49:02	-- setup/common.sh@31 -- # IFS=': '
00:05:09.957    16:49:02	-- setup/common.sh@31 -- # read -r var val _
00:05:09.957    16:49:02	-- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:09.957    16:49:02	-- setup/common.sh@32 -- # continue
00:05:09.957    16:49:02	-- setup/common.sh@31 -- # IFS=': '
00:05:09.957    16:49:02	-- setup/common.sh@31 -- # read -r var val _
00:05:09.957    16:49:02	-- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:09.957    16:49:02	-- setup/common.sh@32 -- # continue
00:05:09.957    16:49:02	-- setup/common.sh@31 -- # IFS=': '
00:05:09.957    16:49:02	-- setup/common.sh@31 -- # read -r var val _
00:05:09.957    16:49:02	-- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:09.957    16:49:02	-- setup/common.sh@32 -- # continue
00:05:09.957    16:49:02	-- setup/common.sh@31 -- # IFS=': '
00:05:09.957    16:49:02	-- setup/common.sh@31 -- # read -r var val _
00:05:09.957    16:49:02	-- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:09.957    16:49:02	-- setup/common.sh@32 -- # continue
00:05:09.957    16:49:02	-- setup/common.sh@31 -- # IFS=': '
00:05:09.957    16:49:02	-- setup/common.sh@31 -- # read -r var val _
00:05:09.957    16:49:02	-- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:09.957    16:49:02	-- setup/common.sh@32 -- # continue
00:05:09.957    16:49:02	-- setup/common.sh@31 -- # IFS=': '
00:05:09.957    16:49:02	-- setup/common.sh@31 -- # read -r var val _
00:05:09.957    16:49:02	-- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:09.957    16:49:02	-- setup/common.sh@32 -- # continue
00:05:09.957    16:49:02	-- setup/common.sh@31 -- # IFS=': '
00:05:09.957    16:49:02	-- setup/common.sh@31 -- # read -r var val _
00:05:09.957    16:49:02	-- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:09.957    16:49:02	-- setup/common.sh@32 -- # continue
00:05:09.957    16:49:02	-- setup/common.sh@31 -- # IFS=': '
00:05:09.957    16:49:02	-- setup/common.sh@31 -- # read -r var val _
00:05:09.957    16:49:02	-- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:09.957    16:49:02	-- setup/common.sh@32 -- # continue
00:05:09.957    16:49:02	-- setup/common.sh@31 -- # IFS=': '
00:05:09.957    16:49:02	-- setup/common.sh@31 -- # read -r var val _
00:05:09.957    16:49:02	-- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:09.957    16:49:02	-- setup/common.sh@32 -- # continue
00:05:09.957    16:49:02	-- setup/common.sh@31 -- # IFS=': '
00:05:09.957    16:49:02	-- setup/common.sh@31 -- # read -r var val _
00:05:09.957    16:49:02	-- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:09.957    16:49:02	-- setup/common.sh@32 -- # continue
00:05:09.957    16:49:02	-- setup/common.sh@31 -- # IFS=': '
00:05:09.957    16:49:02	-- setup/common.sh@31 -- # read -r var val _
00:05:09.957    16:49:02	-- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:09.957    16:49:02	-- setup/common.sh@32 -- # continue
00:05:09.957    16:49:02	-- setup/common.sh@31 -- # IFS=': '
00:05:09.957    16:49:02	-- setup/common.sh@31 -- # read -r var val _
00:05:09.957    16:49:02	-- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:09.957    16:49:02	-- setup/common.sh@32 -- # continue
00:05:09.957    16:49:02	-- setup/common.sh@31 -- # IFS=': '
00:05:09.957    16:49:02	-- setup/common.sh@31 -- # read -r var val _
00:05:09.957    16:49:02	-- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:09.957    16:49:02	-- setup/common.sh@32 -- # continue
00:05:09.957    16:49:02	-- setup/common.sh@31 -- # IFS=': '
00:05:09.957    16:49:02	-- setup/common.sh@31 -- # read -r var val _
00:05:09.957    16:49:02	-- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:09.957    16:49:02	-- setup/common.sh@32 -- # continue
00:05:09.957    16:49:02	-- setup/common.sh@31 -- # IFS=': '
00:05:09.957    16:49:02	-- setup/common.sh@31 -- # read -r var val _
00:05:09.957    16:49:02	-- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:09.957    16:49:02	-- setup/common.sh@32 -- # continue
00:05:09.957    16:49:02	-- setup/common.sh@31 -- # IFS=': '
00:05:09.957    16:49:02	-- setup/common.sh@31 -- # read -r var val _
00:05:09.957    16:49:02	-- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:09.957    16:49:02	-- setup/common.sh@33 -- # echo 1025
00:05:09.957    16:49:02	-- setup/common.sh@33 -- # return 0
00:05:09.957   16:49:02	-- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv ))
00:05:09.957   16:49:02	-- setup/hugepages.sh@112 -- # get_nodes
00:05:09.957   16:49:02	-- setup/hugepages.sh@27 -- # local node
00:05:09.957   16:49:02	-- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:09.957   16:49:02	-- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025
00:05:09.957   16:49:02	-- setup/hugepages.sh@32 -- # no_nodes=1
00:05:09.957   16:49:02	-- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:05:09.957   16:49:02	-- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:09.957   16:49:02	-- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
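[editor's note] get_nodes walks the NUMA nodes under sysfs and records each node's current hugepage count; the trace only shows the resulting assignment `nodes_sys[0]=1025`, so the per-node sysfs file read below is an assumption, not confirmed by the log:

# Sketch of hugepages.sh's get_nodes (reconstructed from the trace).
shopt -s extglob nullglob
nodes_sys=()
get_nodes() {
    local node
    for node in /sys/devices/system/node/node+([0-9]); do
        # Assumed source of the count; the trace shows only the result (1025).
        nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
    done
    no_nodes=${#nodes_sys[@]}
    (( no_nodes > 0 ))   # at least one NUMA node must be present
}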
00:05:09.957    16:49:02	-- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:05:09.957    16:49:02	-- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:09.957    16:49:02	-- setup/common.sh@18 -- # local node=0
00:05:09.957    16:49:02	-- setup/common.sh@19 -- # local var val
00:05:09.957    16:49:02	-- setup/common.sh@20 -- # local mem_f mem
00:05:09.957    16:49:02	-- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:09.957    16:49:02	-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:05:09.957    16:49:02	-- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:05:09.957    16:49:02	-- setup/common.sh@28 -- # mapfile -t mem
00:05:09.957    16:49:02	-- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:09.957    16:49:02	-- setup/common.sh@31 -- # IFS=': '
00:05:09.957    16:49:02	-- setup/common.sh@31 -- # read -r var val _
00:05:09.957     16:49:02	-- setup/common.sh@16 -- # printf '%s\n' 'MemTotal:       12242972 kB' 'MemFree:         4209744 kB' 'MemUsed:         8033228 kB' 'SwapCached:            0 kB' 'Active:          1375480 kB' 'Inactive:        4127528 kB' 'Active(anon):       1048 kB' 'Inactive(anon):   138644 kB' 'Active(file):    1374432 kB' 'Inactive(file):  3988884 kB' 'Unevictable:       29168 kB' 'Mlocked:           27632 kB' 'Dirty:               252 kB' 'Writeback:             0 kB' 'FilePages:       5374976 kB' 'Mapped:            67272 kB' 'AnonPages:        157296 kB' 'Shmem:              2596 kB' 'KernelStack:        4388 kB' 'PageTables:         3280 kB' 'NFS_Unstable:          0 kB' 'Bounce:                0 kB' 'WritebackTmp:          0 kB' 'KReclaimable:     234040 kB' 'Slab:             302324 kB' 'SReclaimable:     234040 kB' 'SUnreclaim:        68284 kB' 'AnonHugePages:         0 kB' 'ShmemHugePages:        0 kB' 'ShmemPmdMapped:        0 kB' 'FileHugePages:        0 kB' 'FilePmdMapped:        0 kB' 'HugePages_Total:  1025' 'HugePages_Free:   1025' 'HugePages_Surp:      0'
00:05:09.957    16:49:02	-- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:09.957    16:49:02	-- setup/common.sh@32 -- # continue
00:05:09.957    16:49:02	-- setup/common.sh@31 -- # IFS=': '
00:05:09.957    16:49:02	-- setup/common.sh@31 -- # read -r var val _
00:05:09.957    16:49:02	-- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:09.957    16:49:02	-- setup/common.sh@32 -- # continue
00:05:09.957    16:49:02	-- setup/common.sh@31 -- # IFS=': '
00:05:09.957    16:49:02	-- setup/common.sh@31 -- # read -r var val _
00:05:09.957    16:49:02	-- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:09.957    16:49:02	-- setup/common.sh@32 -- # continue
00:05:09.957    16:49:02	-- setup/common.sh@31 -- # IFS=': '
00:05:09.957    16:49:02	-- setup/common.sh@31 -- # read -r var val _
00:05:09.957    16:49:02	-- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:09.957    16:49:02	-- setup/common.sh@32 -- # continue
00:05:09.957    16:49:02	-- setup/common.sh@31 -- # IFS=': '
00:05:09.957    16:49:02	-- setup/common.sh@31 -- # read -r var val _
00:05:09.957    16:49:02	-- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:09.957    16:49:02	-- setup/common.sh@32 -- # continue
00:05:09.957    16:49:02	-- setup/common.sh@31 -- # IFS=': '
00:05:09.957    16:49:02	-- setup/common.sh@31 -- # read -r var val _
00:05:09.957    16:49:02	-- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:09.957    16:49:02	-- setup/common.sh@32 -- # continue
00:05:09.957    16:49:02	-- setup/common.sh@31 -- # IFS=': '
00:05:09.957    16:49:02	-- setup/common.sh@31 -- # read -r var val _
00:05:09.957    16:49:02	-- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:09.957    16:49:02	-- setup/common.sh@32 -- # continue
00:05:09.957    16:49:02	-- setup/common.sh@31 -- # IFS=': '
00:05:09.957    16:49:02	-- setup/common.sh@31 -- # read -r var val _
00:05:09.957    16:49:02	-- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:09.957    16:49:02	-- setup/common.sh@32 -- # continue
00:05:09.957    16:49:02	-- setup/common.sh@31 -- # IFS=': '
00:05:09.957    16:49:02	-- setup/common.sh@31 -- # read -r var val _
00:05:09.958    16:49:02	-- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:09.958    16:49:02	-- setup/common.sh@32 -- # continue
00:05:09.958    16:49:02	-- setup/common.sh@31 -- # IFS=': '
00:05:09.958    16:49:02	-- setup/common.sh@31 -- # read -r var val _
00:05:09.958    16:49:02	-- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:09.958    16:49:02	-- setup/common.sh@32 -- # continue
00:05:09.958    16:49:02	-- setup/common.sh@31 -- # IFS=': '
00:05:09.958    16:49:02	-- setup/common.sh@31 -- # read -r var val _
00:05:09.958    16:49:02	-- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:09.958    16:49:02	-- setup/common.sh@32 -- # continue
00:05:09.958    16:49:02	-- setup/common.sh@31 -- # IFS=': '
00:05:09.958    16:49:02	-- setup/common.sh@31 -- # read -r var val _
00:05:09.958    16:49:02	-- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:09.958    16:49:02	-- setup/common.sh@32 -- # continue
00:05:09.958    16:49:02	-- setup/common.sh@31 -- # IFS=': '
00:05:09.958    16:49:02	-- setup/common.sh@31 -- # read -r var val _
00:05:09.958    16:49:02	-- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:09.958    16:49:02	-- setup/common.sh@32 -- # continue
00:05:09.958    16:49:02	-- setup/common.sh@31 -- # IFS=': '
00:05:09.958    16:49:02	-- setup/common.sh@31 -- # read -r var val _
00:05:09.958    16:49:02	-- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:09.958    16:49:02	-- setup/common.sh@32 -- # continue
00:05:09.958    16:49:02	-- setup/common.sh@31 -- # IFS=': '
00:05:09.958    16:49:02	-- setup/common.sh@31 -- # read -r var val _
00:05:09.958    16:49:02	-- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:09.958    16:49:02	-- setup/common.sh@32 -- # continue
00:05:09.958    16:49:02	-- setup/common.sh@31 -- # IFS=': '
00:05:09.958    16:49:02	-- setup/common.sh@31 -- # read -r var val _
00:05:09.958    16:49:02	-- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:09.958    16:49:02	-- setup/common.sh@32 -- # continue
00:05:09.958    16:49:02	-- setup/common.sh@31 -- # IFS=': '
00:05:09.958    16:49:02	-- setup/common.sh@31 -- # read -r var val _
00:05:09.958    16:49:02	-- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:09.958    16:49:02	-- setup/common.sh@32 -- # continue
00:05:09.958    16:49:02	-- setup/common.sh@31 -- # IFS=': '
00:05:09.958    16:49:02	-- setup/common.sh@31 -- # read -r var val _
00:05:09.958    16:49:02	-- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:09.958    16:49:02	-- setup/common.sh@32 -- # continue
00:05:09.958    16:49:02	-- setup/common.sh@31 -- # IFS=': '
00:05:09.958    16:49:02	-- setup/common.sh@31 -- # read -r var val _
00:05:09.958    16:49:02	-- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:09.958    16:49:02	-- setup/common.sh@32 -- # continue
00:05:09.958    16:49:02	-- setup/common.sh@31 -- # IFS=': '
00:05:09.958    16:49:02	-- setup/common.sh@31 -- # read -r var val _
00:05:09.958    16:49:02	-- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:09.958    16:49:02	-- setup/common.sh@32 -- # continue
00:05:09.958    16:49:02	-- setup/common.sh@31 -- # IFS=': '
00:05:09.958    16:49:02	-- setup/common.sh@31 -- # read -r var val _
00:05:09.958    16:49:02	-- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:09.958    16:49:02	-- setup/common.sh@32 -- # continue
00:05:09.958    16:49:02	-- setup/common.sh@31 -- # IFS=': '
00:05:09.958    16:49:02	-- setup/common.sh@31 -- # read -r var val _
00:05:09.958    16:49:02	-- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:09.958    16:49:02	-- setup/common.sh@32 -- # continue
00:05:09.958    16:49:02	-- setup/common.sh@31 -- # IFS=': '
00:05:09.958    16:49:02	-- setup/common.sh@31 -- # read -r var val _
00:05:09.958    16:49:02	-- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:09.958    16:49:02	-- setup/common.sh@32 -- # continue
00:05:09.958    16:49:02	-- setup/common.sh@31 -- # IFS=': '
00:05:09.958    16:49:02	-- setup/common.sh@31 -- # read -r var val _
00:05:09.958    16:49:02	-- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:09.958    16:49:02	-- setup/common.sh@32 -- # continue
00:05:09.958    16:49:02	-- setup/common.sh@31 -- # IFS=': '
00:05:09.958    16:49:02	-- setup/common.sh@31 -- # read -r var val _
00:05:09.958    16:49:02	-- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:09.958    16:49:02	-- setup/common.sh@32 -- # continue
00:05:09.958    16:49:02	-- setup/common.sh@31 -- # IFS=': '
00:05:09.958    16:49:02	-- setup/common.sh@31 -- # read -r var val _
00:05:09.958    16:49:02	-- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:09.958    16:49:02	-- setup/common.sh@32 -- # continue
00:05:09.958    16:49:02	-- setup/common.sh@31 -- # IFS=': '
00:05:09.958    16:49:02	-- setup/common.sh@31 -- # read -r var val _
00:05:09.958    16:49:02	-- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:09.958    16:49:02	-- setup/common.sh@32 -- # continue
00:05:09.958    16:49:02	-- setup/common.sh@31 -- # IFS=': '
00:05:09.958    16:49:02	-- setup/common.sh@31 -- # read -r var val _
00:05:09.958    16:49:02	-- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:09.958    16:49:02	-- setup/common.sh@32 -- # continue
00:05:09.958    16:49:02	-- setup/common.sh@31 -- # IFS=': '
00:05:09.958    16:49:02	-- setup/common.sh@31 -- # read -r var val _
00:05:09.958    16:49:02	-- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:09.958    16:49:02	-- setup/common.sh@32 -- # continue
00:05:09.958    16:49:02	-- setup/common.sh@31 -- # IFS=': '
00:05:09.958    16:49:02	-- setup/common.sh@31 -- # read -r var val _
00:05:09.958    16:49:02	-- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:09.958    16:49:02	-- setup/common.sh@32 -- # continue
00:05:09.958    16:49:02	-- setup/common.sh@31 -- # IFS=': '
00:05:09.958    16:49:02	-- setup/common.sh@31 -- # read -r var val _
00:05:09.958    16:49:02	-- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:09.958    16:49:02	-- setup/common.sh@32 -- # continue
00:05:09.958    16:49:02	-- setup/common.sh@31 -- # IFS=': '
00:05:09.958    16:49:02	-- setup/common.sh@31 -- # read -r var val _
00:05:09.958    16:49:02	-- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:09.958    16:49:02	-- setup/common.sh@32 -- # continue
00:05:09.958    16:49:02	-- setup/common.sh@31 -- # IFS=': '
00:05:09.958    16:49:02	-- setup/common.sh@31 -- # read -r var val _
00:05:09.958    16:49:02	-- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:09.958    16:49:02	-- setup/common.sh@32 -- # continue
00:05:09.958    16:49:02	-- setup/common.sh@31 -- # IFS=': '
00:05:09.958    16:49:02	-- setup/common.sh@31 -- # read -r var val _
00:05:09.958    16:49:02	-- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:09.958    16:49:02	-- setup/common.sh@32 -- # continue
00:05:09.958    16:49:02	-- setup/common.sh@31 -- # IFS=': '
00:05:09.958    16:49:02	-- setup/common.sh@31 -- # read -r var val _
00:05:09.958    16:49:02	-- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:09.958    16:49:02	-- setup/common.sh@33 -- # echo 0
00:05:09.958    16:49:02	-- setup/common.sh@33 -- # return 0
00:05:09.958  node0=1025 expecting 1025
00:05:09.958  ************************************
00:05:09.958  END TEST odd_alloc
00:05:09.958  ************************************
00:05:09.958   16:49:02	-- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:09.958   16:49:02	-- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:09.958   16:49:02	-- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:09.958   16:49:02	-- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:09.958   16:49:02	-- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025'
00:05:09.958   16:49:02	-- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]]
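[editor's note] The closing comparison for odd_alloc drops the expected and observed per-node counts into associative arrays keyed by value, turning the check into a set comparison that is insensitive to node ordering. A sketch of hugepages.sh@126-@130 as traced; the exact operands of the final `[[ ... ]]` are an assumption:

declare -A sorted_t sorted_s
for node in "${!nodes_test[@]}"; do
    sorted_t[${nodes_test[node]}]=1   # expected counts as set keys
    sorted_s[${nodes_sys[node]}]=1    # observed counts as set keys
    echo "node$node=${nodes_sys[node]} expecting ${nodes_test[node]}"
done
[[ ${!sorted_t[*]} == "${!sorted_s[*]}" ]]   # assumed form of the final check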
00:05:09.958  
00:05:09.958  real	0m1.484s
00:05:09.958  user	0m0.333s
00:05:09.958  sys	0m1.166s
00:05:09.958   16:49:02	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:05:09.958   16:49:02	-- common/autotest_common.sh@10 -- # set +x
00:05:09.958   16:49:02	-- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc
00:05:09.958   16:49:02	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:05:09.958   16:49:02	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:05:09.958   16:49:02	-- common/autotest_common.sh@10 -- # set +x
00:05:09.958  ************************************
00:05:09.958  START TEST custom_alloc
00:05:09.958  ************************************
00:05:09.958   16:49:02	-- common/autotest_common.sh@1114 -- # custom_alloc
00:05:09.958   16:49:02	-- setup/hugepages.sh@167 -- # local IFS=,
00:05:09.958   16:49:02	-- setup/hugepages.sh@169 -- # local node
00:05:09.958   16:49:02	-- setup/hugepages.sh@170 -- # nodes_hp=()
00:05:09.958   16:49:02	-- setup/hugepages.sh@170 -- # local nodes_hp
00:05:09.958   16:49:02	-- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0
00:05:09.958   16:49:02	-- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576
00:05:09.958   16:49:02	-- setup/hugepages.sh@49 -- # local size=1048576
00:05:09.958   16:49:02	-- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:05:09.958   16:49:02	-- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:05:09.958   16:49:02	-- setup/hugepages.sh@57 -- # nr_hugepages=512
00:05:09.958   16:49:02	-- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:05:09.958   16:49:02	-- setup/hugepages.sh@62 -- # user_nodes=()
00:05:09.958   16:49:02	-- setup/hugepages.sh@62 -- # local user_nodes
00:05:09.958   16:49:02	-- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:05:09.958   16:49:02	-- setup/hugepages.sh@65 -- # local _no_nodes=1
00:05:09.958   16:49:02	-- setup/hugepages.sh@67 -- # nodes_test=()
00:05:09.958   16:49:02	-- setup/hugepages.sh@67 -- # local -g nodes_test
00:05:09.958   16:49:02	-- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:05:09.958   16:49:02	-- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:05:09.958   16:49:02	-- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:05:09.958   16:49:02	-- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:05:09.958   16:49:02	-- setup/hugepages.sh@83 -- # : 0
00:05:09.958   16:49:02	-- setup/hugepages.sh@84 -- # : 0
00:05:09.958   16:49:02	-- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:05:09.958   16:49:02	-- setup/hugepages.sh@175 -- # nodes_hp[0]=512
00:05:09.958   16:49:02	-- setup/hugepages.sh@176 -- # (( 1 > 1 ))
00:05:09.958   16:49:02	-- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}"
00:05:09.958   16:49:02	-- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
00:05:09.958   16:49:02	-- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] ))
00:05:09.958   16:49:02	-- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node
00:05:09.958   16:49:02	-- setup/hugepages.sh@62 -- # user_nodes=()
00:05:09.958   16:49:02	-- setup/hugepages.sh@62 -- # local user_nodes
00:05:09.958   16:49:02	-- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:05:09.958   16:49:02	-- setup/hugepages.sh@65 -- # local _no_nodes=1
00:05:09.958   16:49:02	-- setup/hugepages.sh@67 -- # nodes_test=()
00:05:09.959   16:49:02	-- setup/hugepages.sh@67 -- # local -g nodes_test
00:05:09.959   16:49:02	-- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:05:09.959   16:49:02	-- setup/hugepages.sh@74 -- # (( 1 > 0 ))
00:05:09.959   16:49:02	-- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:05:09.959   16:49:02	-- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512
00:05:09.959   16:49:02	-- setup/hugepages.sh@78 -- # return 0
00:05:09.959   16:49:02	-- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512'
00:05:09.959   16:49:02	-- setup/hugepages.sh@187 -- # setup output
00:05:09.959   16:49:02	-- setup/common.sh@9 -- # [[ output == output ]]
00:05:09.959   16:49:02	-- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:05:10.527  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev
00:05:10.527  0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:10.791   16:49:03	-- setup/hugepages.sh@188 -- # nr_hugepages=512
00:05:10.791   16:49:03	-- setup/hugepages.sh@188 -- # verify_nr_hugepages
00:05:10.791   16:49:03	-- setup/hugepages.sh@89 -- # local node
00:05:10.791   16:49:03	-- setup/hugepages.sh@90 -- # local sorted_t
00:05:10.791   16:49:03	-- setup/hugepages.sh@91 -- # local sorted_s
00:05:10.791   16:49:03	-- setup/hugepages.sh@92 -- # local surp
00:05:10.791   16:49:03	-- setup/hugepages.sh@93 -- # local resv
00:05:10.791   16:49:03	-- setup/hugepages.sh@94 -- # local anon
00:05:10.791   16:49:03	-- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:05:10.791    16:49:03	-- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:10.791    16:49:03	-- setup/common.sh@17 -- # local get=AnonHugePages
00:05:10.791    16:49:03	-- setup/common.sh@18 -- # local node=
00:05:10.791    16:49:03	-- setup/common.sh@19 -- # local var val
00:05:10.791    16:49:03	-- setup/common.sh@20 -- # local mem_f mem
00:05:10.791    16:49:03	-- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:10.791    16:49:03	-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:10.791    16:49:03	-- setup/common.sh@25 -- # [[ -n '' ]]
00:05:10.791    16:49:03	-- setup/common.sh@28 -- # mapfile -t mem
00:05:10.791    16:49:03	-- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:10.791    16:49:03	-- setup/common.sh@31 -- # IFS=': '
00:05:10.791    16:49:03	-- setup/common.sh@31 -- # read -r var val _
00:05:10.791     16:49:03	-- setup/common.sh@16 -- # printf '%s\n' 'MemTotal:       12242972 kB' 'MemFree:         5262736 kB' 'MemAvailable:   10538788 kB' 'Buffers:           39856 kB' 'Cached:          5335120 kB' 'SwapCached:            0 kB' 'Active:          1375488 kB' 'Inactive:        4127896 kB' 'Active(anon):       1056 kB' 'Inactive(anon):   139012 kB' 'Active(file):    1374432 kB' 'Inactive(file):  3988884 kB' 'Unevictable:       29168 kB' 'Mlocked:           27632 kB' 'SwapTotal:             0 kB' 'SwapFree:              0 kB' 'Dirty:               252 kB' 'Writeback:             0 kB' 'AnonPages:        157664 kB' 'Mapped:            67564 kB' 'Shmem:              2596 kB' 'KReclaimable:     234040 kB' 'Slab:             302444 kB' 'SReclaimable:     234040 kB' 'SUnreclaim:        68404 kB' 'KernelStack:        4492 kB' 'PageTables:         3656 kB' 'NFS_Unstable:          0 kB' 'Bounce:                0 kB' 'WritebackTmp:          0 kB' 'CommitLimit:     5597196 kB' 'Committed_AS:     498200 kB' 'VmallocTotal:   34359738367 kB' 'VmallocUsed:       19580 kB' 'VmallocChunk:          0 kB' 'Percpu:             8256 kB' 'HardwareCorrupted:     0 kB' 'AnonHugePages:         0 kB' 'ShmemHugePages:        0 kB' 'ShmemPmdMapped:        0 kB' 'FileHugePages:         0 kB' 'FilePmdMapped:         0 kB' 'HugePages_Total:     512' 'HugePages_Free:      512' 'HugePages_Rsvd:        0' 'HugePages_Surp:        0' 'Hugepagesize:       2048 kB' 'Hugetlb:         1048576 kB' 'DirectMap4k:      141164 kB' 'DirectMap2M:     4052992 kB' 'DirectMap1G:    10485760 kB'
00:05:10.791    16:49:03	-- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:10.791    16:49:03	-- setup/common.sh@32 -- # continue
00:05:10.791    16:49:03	-- setup/common.sh@31 -- # IFS=': '
00:05:10.791    16:49:03	-- setup/common.sh@31 -- # read -r var val _
00:05:10.791    16:49:03	-- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:10.791    16:49:03	-- setup/common.sh@32 -- # continue
00:05:10.791    16:49:03	-- setup/common.sh@31 -- # IFS=': '
00:05:10.791    16:49:03	-- setup/common.sh@31 -- # read -r var val _
00:05:10.791    16:49:03	-- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:10.791    16:49:03	-- setup/common.sh@32 -- # continue
00:05:10.791    16:49:03	-- setup/common.sh@31 -- # IFS=': '
00:05:10.791    16:49:03	-- setup/common.sh@31 -- # read -r var val _
00:05:10.791    16:49:03	-- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:10.791    16:49:03	-- setup/common.sh@32 -- # continue
00:05:10.791    16:49:03	-- setup/common.sh@31 -- # IFS=': '
00:05:10.791    16:49:03	-- setup/common.sh@31 -- # read -r var val _
00:05:10.791    16:49:03	-- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:10.791    16:49:03	-- setup/common.sh@32 -- # continue
00:05:10.791    16:49:03	-- setup/common.sh@31 -- # IFS=': '
00:05:10.791    16:49:03	-- setup/common.sh@31 -- # read -r var val _
00:05:10.791    16:49:03	-- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:10.791    16:49:03	-- setup/common.sh@32 -- # continue
00:05:10.791    16:49:03	-- setup/common.sh@31 -- # IFS=': '
00:05:10.791    16:49:03	-- setup/common.sh@31 -- # read -r var val _
00:05:10.791    16:49:03	-- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:10.791    16:49:03	-- setup/common.sh@32 -- # continue
00:05:10.791    16:49:03	-- setup/common.sh@31 -- # IFS=': '
00:05:10.791    16:49:03	-- setup/common.sh@31 -- # read -r var val _
00:05:10.791    16:49:03	-- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:10.791    16:49:03	-- setup/common.sh@32 -- # continue
00:05:10.791    16:49:03	-- setup/common.sh@31 -- # IFS=': '
00:05:10.791    16:49:03	-- setup/common.sh@31 -- # read -r var val _
00:05:10.791    16:49:03	-- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:10.791    16:49:03	-- setup/common.sh@32 -- # continue
00:05:10.791    16:49:03	-- setup/common.sh@31 -- # IFS=': '
00:05:10.791    16:49:03	-- setup/common.sh@31 -- # read -r var val _
00:05:10.791    16:49:03	-- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:10.791    16:49:03	-- setup/common.sh@32 -- # continue
00:05:10.791    16:49:03	-- setup/common.sh@31 -- # IFS=': '
00:05:10.791    16:49:03	-- setup/common.sh@31 -- # read -r var val _
00:05:10.791    16:49:03	-- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:10.791    16:49:03	-- setup/common.sh@32 -- # continue
00:05:10.791    16:49:03	-- setup/common.sh@31 -- # IFS=': '
00:05:10.791    16:49:03	-- setup/common.sh@31 -- # read -r var val _
00:05:10.791    16:49:03	-- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:10.791    16:49:03	-- setup/common.sh@32 -- # continue
00:05:10.791    16:49:03	-- setup/common.sh@31 -- # IFS=': '
00:05:10.791    16:49:03	-- setup/common.sh@31 -- # read -r var val _
00:05:10.791    16:49:03	-- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:10.791    16:49:03	-- setup/common.sh@32 -- # continue
00:05:10.791    16:49:03	-- setup/common.sh@31 -- # IFS=': '
00:05:10.791    16:49:03	-- setup/common.sh@31 -- # read -r var val _
00:05:10.791    16:49:03	-- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:10.791    16:49:03	-- setup/common.sh@32 -- # continue
00:05:10.791    16:49:03	-- setup/common.sh@31 -- # IFS=': '
00:05:10.791    16:49:03	-- setup/common.sh@31 -- # read -r var val _
00:05:10.791    16:49:03	-- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:10.791    16:49:03	-- setup/common.sh@32 -- # continue
00:05:10.791    16:49:03	-- setup/common.sh@31 -- # IFS=': '
00:05:10.791    16:49:03	-- setup/common.sh@31 -- # read -r var val _
00:05:10.791    16:49:03	-- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:10.791    16:49:03	-- setup/common.sh@32 -- # continue
00:05:10.791    16:49:03	-- setup/common.sh@31 -- # IFS=': '
00:05:10.791    16:49:03	-- setup/common.sh@31 -- # read -r var val _
00:05:10.791    16:49:03	-- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:10.791    16:49:03	-- setup/common.sh@32 -- # continue
00:05:10.791    16:49:03	-- setup/common.sh@31 -- # IFS=': '
00:05:10.791    16:49:03	-- setup/common.sh@31 -- # read -r var val _
00:05:10.791    16:49:03	-- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:10.791    16:49:03	-- setup/common.sh@32 -- # continue
00:05:10.791    16:49:03	-- setup/common.sh@31 -- # IFS=': '
00:05:10.791    16:49:03	-- setup/common.sh@31 -- # read -r var val _
00:05:10.791    16:49:03	-- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:10.791    16:49:03	-- setup/common.sh@32 -- # continue
00:05:10.791    16:49:03	-- setup/common.sh@31 -- # IFS=': '
00:05:10.791    16:49:03	-- setup/common.sh@31 -- # read -r var val _
00:05:10.791    16:49:03	-- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:10.791    16:49:03	-- setup/common.sh@32 -- # continue
00:05:10.791    16:49:03	-- setup/common.sh@31 -- # IFS=': '
00:05:10.791    16:49:03	-- setup/common.sh@31 -- # read -r var val _
00:05:10.791    16:49:03	-- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:10.791    16:49:03	-- setup/common.sh@32 -- # continue
00:05:10.791    16:49:03	-- setup/common.sh@31 -- # IFS=': '
00:05:10.791    16:49:03	-- setup/common.sh@31 -- # read -r var val _
00:05:10.791    16:49:03	-- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:10.791    16:49:03	-- setup/common.sh@32 -- # continue
00:05:10.791    16:49:03	-- setup/common.sh@31 -- # IFS=': '
00:05:10.791    16:49:03	-- setup/common.sh@31 -- # read -r var val _
00:05:10.791    16:49:03	-- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:10.791    16:49:03	-- setup/common.sh@32 -- # continue
00:05:10.791    16:49:03	-- setup/common.sh@31 -- # IFS=': '
00:05:10.791    16:49:03	-- setup/common.sh@31 -- # read -r var val _
00:05:10.791    16:49:03	-- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:10.792    16:49:03	-- setup/common.sh@32 -- # continue
00:05:10.792    16:49:03	-- setup/common.sh@31 -- # IFS=': '
00:05:10.792    16:49:03	-- setup/common.sh@31 -- # read -r var val _
00:05:10.792    16:49:03	-- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:10.792    16:49:03	-- setup/common.sh@32 -- # continue
00:05:10.792    16:49:03	-- setup/common.sh@31 -- # IFS=': '
00:05:10.792    16:49:03	-- setup/common.sh@31 -- # read -r var val _
00:05:10.792    16:49:03	-- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:10.792    16:49:03	-- setup/common.sh@32 -- # continue
00:05:10.792    16:49:03	-- setup/common.sh@31 -- # IFS=': '
00:05:10.792    16:49:03	-- setup/common.sh@31 -- # read -r var val _
00:05:10.792    16:49:03	-- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:10.792    16:49:03	-- setup/common.sh@32 -- # continue
00:05:10.792    16:49:03	-- setup/common.sh@31 -- # IFS=': '
00:05:10.792    16:49:03	-- setup/common.sh@31 -- # read -r var val _
00:05:10.792    16:49:03	-- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:10.792    16:49:03	-- setup/common.sh@32 -- # continue
00:05:10.792    16:49:03	-- setup/common.sh@31 -- # IFS=': '
00:05:10.792    16:49:03	-- setup/common.sh@31 -- # read -r var val _
00:05:10.792    16:49:03	-- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:10.792    16:49:03	-- setup/common.sh@32 -- # continue
00:05:10.792    16:49:03	-- setup/common.sh@31 -- # IFS=': '
00:05:10.792    16:49:03	-- setup/common.sh@31 -- # read -r var val _
00:05:10.792    16:49:03	-- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:10.792    16:49:03	-- setup/common.sh@32 -- # continue
00:05:10.792    16:49:03	-- setup/common.sh@31 -- # IFS=': '
00:05:10.792    16:49:03	-- setup/common.sh@31 -- # read -r var val _
00:05:10.792    16:49:03	-- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:10.792    16:49:03	-- setup/common.sh@32 -- # continue
00:05:10.792    16:49:03	-- setup/common.sh@31 -- # IFS=': '
00:05:10.792    16:49:03	-- setup/common.sh@31 -- # read -r var val _
00:05:10.792    16:49:03	-- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:10.792    16:49:03	-- setup/common.sh@32 -- # continue
00:05:10.792    16:49:03	-- setup/common.sh@31 -- # IFS=': '
00:05:10.792    16:49:03	-- setup/common.sh@31 -- # read -r var val _
00:05:10.792    16:49:03	-- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:10.792    16:49:03	-- setup/common.sh@32 -- # continue
00:05:10.792    16:49:03	-- setup/common.sh@31 -- # IFS=': '
00:05:10.792    16:49:03	-- setup/common.sh@31 -- # read -r var val _
00:05:10.792    16:49:03	-- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:10.792    16:49:03	-- setup/common.sh@32 -- # continue
00:05:10.792    16:49:03	-- setup/common.sh@31 -- # IFS=': '
00:05:10.792    16:49:03	-- setup/common.sh@31 -- # read -r var val _
00:05:10.792    16:49:03	-- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:10.792    16:49:03	-- setup/common.sh@32 -- # continue
00:05:10.792    16:49:03	-- setup/common.sh@31 -- # IFS=': '
00:05:10.792    16:49:03	-- setup/common.sh@31 -- # read -r var val _
00:05:10.792    16:49:03	-- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:10.792    16:49:03	-- setup/common.sh@32 -- # continue
00:05:10.792    16:49:03	-- setup/common.sh@31 -- # IFS=': '
00:05:10.792    16:49:03	-- setup/common.sh@31 -- # read -r var val _
00:05:10.792    16:49:03	-- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:10.792    16:49:03	-- setup/common.sh@32 -- # continue
00:05:10.792    16:49:03	-- setup/common.sh@31 -- # IFS=': '
00:05:10.792    16:49:03	-- setup/common.sh@31 -- # read -r var val _
00:05:10.792    16:49:03	-- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:10.792    16:49:03	-- setup/common.sh@33 -- # echo 0
00:05:10.792    16:49:03	-- setup/common.sh@33 -- # return 0
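[editor's note] Before sampling AnonHugePages, verify_nr_hugepages gates on transparent hugepages: the traced test `always [madvise] never != *\[\n\e\v\e\r\]*` means THP is not pinned to "never", so the anon count is read (0 kB here). A sketch of that gate; the sysfs path is an assumption since the trace shows only the file's content:

# 'always [madvise] never' — brackets mark the active THP mode.
thp=$(< /sys/kernel/mm/transparent_hugepage/enabled)   # assumed source path
if [[ $thp != *"[never]"* ]]; then
    anon=$(get_meminfo AnonHugePages)   # 0 on the system traced here
else
    anon=0
fi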
00:05:10.792   16:49:03	-- setup/hugepages.sh@97 -- # anon=0
00:05:10.792    16:49:03	-- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:10.792    16:49:03	-- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:10.792    16:49:03	-- setup/common.sh@18 -- # local node=
00:05:10.792    16:49:03	-- setup/common.sh@19 -- # local var val
00:05:10.792    16:49:03	-- setup/common.sh@20 -- # local mem_f mem
00:05:10.792    16:49:03	-- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:10.792    16:49:03	-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:10.792    16:49:03	-- setup/common.sh@25 -- # [[ -n '' ]]
00:05:10.792    16:49:03	-- setup/common.sh@28 -- # mapfile -t mem
00:05:10.792    16:49:03	-- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:10.792    16:49:03	-- setup/common.sh@31 -- # IFS=': '
00:05:10.792     16:49:03	-- setup/common.sh@16 -- # printf '%s\n' 'MemTotal:       12242972 kB' 'MemFree:         5262976 kB' 'MemAvailable:   10539028 kB' 'Buffers:           39856 kB' 'Cached:          5335120 kB' 'SwapCached:            0 kB' 'Active:          1375472 kB' 'Inactive:        4127640 kB' 'Active(anon):       1040 kB' 'Inactive(anon):   138756 kB' 'Active(file):    1374432 kB' 'Inactive(file):  3988884 kB' 'Unevictable:       29168 kB' 'Mlocked:           27632 kB' 'SwapTotal:             0 kB' 'SwapFree:              0 kB' 'Dirty:               252 kB' 'Writeback:             0 kB' 'AnonPages:        157420 kB' 'Mapped:            67412 kB' 'Shmem:              2596 kB' 'KReclaimable:     234040 kB' 'Slab:             302552 kB' 'SReclaimable:     234040 kB' 'SUnreclaim:        68512 kB' 'KernelStack:        4468 kB' 'PageTables:         3948 kB' 'NFS_Unstable:          0 kB' 'Bounce:                0 kB' 'WritebackTmp:          0 kB' 'CommitLimit:     5597196 kB' 'Committed_AS:     498200 kB' 'VmallocTotal:   34359738367 kB' 'VmallocUsed:       19564 kB' 'VmallocChunk:          0 kB' 'Percpu:             8256 kB' 'HardwareCorrupted:     0 kB' 'AnonHugePages:         0 kB' 'ShmemHugePages:        0 kB' 'ShmemPmdMapped:        0 kB' 'FileHugePages:         0 kB' 'FilePmdMapped:         0 kB' 'HugePages_Total:     512' 'HugePages_Free:      512' 'HugePages_Rsvd:        0' 'HugePages_Surp:        0' 'Hugepagesize:       2048 kB' 'Hugetlb:         1048576 kB' 'DirectMap4k:      141164 kB' 'DirectMap2M:     4052992 kB' 'DirectMap1G:    10485760 kB'
00:05:10.792    16:49:03	-- setup/common.sh@31 -- # read -r var val _
00:05:10.792    16:49:03	-- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:10.792    16:49:03	-- setup/common.sh@32 -- # continue
00:05:10.792    16:49:03	-- setup/common.sh@31 -- # IFS=': '
00:05:10.792    16:49:03	-- setup/common.sh@31 -- # read -r var val _
00:05:10.792    16:49:03	-- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:10.792    16:49:03	-- setup/common.sh@32 -- # continue
00:05:10.792    16:49:03	-- setup/common.sh@31 -- # IFS=': '
00:05:10.792    16:49:03	-- setup/common.sh@31 -- # read -r var val _
00:05:10.792    16:49:03	-- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:10.792    16:49:03	-- setup/common.sh@32 -- # continue
00:05:10.792    16:49:03	-- setup/common.sh@31 -- # IFS=': '
00:05:10.792    16:49:03	-- setup/common.sh@31 -- # read -r var val _
00:05:10.792    16:49:03	-- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:10.792    16:49:03	-- setup/common.sh@32 -- # continue
00:05:10.792    16:49:03	-- setup/common.sh@31 -- # IFS=': '
00:05:10.792    16:49:03	-- setup/common.sh@31 -- # read -r var val _
00:05:10.792    16:49:03	-- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:10.792    16:49:03	-- setup/common.sh@32 -- # continue
00:05:10.792    16:49:03	-- setup/common.sh@31 -- # IFS=': '
00:05:10.792    16:49:03	-- setup/common.sh@31 -- # read -r var val _
00:05:10.792    16:49:03	-- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:10.792    16:49:03	-- setup/common.sh@32 -- # continue
00:05:10.792    16:49:03	-- setup/common.sh@31 -- # IFS=': '
00:05:10.792    16:49:03	-- setup/common.sh@31 -- # read -r var val _
00:05:10.792    16:49:03	-- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:10.792    16:49:03	-- setup/common.sh@32 -- # continue
00:05:10.792    16:49:03	-- setup/common.sh@31 -- # IFS=': '
00:05:10.792    16:49:03	-- setup/common.sh@31 -- # read -r var val _
00:05:10.792    16:49:03	-- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:10.792    16:49:03	-- setup/common.sh@32 -- # continue
00:05:10.792    16:49:03	-- setup/common.sh@31 -- # IFS=': '
00:05:10.792    16:49:03	-- setup/common.sh@31 -- # read -r var val _
00:05:10.792    16:49:03	-- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:10.792    16:49:03	-- setup/common.sh@32 -- # continue
00:05:10.792    16:49:03	-- setup/common.sh@31 -- # IFS=': '
00:05:10.792    16:49:03	-- setup/common.sh@31 -- # read -r var val _
00:05:10.792    16:49:03	-- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:10.792    16:49:03	-- setup/common.sh@32 -- # continue
00:05:10.792    16:49:03	-- setup/common.sh@31 -- # IFS=': '
00:05:10.792    16:49:03	-- setup/common.sh@31 -- # read -r var val _
00:05:10.792    16:49:03	-- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:10.792    16:49:03	-- setup/common.sh@32 -- # continue
00:05:10.792    16:49:03	-- setup/common.sh@31 -- # IFS=': '
00:05:10.792    16:49:03	-- setup/common.sh@31 -- # read -r var val _
00:05:10.792    16:49:03	-- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:10.792    16:49:03	-- setup/common.sh@32 -- # continue
00:05:10.792    16:49:03	-- setup/common.sh@31 -- # IFS=': '
00:05:10.792    16:49:03	-- setup/common.sh@31 -- # read -r var val _
00:05:10.792    16:49:03	-- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:10.792    16:49:03	-- setup/common.sh@32 -- # continue
00:05:10.792    16:49:03	-- setup/common.sh@31 -- # IFS=': '
00:05:10.792    16:49:03	-- setup/common.sh@31 -- # read -r var val _
00:05:10.792    16:49:03	-- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:10.792    16:49:03	-- setup/common.sh@32 -- # continue
00:05:10.792    16:49:03	-- setup/common.sh@31 -- # IFS=': '
00:05:10.792    16:49:03	-- setup/common.sh@31 -- # read -r var val _
00:05:10.792    16:49:03	-- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:10.792    16:49:03	-- setup/common.sh@32 -- # continue
00:05:10.792    16:49:03	-- setup/common.sh@31 -- # IFS=': '
00:05:10.792    16:49:03	-- setup/common.sh@31 -- # read -r var val _
00:05:10.792    16:49:03	-- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:10.792    16:49:03	-- setup/common.sh@32 -- # continue
00:05:10.792    16:49:03	-- setup/common.sh@31 -- # IFS=': '
00:05:10.792    16:49:03	-- setup/common.sh@31 -- # read -r var val _
00:05:10.792    16:49:03	-- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:10.792    16:49:03	-- setup/common.sh@32 -- # continue
00:05:10.792    16:49:03	-- setup/common.sh@31 -- # IFS=': '
00:05:10.792    16:49:03	-- setup/common.sh@31 -- # read -r var val _
00:05:10.792    16:49:03	-- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:10.792    16:49:03	-- setup/common.sh@32 -- # continue
00:05:10.792    16:49:03	-- setup/common.sh@31 -- # IFS=': '
00:05:10.792    16:49:03	-- setup/common.sh@31 -- # read -r var val _
00:05:10.792    16:49:03	-- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:10.792    16:49:03	-- setup/common.sh@32 -- # continue
00:05:10.793    16:49:03	-- setup/common.sh@31 -- # IFS=': '
00:05:10.793    16:49:03	-- setup/common.sh@31 -- # read -r var val _
00:05:10.793    16:49:03	-- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:10.793    16:49:03	-- setup/common.sh@32 -- # continue
00:05:10.793    16:49:03	-- setup/common.sh@31 -- # IFS=': '
00:05:10.793    16:49:03	-- setup/common.sh@31 -- # read -r var val _
00:05:10.793    16:49:03	-- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:10.793    16:49:03	-- setup/common.sh@32 -- # continue
00:05:10.793    16:49:03	-- setup/common.sh@31 -- # IFS=': '
00:05:10.793    16:49:03	-- setup/common.sh@31 -- # read -r var val _
00:05:10.793    16:49:03	-- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:10.793    16:49:03	-- setup/common.sh@32 -- # continue
00:05:10.793    16:49:03	-- setup/common.sh@31 -- # IFS=': '
00:05:10.793    16:49:03	-- setup/common.sh@31 -- # read -r var val _
00:05:10.793    16:49:03	-- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:10.793    16:49:03	-- setup/common.sh@32 -- # continue
00:05:10.793    16:49:03	-- setup/common.sh@31 -- # IFS=': '
00:05:10.793    16:49:03	-- setup/common.sh@31 -- # read -r var val _
00:05:10.793    16:49:03	-- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:10.793    16:49:03	-- setup/common.sh@32 -- # continue
00:05:10.793    16:49:03	-- setup/common.sh@31 -- # IFS=': '
00:05:10.793    16:49:03	-- setup/common.sh@31 -- # read -r var val _
00:05:10.793    16:49:03	-- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:10.793    16:49:03	-- setup/common.sh@32 -- # continue
00:05:10.793    16:49:03	-- setup/common.sh@31 -- # IFS=': '
00:05:10.793    16:49:03	-- setup/common.sh@31 -- # read -r var val _
00:05:10.793    16:49:03	-- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:10.793    16:49:03	-- setup/common.sh@32 -- # continue
00:05:10.793    16:49:03	-- setup/common.sh@31 -- # IFS=': '
00:05:10.793    16:49:03	-- setup/common.sh@31 -- # read -r var val _
00:05:10.793    16:49:03	-- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:10.793    16:49:03	-- setup/common.sh@32 -- # continue
00:05:10.793    16:49:03	-- setup/common.sh@31 -- # IFS=': '
00:05:10.793    16:49:03	-- setup/common.sh@31 -- # read -r var val _
00:05:10.793    16:49:03	-- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:10.793    16:49:03	-- setup/common.sh@32 -- # continue
00:05:10.793    16:49:03	-- setup/common.sh@31 -- # IFS=': '
00:05:10.793    16:49:03	-- setup/common.sh@31 -- # read -r var val _
00:05:10.793    16:49:03	-- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:10.793    16:49:03	-- setup/common.sh@32 -- # continue
00:05:10.793    16:49:03	-- setup/common.sh@31 -- # IFS=': '
00:05:10.793    16:49:03	-- setup/common.sh@31 -- # read -r var val _
00:05:10.793    16:49:03	-- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:10.793    16:49:03	-- setup/common.sh@32 -- # continue
00:05:10.793    16:49:03	-- setup/common.sh@31 -- # IFS=': '
00:05:10.793    16:49:03	-- setup/common.sh@31 -- # read -r var val _
00:05:10.793    16:49:03	-- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:10.793    16:49:03	-- setup/common.sh@32 -- # continue
00:05:10.793    16:49:03	-- setup/common.sh@31 -- # IFS=': '
00:05:10.793    16:49:03	-- setup/common.sh@31 -- # read -r var val _
00:05:10.793    16:49:03	-- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:10.793    16:49:03	-- setup/common.sh@32 -- # continue
00:05:10.793    16:49:03	-- setup/common.sh@31 -- # IFS=': '
00:05:10.793    16:49:03	-- setup/common.sh@31 -- # read -r var val _
00:05:10.793    16:49:03	-- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:10.793    16:49:03	-- setup/common.sh@32 -- # continue
00:05:10.793    16:49:03	-- setup/common.sh@31 -- # IFS=': '
00:05:10.793    16:49:03	-- setup/common.sh@31 -- # read -r var val _
00:05:10.793    16:49:03	-- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:10.793    16:49:03	-- setup/common.sh@32 -- # continue
00:05:10.793    16:49:03	-- setup/common.sh@31 -- # IFS=': '
00:05:10.793    16:49:03	-- setup/common.sh@31 -- # read -r var val _
00:05:10.793    16:49:03	-- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:10.793    16:49:03	-- setup/common.sh@32 -- # continue
00:05:10.793    16:49:03	-- setup/common.sh@31 -- # IFS=': '
00:05:10.793    16:49:03	-- setup/common.sh@31 -- # read -r var val _
00:05:10.793    16:49:03	-- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:10.793    16:49:03	-- setup/common.sh@32 -- # continue
00:05:10.793    16:49:03	-- setup/common.sh@31 -- # IFS=': '
00:05:10.793    16:49:03	-- setup/common.sh@31 -- # read -r var val _
00:05:10.793    16:49:03	-- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:10.793    16:49:03	-- setup/common.sh@32 -- # continue
00:05:10.793    16:49:03	-- setup/common.sh@31 -- # IFS=': '
00:05:10.793    16:49:03	-- setup/common.sh@31 -- # read -r var val _
00:05:10.793    16:49:03	-- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:10.793    16:49:03	-- setup/common.sh@32 -- # continue
00:05:10.793    16:49:03	-- setup/common.sh@31 -- # IFS=': '
00:05:10.793    16:49:03	-- setup/common.sh@31 -- # read -r var val _
00:05:10.793    16:49:03	-- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:10.793    16:49:03	-- setup/common.sh@32 -- # continue
00:05:10.793    16:49:03	-- setup/common.sh@31 -- # IFS=': '
00:05:10.793    16:49:03	-- setup/common.sh@31 -- # read -r var val _
00:05:10.793    16:49:03	-- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:10.793    16:49:03	-- setup/common.sh@32 -- # continue
00:05:10.793    16:49:03	-- setup/common.sh@31 -- # IFS=': '
00:05:10.793    16:49:03	-- setup/common.sh@31 -- # read -r var val _
00:05:10.793    16:49:03	-- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:10.793    16:49:03	-- setup/common.sh@32 -- # continue
00:05:10.793    16:49:03	-- setup/common.sh@31 -- # IFS=': '
00:05:10.793    16:49:03	-- setup/common.sh@31 -- # read -r var val _
00:05:10.793    16:49:03	-- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:10.793    16:49:03	-- setup/common.sh@32 -- # continue
00:05:10.793    16:49:03	-- setup/common.sh@31 -- # IFS=': '
00:05:10.793    16:49:03	-- setup/common.sh@31 -- # read -r var val _
00:05:10.793    16:49:03	-- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:10.793    16:49:03	-- setup/common.sh@32 -- # continue
00:05:10.793    16:49:03	-- setup/common.sh@31 -- # IFS=': '
00:05:10.793    16:49:03	-- setup/common.sh@31 -- # read -r var val _
00:05:10.793    16:49:03	-- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:10.793    16:49:03	-- setup/common.sh@32 -- # continue
00:05:10.793    16:49:03	-- setup/common.sh@31 -- # IFS=': '
00:05:10.793    16:49:03	-- setup/common.sh@31 -- # read -r var val _
00:05:10.793    16:49:03	-- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:10.793    16:49:03	-- setup/common.sh@32 -- # continue
00:05:10.793    16:49:03	-- setup/common.sh@31 -- # IFS=': '
00:05:10.793    16:49:03	-- setup/common.sh@31 -- # read -r var val _
00:05:10.793    16:49:03	-- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:10.793    16:49:03	-- setup/common.sh@33 -- # echo 0
00:05:10.793    16:49:03	-- setup/common.sh@33 -- # return 0
00:05:10.793   16:49:03	-- setup/hugepages.sh@99 -- # surp=0
00:05:10.793    16:49:03	-- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:10.793    16:49:03	-- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:05:10.793    16:49:03	-- setup/common.sh@18 -- # local node=
00:05:10.793    16:49:03	-- setup/common.sh@19 -- # local var val
00:05:10.793    16:49:03	-- setup/common.sh@20 -- # local mem_f mem
00:05:10.793    16:49:03	-- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:10.793    16:49:03	-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:10.793    16:49:03	-- setup/common.sh@25 -- # [[ -n '' ]]
00:05:10.793    16:49:03	-- setup/common.sh@28 -- # mapfile -t mem
00:05:10.793    16:49:03	-- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:10.793    16:49:03	-- setup/common.sh@31 -- # IFS=': '
00:05:10.793    16:49:03	-- setup/common.sh@31 -- # read -r var val _
00:05:10.793     16:49:03	-- setup/common.sh@16 -- # printf '%s\n' 'MemTotal:       12242972 kB' 'MemFree:         5262976 kB' 'MemAvailable:   10539028 kB' 'Buffers:           39856 kB' 'Cached:          5335120 kB' 'SwapCached:            0 kB' 'Active:          1375472 kB' 'Inactive:        4127452 kB' 'Active(anon):       1040 kB' 'Inactive(anon):   138568 kB' 'Active(file):    1374432 kB' 'Inactive(file):  3988884 kB' 'Unevictable:       29168 kB' 'Mlocked:           27632 kB' 'SwapTotal:             0 kB' 'SwapFree:              0 kB' 'Dirty:               252 kB' 'Writeback:             0 kB' 'AnonPages:        157228 kB' 'Mapped:            67452 kB' 'Shmem:              2596 kB' 'KReclaimable:     234040 kB' 'Slab:             302552 kB' 'SReclaimable:     234040 kB' 'SUnreclaim:        68512 kB' 'KernelStack:        4420 kB' 'PageTables:         3828 kB' 'NFS_Unstable:          0 kB' 'Bounce:                0 kB' 'WritebackTmp:          0 kB' 'CommitLimit:     5597196 kB' 'Committed_AS:     498200 kB' 'VmallocTotal:   34359738367 kB' 'VmallocUsed:       19580 kB' 'VmallocChunk:          0 kB' 'Percpu:             8256 kB' 'HardwareCorrupted:     0 kB' 'AnonHugePages:         0 kB' 'ShmemHugePages:        0 kB' 'ShmemPmdMapped:        0 kB' 'FileHugePages:         0 kB' 'FilePmdMapped:         0 kB' 'HugePages_Total:     512' 'HugePages_Free:      512' 'HugePages_Rsvd:        0' 'HugePages_Surp:        0' 'Hugepagesize:       2048 kB' 'Hugetlb:         1048576 kB' 'DirectMap4k:      141164 kB' 'DirectMap2M:     4052992 kB' 'DirectMap1G:    10485760 kB'
00:05:10.793    16:49:03	-- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:10.793    16:49:03	-- setup/common.sh@32 -- # continue
00:05:10.793    16:49:03	-- setup/common.sh@31 -- # IFS=': '
00:05:10.793    16:49:03	-- setup/common.sh@31 -- # read -r var val _
00:05:10.793    16:49:03	-- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:10.793    16:49:03	-- setup/common.sh@32 -- # continue
00:05:10.793    16:49:03	-- setup/common.sh@31 -- # IFS=': '
00:05:10.793    16:49:03	-- setup/common.sh@31 -- # read -r var val _
00:05:10.793    16:49:03	-- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:10.793    16:49:03	-- setup/common.sh@32 -- # continue
00:05:10.793    16:49:03	-- setup/common.sh@31 -- # IFS=': '
00:05:10.793    16:49:03	-- setup/common.sh@31 -- # read -r var val _
00:05:10.793    16:49:03	-- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:10.793    16:49:03	-- setup/common.sh@32 -- # continue
00:05:10.793    16:49:03	-- setup/common.sh@31 -- # IFS=': '
00:05:10.793    16:49:03	-- setup/common.sh@31 -- # read -r var val _
00:05:10.793    16:49:03	-- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:10.793    16:49:03	-- setup/common.sh@32 -- # continue
00:05:10.793    16:49:03	-- setup/common.sh@31 -- # IFS=': '
00:05:10.793    16:49:03	-- setup/common.sh@31 -- # read -r var val _
00:05:10.793    16:49:03	-- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:10.793    16:49:03	-- setup/common.sh@32 -- # continue
00:05:10.793    16:49:03	-- setup/common.sh@31 -- # IFS=': '
00:05:10.793    16:49:03	-- setup/common.sh@31 -- # read -r var val _
00:05:10.794    16:49:03	-- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:10.794    16:49:03	-- setup/common.sh@32 -- # continue
00:05:10.794    16:49:03	-- setup/common.sh@31 -- # IFS=': '
00:05:10.794    16:49:03	-- setup/common.sh@31 -- # read -r var val _
00:05:10.794    16:49:03	-- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:10.794    16:49:03	-- setup/common.sh@32 -- # continue
00:05:10.794    16:49:03	-- setup/common.sh@31 -- # IFS=': '
00:05:10.794    16:49:03	-- setup/common.sh@31 -- # read -r var val _
00:05:10.794    16:49:03	-- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:10.794    16:49:03	-- setup/common.sh@32 -- # continue
00:05:10.794    16:49:03	-- setup/common.sh@31 -- # IFS=': '
00:05:10.794    16:49:03	-- setup/common.sh@31 -- # read -r var val _
00:05:10.794    16:49:03	-- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:10.794    16:49:03	-- setup/common.sh@32 -- # continue
00:05:10.794    16:49:03	-- setup/common.sh@31 -- # IFS=': '
00:05:10.794    16:49:03	-- setup/common.sh@31 -- # read -r var val _
00:05:10.794    16:49:03	-- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:10.794    16:49:03	-- setup/common.sh@32 -- # continue
00:05:10.794    16:49:03	-- setup/common.sh@31 -- # IFS=': '
00:05:10.794    16:49:03	-- setup/common.sh@31 -- # read -r var val _
00:05:10.794    16:49:03	-- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:10.794    16:49:03	-- setup/common.sh@32 -- # continue
00:05:10.794    16:49:03	-- setup/common.sh@31 -- # IFS=': '
00:05:10.794    16:49:03	-- setup/common.sh@31 -- # read -r var val _
00:05:10.794    16:49:03	-- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:10.794    16:49:03	-- setup/common.sh@32 -- # continue
00:05:10.794    16:49:03	-- setup/common.sh@31 -- # IFS=': '
00:05:10.794    16:49:03	-- setup/common.sh@31 -- # read -r var val _
00:05:10.794    16:49:03	-- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:10.794    16:49:03	-- setup/common.sh@32 -- # continue
00:05:10.794    16:49:03	-- setup/common.sh@31 -- # IFS=': '
00:05:10.794    16:49:03	-- setup/common.sh@31 -- # read -r var val _
00:05:10.794    16:49:03	-- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:10.794    16:49:03	-- setup/common.sh@32 -- # continue
00:05:10.794    16:49:03	-- setup/common.sh@31 -- # IFS=': '
00:05:10.794    16:49:03	-- setup/common.sh@31 -- # read -r var val _
00:05:10.794    16:49:03	-- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:10.794    16:49:03	-- setup/common.sh@32 -- # continue
00:05:10.794    16:49:03	-- setup/common.sh@31 -- # IFS=': '
00:05:10.794    16:49:03	-- setup/common.sh@31 -- # read -r var val _
00:05:10.794    16:49:03	-- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:10.794    16:49:03	-- setup/common.sh@32 -- # continue
00:05:10.794    16:49:03	-- setup/common.sh@31 -- # IFS=': '
00:05:10.794    16:49:03	-- setup/common.sh@31 -- # read -r var val _
00:05:10.794    16:49:03	-- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:10.794    16:49:03	-- setup/common.sh@32 -- # continue
00:05:10.794    16:49:03	-- setup/common.sh@31 -- # IFS=': '
00:05:10.794    16:49:03	-- setup/common.sh@31 -- # read -r var val _
00:05:10.794    16:49:03	-- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:10.794    16:49:03	-- setup/common.sh@32 -- # continue
00:05:10.794    16:49:03	-- setup/common.sh@31 -- # IFS=': '
00:05:10.794    16:49:03	-- setup/common.sh@31 -- # read -r var val _
00:05:10.794    16:49:03	-- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:10.794    16:49:03	-- setup/common.sh@32 -- # continue
00:05:10.794    16:49:03	-- setup/common.sh@31 -- # IFS=': '
00:05:10.794    16:49:03	-- setup/common.sh@31 -- # read -r var val _
00:05:10.794    16:49:03	-- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:10.794    16:49:03	-- setup/common.sh@32 -- # continue
00:05:10.794    16:49:03	-- setup/common.sh@31 -- # IFS=': '
00:05:10.794    16:49:03	-- setup/common.sh@31 -- # read -r var val _
00:05:10.794    16:49:03	-- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:10.794    16:49:03	-- setup/common.sh@32 -- # continue
00:05:10.794    16:49:03	-- setup/common.sh@31 -- # IFS=': '
00:05:10.794    16:49:03	-- setup/common.sh@31 -- # read -r var val _
00:05:10.794    16:49:03	-- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:10.794    16:49:03	-- setup/common.sh@32 -- # continue
00:05:10.794    16:49:03	-- setup/common.sh@31 -- # IFS=': '
00:05:10.794    16:49:03	-- setup/common.sh@31 -- # read -r var val _
00:05:10.794    16:49:03	-- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:10.794    16:49:03	-- setup/common.sh@32 -- # continue
00:05:10.794    16:49:03	-- setup/common.sh@31 -- # IFS=': '
00:05:10.794    16:49:03	-- setup/common.sh@31 -- # read -r var val _
00:05:10.794    16:49:03	-- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:10.794    16:49:03	-- setup/common.sh@32 -- # continue
00:05:10.794    16:49:03	-- setup/common.sh@31 -- # IFS=': '
00:05:10.794    16:49:03	-- setup/common.sh@31 -- # read -r var val _
00:05:10.794    16:49:03	-- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:10.794    16:49:03	-- setup/common.sh@32 -- # continue
00:05:10.794    16:49:03	-- setup/common.sh@31 -- # IFS=': '
00:05:10.794    16:49:03	-- setup/common.sh@31 -- # read -r var val _
00:05:10.794    16:49:03	-- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:10.794    16:49:03	-- setup/common.sh@32 -- # continue
00:05:10.794    16:49:03	-- setup/common.sh@31 -- # IFS=': '
00:05:10.794    16:49:03	-- setup/common.sh@31 -- # read -r var val _
00:05:10.794    16:49:03	-- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:10.794    16:49:03	-- setup/common.sh@32 -- # continue
00:05:10.794    16:49:03	-- setup/common.sh@31 -- # IFS=': '
00:05:10.794    16:49:03	-- setup/common.sh@31 -- # read -r var val _
00:05:10.794    16:49:03	-- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:10.794    16:49:03	-- setup/common.sh@32 -- # continue
00:05:10.794    16:49:03	-- setup/common.sh@31 -- # IFS=': '
00:05:10.794    16:49:03	-- setup/common.sh@31 -- # read -r var val _
00:05:10.794    16:49:03	-- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:10.794    16:49:03	-- setup/common.sh@32 -- # continue
00:05:10.794    16:49:03	-- setup/common.sh@31 -- # IFS=': '
00:05:10.794    16:49:03	-- setup/common.sh@31 -- # read -r var val _
00:05:10.794    16:49:03	-- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:10.794    16:49:03	-- setup/common.sh@32 -- # continue
00:05:10.794    16:49:03	-- setup/common.sh@31 -- # IFS=': '
00:05:10.794    16:49:03	-- setup/common.sh@31 -- # read -r var val _
00:05:10.794    16:49:03	-- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:10.794    16:49:03	-- setup/common.sh@32 -- # continue
00:05:10.794    16:49:03	-- setup/common.sh@31 -- # IFS=': '
00:05:10.794    16:49:03	-- setup/common.sh@31 -- # read -r var val _
00:05:10.794    16:49:03	-- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:10.794    16:49:03	-- setup/common.sh@32 -- # continue
00:05:10.794    16:49:03	-- setup/common.sh@31 -- # IFS=': '
00:05:10.794    16:49:03	-- setup/common.sh@31 -- # read -r var val _
00:05:10.794    16:49:03	-- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:10.794    16:49:03	-- setup/common.sh@32 -- # continue
00:05:10.794    16:49:03	-- setup/common.sh@31 -- # IFS=': '
00:05:10.794    16:49:03	-- setup/common.sh@31 -- # read -r var val _
00:05:10.794    16:49:03	-- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:10.794    16:49:03	-- setup/common.sh@32 -- # continue
00:05:10.794    16:49:03	-- setup/common.sh@31 -- # IFS=': '
00:05:10.794    16:49:03	-- setup/common.sh@31 -- # read -r var val _
00:05:10.794    16:49:03	-- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:10.794    16:49:03	-- setup/common.sh@32 -- # continue
00:05:10.794    16:49:03	-- setup/common.sh@31 -- # IFS=': '
00:05:10.794    16:49:03	-- setup/common.sh@31 -- # read -r var val _
00:05:10.794    16:49:03	-- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:10.794    16:49:03	-- setup/common.sh@32 -- # continue
00:05:10.794    16:49:03	-- setup/common.sh@31 -- # IFS=': '
00:05:10.794    16:49:03	-- setup/common.sh@31 -- # read -r var val _
00:05:10.794    16:49:03	-- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:10.794    16:49:03	-- setup/common.sh@32 -- # continue
00:05:10.794    16:49:03	-- setup/common.sh@31 -- # IFS=': '
00:05:10.794    16:49:03	-- setup/common.sh@31 -- # read -r var val _
00:05:10.794    16:49:03	-- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:10.794    16:49:03	-- setup/common.sh@32 -- # continue
00:05:10.794    16:49:03	-- setup/common.sh@31 -- # IFS=': '
00:05:10.794    16:49:03	-- setup/common.sh@31 -- # read -r var val _
00:05:10.794    16:49:03	-- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:10.794    16:49:03	-- setup/common.sh@32 -- # continue
00:05:10.794    16:49:03	-- setup/common.sh@31 -- # IFS=': '
00:05:10.794    16:49:03	-- setup/common.sh@31 -- # read -r var val _
00:05:10.794    16:49:03	-- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:10.794    16:49:03	-- setup/common.sh@32 -- # continue
00:05:10.794    16:49:03	-- setup/common.sh@31 -- # IFS=': '
00:05:10.794    16:49:03	-- setup/common.sh@31 -- # read -r var val _
00:05:10.794    16:49:03	-- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:10.794    16:49:03	-- setup/common.sh@32 -- # continue
00:05:10.794    16:49:03	-- setup/common.sh@31 -- # IFS=': '
00:05:10.794    16:49:03	-- setup/common.sh@31 -- # read -r var val _
00:05:10.794    16:49:03	-- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:10.794    16:49:03	-- setup/common.sh@32 -- # continue
00:05:10.794    16:49:03	-- setup/common.sh@31 -- # IFS=': '
00:05:10.794    16:49:03	-- setup/common.sh@31 -- # read -r var val _
00:05:10.794    16:49:03	-- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:10.794    16:49:03	-- setup/common.sh@32 -- # continue
00:05:10.794    16:49:03	-- setup/common.sh@31 -- # IFS=': '
00:05:10.794    16:49:03	-- setup/common.sh@31 -- # read -r var val _
00:05:10.794    16:49:03	-- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:10.794    16:49:03	-- setup/common.sh@33 -- # echo 0
00:05:10.794    16:49:03	-- setup/common.sh@33 -- # return 0
00:05:10.794   16:49:03	-- setup/hugepages.sh@100 -- # resv=0
00:05:10.794  nr_hugepages=512
00:05:10.794   16:49:03	-- setup/hugepages.sh@102 -- # echo nr_hugepages=512
00:05:10.794  resv_hugepages=0
00:05:10.794   16:49:03	-- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:05:10.794  surplus_hugepages=0
00:05:10.794   16:49:03	-- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:05:10.794  anon_hugepages=0
00:05:10.794   16:49:03	-- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:05:10.794   16:49:03	-- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv ))
00:05:10.795   16:49:03	-- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages ))
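Three get_meminfo probes (AnonHugePages, HugePages_Surp, HugePages_Rsvd) have now yielded anon=0, surp=0 and resv=0, and hugepages.sh@107-109 checks them against the requested pool. The bare nr_hugepages=512 / resv_hugepages=0 / surplus_hugepages=0 / anon_hugepages=0 lines appear before their echo commands only because stdout and xtrace interleave in the log. A hedged sketch of how the pieces combine, reusing the get_meminfo sketch above:

    nr_hugepages=512
    anon=$(get_meminfo AnonHugePages)     # 0 in this run
    surp=$(get_meminfo HugePages_Surp)    # 0
    resv=$(get_meminfo HugePages_Rsvd)    # 0
    total=$(get_meminfo HugePages_Total)  # 512
    # hugepages.sh@107/@110: the kernel's total must equal requested + surplus + reserved
    (( total == nr_hugepages + surp + resv )) || echo 'hugepage accounting mismatch' >&2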
00:05:10.795    16:49:03	-- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:05:10.795    16:49:03	-- setup/common.sh@17 -- # local get=HugePages_Total
00:05:10.795    16:49:03	-- setup/common.sh@18 -- # local node=
00:05:10.795    16:49:03	-- setup/common.sh@19 -- # local var val
00:05:10.795    16:49:03	-- setup/common.sh@20 -- # local mem_f mem
00:05:10.795    16:49:03	-- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:10.795    16:49:03	-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:10.795    16:49:03	-- setup/common.sh@25 -- # [[ -n '' ]]
00:05:10.795    16:49:03	-- setup/common.sh@28 -- # mapfile -t mem
00:05:10.795    16:49:03	-- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:10.795    16:49:03	-- setup/common.sh@31 -- # IFS=': '
00:05:10.795     16:49:03	-- setup/common.sh@16 -- # printf '%s\n' 'MemTotal:       12242972 kB' 'MemFree:         5262976 kB' 'MemAvailable:   10539028 kB' 'Buffers:           39856 kB' 'Cached:          5335120 kB' 'SwapCached:            0 kB' 'Active:          1375472 kB' 'Inactive:        4127348 kB' 'Active(anon):       1040 kB' 'Inactive(anon):   138464 kB' 'Active(file):    1374432 kB' 'Inactive(file):  3988884 kB' 'Unevictable:       29168 kB' 'Mlocked:           27632 kB' 'SwapTotal:             0 kB' 'SwapFree:              0 kB' 'Dirty:               252 kB' 'Writeback:             0 kB' 'AnonPages:        157128 kB' 'Mapped:            67452 kB' 'Shmem:              2596 kB' 'KReclaimable:     234040 kB' 'Slab:             302552 kB' 'SReclaimable:     234040 kB' 'SUnreclaim:        68512 kB' 'KernelStack:        4420 kB' 'PageTables:         3824 kB' 'NFS_Unstable:          0 kB' 'Bounce:                0 kB' 'WritebackTmp:          0 kB' 'CommitLimit:     5597196 kB' 'Committed_AS:     498200 kB' 'VmallocTotal:   34359738367 kB' 'VmallocUsed:       19564 kB' 'VmallocChunk:          0 kB' 'Percpu:             8256 kB' 'HardwareCorrupted:     0 kB' 'AnonHugePages:         0 kB' 'ShmemHugePages:        0 kB' 'ShmemPmdMapped:        0 kB' 'FileHugePages:         0 kB' 'FilePmdMapped:         0 kB' 'HugePages_Total:     512' 'HugePages_Free:      512' 'HugePages_Rsvd:        0' 'HugePages_Surp:        0' 'Hugepagesize:       2048 kB' 'Hugetlb:         1048576 kB' 'DirectMap4k:      141164 kB' 'DirectMap2M:     4052992 kB' 'DirectMap1G:    10485760 kB'
00:05:10.795    16:49:03	-- setup/common.sh@31 -- # read -r var val _
00:05:10.795    16:49:03	-- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:10.795    16:49:03	-- setup/common.sh@32 -- # continue
00:05:10.795    16:49:03	-- setup/common.sh@31 -- # IFS=': '
00:05:10.795    16:49:03	-- setup/common.sh@31 -- # read -r var val _
00:05:10.795    16:49:03	-- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:10.795    16:49:03	-- setup/common.sh@32 -- # continue
00:05:10.795    16:49:03	-- setup/common.sh@31 -- # IFS=': '
00:05:10.795    16:49:03	-- setup/common.sh@31 -- # read -r var val _
00:05:10.795    16:49:03	-- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:10.795    16:49:03	-- setup/common.sh@32 -- # continue
00:05:10.795    16:49:03	-- setup/common.sh@31 -- # IFS=': '
00:05:10.795    16:49:03	-- setup/common.sh@31 -- # read -r var val _
00:05:10.795    16:49:03	-- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:10.795    16:49:03	-- setup/common.sh@32 -- # continue
00:05:10.795    16:49:03	-- setup/common.sh@31 -- # IFS=': '
00:05:10.795    16:49:03	-- setup/common.sh@31 -- # read -r var val _
00:05:10.795    16:49:03	-- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:10.795    16:49:03	-- setup/common.sh@32 -- # continue
00:05:10.795    16:49:03	-- setup/common.sh@31 -- # IFS=': '
00:05:10.795    16:49:03	-- setup/common.sh@31 -- # read -r var val _
00:05:10.795    16:49:03	-- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:10.795    16:49:03	-- setup/common.sh@32 -- # continue
00:05:10.795    16:49:03	-- setup/common.sh@31 -- # IFS=': '
00:05:10.795    16:49:03	-- setup/common.sh@31 -- # read -r var val _
00:05:10.795    16:49:03	-- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:10.795    16:49:03	-- setup/common.sh@32 -- # continue
00:05:10.795    16:49:03	-- setup/common.sh@31 -- # IFS=': '
00:05:10.795    16:49:03	-- setup/common.sh@31 -- # read -r var val _
00:05:10.795    16:49:03	-- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:10.795    16:49:03	-- setup/common.sh@32 -- # continue
00:05:10.795    16:49:03	-- setup/common.sh@31 -- # IFS=': '
00:05:10.795    16:49:03	-- setup/common.sh@31 -- # read -r var val _
00:05:10.795    16:49:03	-- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:10.795    16:49:03	-- setup/common.sh@32 -- # continue
00:05:10.795    16:49:03	-- setup/common.sh@31 -- # IFS=': '
00:05:10.795    16:49:03	-- setup/common.sh@31 -- # read -r var val _
00:05:10.795    16:49:03	-- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:10.795    16:49:03	-- setup/common.sh@32 -- # continue
00:05:10.795    16:49:03	-- setup/common.sh@31 -- # IFS=': '
00:05:10.795    16:49:03	-- setup/common.sh@31 -- # read -r var val _
00:05:10.795    16:49:03	-- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:10.795    16:49:03	-- setup/common.sh@32 -- # continue
00:05:10.795    16:49:03	-- setup/common.sh@31 -- # IFS=': '
00:05:10.795    16:49:03	-- setup/common.sh@31 -- # read -r var val _
00:05:10.795    16:49:03	-- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:10.795    16:49:03	-- setup/common.sh@32 -- # continue
00:05:10.795    16:49:03	-- setup/common.sh@31 -- # IFS=': '
00:05:10.795    16:49:03	-- setup/common.sh@31 -- # read -r var val _
00:05:10.795    16:49:03	-- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:10.795    16:49:03	-- setup/common.sh@32 -- # continue
00:05:10.795    16:49:03	-- setup/common.sh@31 -- # IFS=': '
00:05:10.795    16:49:03	-- setup/common.sh@31 -- # read -r var val _
00:05:10.795    16:49:03	-- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:10.795    16:49:03	-- setup/common.sh@32 -- # continue
00:05:10.795    16:49:03	-- setup/common.sh@31 -- # IFS=': '
00:05:10.795    16:49:03	-- setup/common.sh@31 -- # read -r var val _
00:05:10.795    16:49:03	-- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:10.795    16:49:03	-- setup/common.sh@32 -- # continue
00:05:10.795    16:49:03	-- setup/common.sh@31 -- # IFS=': '
00:05:10.795    16:49:03	-- setup/common.sh@31 -- # read -r var val _
00:05:10.795    16:49:03	-- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:10.795    16:49:03	-- setup/common.sh@32 -- # continue
00:05:10.795    16:49:03	-- setup/common.sh@31 -- # IFS=': '
00:05:10.795    16:49:03	-- setup/common.sh@31 -- # read -r var val _
00:05:10.795    16:49:03	-- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:10.795    16:49:03	-- setup/common.sh@32 -- # continue
00:05:10.795    16:49:03	-- setup/common.sh@31 -- # IFS=': '
00:05:10.795    16:49:03	-- setup/common.sh@31 -- # read -r var val _
00:05:10.795    16:49:03	-- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:10.795    16:49:03	-- setup/common.sh@32 -- # continue
00:05:10.795    16:49:03	-- setup/common.sh@31 -- # IFS=': '
00:05:10.795    16:49:03	-- setup/common.sh@31 -- # read -r var val _
00:05:10.795    16:49:03	-- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:10.795    16:49:03	-- setup/common.sh@32 -- # continue
00:05:10.795    16:49:03	-- setup/common.sh@31 -- # IFS=': '
00:05:10.795    16:49:03	-- setup/common.sh@31 -- # read -r var val _
00:05:10.795    16:49:03	-- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:10.795    16:49:03	-- setup/common.sh@32 -- # continue
00:05:10.795    16:49:03	-- setup/common.sh@31 -- # IFS=': '
00:05:10.795    16:49:03	-- setup/common.sh@31 -- # read -r var val _
00:05:10.795    16:49:03	-- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:10.795    16:49:03	-- setup/common.sh@32 -- # continue
00:05:10.795    16:49:03	-- setup/common.sh@31 -- # IFS=': '
00:05:10.795    16:49:03	-- setup/common.sh@31 -- # read -r var val _
00:05:10.795    16:49:03	-- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:10.795    16:49:03	-- setup/common.sh@32 -- # continue
00:05:10.795    16:49:03	-- setup/common.sh@31 -- # IFS=': '
00:05:10.795    16:49:03	-- setup/common.sh@31 -- # read -r var val _
00:05:10.795    16:49:03	-- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:10.795    16:49:03	-- setup/common.sh@32 -- # continue
00:05:10.795    16:49:03	-- setup/common.sh@31 -- # IFS=': '
00:05:10.795    16:49:03	-- setup/common.sh@31 -- # read -r var val _
00:05:10.795    16:49:03	-- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:10.795    16:49:03	-- setup/common.sh@32 -- # continue
00:05:10.795    16:49:03	-- setup/common.sh@31 -- # IFS=': '
00:05:10.795    16:49:03	-- setup/common.sh@31 -- # read -r var val _
00:05:10.795    16:49:03	-- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:10.795    16:49:03	-- setup/common.sh@32 -- # continue
00:05:10.795    16:49:03	-- setup/common.sh@31 -- # IFS=': '
00:05:10.795    16:49:03	-- setup/common.sh@31 -- # read -r var val _
00:05:10.795    16:49:03	-- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:10.795    16:49:03	-- setup/common.sh@32 -- # continue
00:05:10.795    16:49:03	-- setup/common.sh@31 -- # IFS=': '
00:05:10.795    16:49:03	-- setup/common.sh@31 -- # read -r var val _
00:05:10.795    16:49:03	-- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:10.795    16:49:03	-- setup/common.sh@32 -- # continue
00:05:10.795    16:49:03	-- setup/common.sh@31 -- # IFS=': '
00:05:10.795    16:49:03	-- setup/common.sh@31 -- # read -r var val _
00:05:10.796    16:49:03	-- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:10.796    16:49:03	-- setup/common.sh@32 -- # continue
00:05:10.796    16:49:03	-- setup/common.sh@31 -- # IFS=': '
00:05:10.796    16:49:03	-- setup/common.sh@31 -- # read -r var val _
00:05:10.796    16:49:03	-- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:10.796    16:49:03	-- setup/common.sh@32 -- # continue
00:05:10.796    16:49:03	-- setup/common.sh@31 -- # IFS=': '
00:05:10.796    16:49:03	-- setup/common.sh@31 -- # read -r var val _
00:05:10.796    16:49:03	-- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:10.796    16:49:03	-- setup/common.sh@32 -- # continue
00:05:10.796    16:49:03	-- setup/common.sh@31 -- # IFS=': '
00:05:10.796    16:49:03	-- setup/common.sh@31 -- # read -r var val _
00:05:10.796    16:49:03	-- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:10.796    16:49:03	-- setup/common.sh@32 -- # continue
00:05:10.796    16:49:03	-- setup/common.sh@31 -- # IFS=': '
00:05:10.796    16:49:03	-- setup/common.sh@31 -- # read -r var val _
00:05:10.796    16:49:03	-- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:10.796    16:49:03	-- setup/common.sh@32 -- # continue
00:05:10.796    16:49:03	-- setup/common.sh@31 -- # IFS=': '
00:05:10.796    16:49:03	-- setup/common.sh@31 -- # read -r var val _
00:05:10.796    16:49:03	-- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:10.796    16:49:03	-- setup/common.sh@32 -- # continue
00:05:10.796    16:49:03	-- setup/common.sh@31 -- # IFS=': '
00:05:10.796    16:49:03	-- setup/common.sh@31 -- # read -r var val _
00:05:10.796    16:49:03	-- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:10.796    16:49:03	-- setup/common.sh@32 -- # continue
00:05:10.796    16:49:03	-- setup/common.sh@31 -- # IFS=': '
00:05:10.796    16:49:03	-- setup/common.sh@31 -- # read -r var val _
00:05:10.796    16:49:03	-- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:10.796    16:49:03	-- setup/common.sh@32 -- # continue
00:05:10.796    16:49:03	-- setup/common.sh@31 -- # IFS=': '
00:05:10.796    16:49:03	-- setup/common.sh@31 -- # read -r var val _
00:05:10.796    16:49:03	-- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:10.796    16:49:03	-- setup/common.sh@32 -- # continue
00:05:10.796    16:49:03	-- setup/common.sh@31 -- # IFS=': '
00:05:10.796    16:49:03	-- setup/common.sh@31 -- # read -r var val _
00:05:10.796    16:49:03	-- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:10.796    16:49:03	-- setup/common.sh@32 -- # continue
00:05:10.796    16:49:03	-- setup/common.sh@31 -- # IFS=': '
00:05:10.796    16:49:03	-- setup/common.sh@31 -- # read -r var val _
00:05:10.796    16:49:03	-- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:10.796    16:49:03	-- setup/common.sh@32 -- # continue
00:05:10.796    16:49:03	-- setup/common.sh@31 -- # IFS=': '
00:05:10.796    16:49:03	-- setup/common.sh@31 -- # read -r var val _
00:05:10.796    16:49:03	-- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:10.796    16:49:03	-- setup/common.sh@32 -- # continue
00:05:10.796    16:49:03	-- setup/common.sh@31 -- # IFS=': '
00:05:10.796    16:49:03	-- setup/common.sh@31 -- # read -r var val _
00:05:10.796    16:49:03	-- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:10.796    16:49:03	-- setup/common.sh@32 -- # continue
00:05:10.796    16:49:03	-- setup/common.sh@31 -- # IFS=': '
00:05:10.796    16:49:03	-- setup/common.sh@31 -- # read -r var val _
00:05:10.796    16:49:03	-- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:10.796    16:49:03	-- setup/common.sh@32 -- # continue
00:05:10.796    16:49:03	-- setup/common.sh@31 -- # IFS=': '
00:05:10.796    16:49:03	-- setup/common.sh@31 -- # read -r var val _
00:05:10.796    16:49:03	-- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:10.796    16:49:03	-- setup/common.sh@32 -- # continue
00:05:10.796    16:49:03	-- setup/common.sh@31 -- # IFS=': '
00:05:10.796    16:49:03	-- setup/common.sh@31 -- # read -r var val _
00:05:10.796    16:49:03	-- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:10.796    16:49:03	-- setup/common.sh@33 -- # echo 512
00:05:10.796    16:49:03	-- setup/common.sh@33 -- # return 0
00:05:10.796   16:49:03	-- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv ))
00:05:10.796   16:49:03	-- setup/hugepages.sh@112 -- # get_nodes
00:05:10.796   16:49:03	-- setup/hugepages.sh@27 -- # local node
00:05:10.796   16:49:03	-- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:10.796   16:49:03	-- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:05:10.796   16:49:03	-- setup/hugepages.sh@32 -- # no_nodes=1
00:05:10.796   16:49:03	-- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
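get_nodes (hugepages.sh@27-33) then enumerates NUMA nodes with an extglob over sysfs and records each node's configured huge page count; this VM exposes a single node, hence no_nodes=1. A sketch, under the assumption that the per-node count comes from the node's hugepages sysfs file (the trace only shows the already-expanded result, nodes_sys[0]=512):

    shopt -s extglob nullglob
    nodes_sys=()
    for node in /sys/devices/system/node/node+([0-9]); do
        # ${node##*node} strips everything up to the last 'node', leaving the id
        nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
    done
    no_nodes=${#nodes_sys[@]}
    (( no_nodes > 0 )) || echo 'no NUMA nodes visible in sysfs' >&2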
00:05:10.796   16:49:03	-- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:10.796   16:49:03	-- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:05:10.796    16:49:03	-- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:05:10.796    16:49:03	-- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:10.796    16:49:03	-- setup/common.sh@18 -- # local node=0
00:05:10.796    16:49:03	-- setup/common.sh@19 -- # local var val
00:05:10.796    16:49:03	-- setup/common.sh@20 -- # local mem_f mem
00:05:10.796    16:49:03	-- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:10.796    16:49:03	-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:05:10.796    16:49:03	-- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:05:10.796    16:49:03	-- setup/common.sh@28 -- # mapfile -t mem
00:05:10.796    16:49:03	-- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:10.796    16:49:03	-- setup/common.sh@31 -- # IFS=': '
00:05:10.796    16:49:03	-- setup/common.sh@31 -- # read -r var val _
00:05:10.796     16:49:03	-- setup/common.sh@16 -- # printf '%s\n' 'MemTotal:       12242972 kB' 'MemFree:         5262472 kB' 'MemUsed:         6980500 kB' 'SwapCached:            0 kB' 'Active:          1375472 kB' 'Inactive:        4127468 kB' 'Active(anon):       1040 kB' 'Inactive(anon):   138584 kB' 'Active(file):    1374432 kB' 'Inactive(file):  3988884 kB' 'Unevictable:       29168 kB' 'Mlocked:           27632 kB' 'Dirty:               252 kB' 'Writeback:             0 kB' 'FilePages:       5374976 kB' 'Mapped:            67452 kB' 'AnonPages:        157252 kB' 'Shmem:              2596 kB' 'KernelStack:        4472 kB' 'PageTables:         3780 kB' 'NFS_Unstable:          0 kB' 'Bounce:                0 kB' 'WritebackTmp:          0 kB' 'KReclaimable:     234040 kB' 'Slab:             302456 kB' 'SReclaimable:     234040 kB' 'SUnreclaim:        68416 kB' 'AnonHugePages:         0 kB' 'ShmemHugePages:        0 kB' 'ShmemPmdMapped:        0 kB' 'FileHugePages:        0 kB' 'FilePmdMapped:        0 kB' 'HugePages_Total:   512' 'HugePages_Free:    512' 'HugePages_Surp:      0'
00:05:10.796    16:49:03	-- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:10.796    16:49:03	-- setup/common.sh@32 -- # continue
00:05:10.796    16:49:03	-- setup/common.sh@31 -- # IFS=': '
00:05:10.796    16:49:03	-- setup/common.sh@31 -- # read -r var val _
00:05:10.796    16:49:03	-- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:10.796    16:49:03	-- setup/common.sh@32 -- # continue
00:05:10.796    16:49:03	-- setup/common.sh@31 -- # IFS=': '
00:05:10.796    16:49:03	-- setup/common.sh@31 -- # read -r var val _
00:05:10.796    16:49:03	-- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:10.796    16:49:03	-- setup/common.sh@32 -- # continue
00:05:10.796    16:49:03	-- setup/common.sh@31 -- # IFS=': '
00:05:10.796    16:49:03	-- setup/common.sh@31 -- # read -r var val _
00:05:10.796    16:49:03	-- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:10.796    16:49:03	-- setup/common.sh@32 -- # continue
00:05:10.796    16:49:03	-- setup/common.sh@31 -- # IFS=': '
00:05:10.796    16:49:03	-- setup/common.sh@31 -- # read -r var val _
00:05:10.796    16:49:03	-- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:10.796    16:49:03	-- setup/common.sh@32 -- # continue
00:05:10.796    16:49:03	-- setup/common.sh@31 -- # IFS=': '
00:05:10.796    16:49:03	-- setup/common.sh@31 -- # read -r var val _
00:05:10.796    16:49:03	-- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:10.796    16:49:03	-- setup/common.sh@32 -- # continue
00:05:10.796    16:49:03	-- setup/common.sh@31 -- # IFS=': '
00:05:10.796    16:49:03	-- setup/common.sh@31 -- # read -r var val _
00:05:10.796    16:49:03	-- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:10.796    16:49:03	-- setup/common.sh@32 -- # continue
00:05:10.796    16:49:03	-- setup/common.sh@31 -- # IFS=': '
00:05:10.796    16:49:03	-- setup/common.sh@31 -- # read -r var val _
00:05:10.796    16:49:03	-- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:10.796    16:49:03	-- setup/common.sh@32 -- # continue
00:05:10.796    16:49:03	-- setup/common.sh@31 -- # IFS=': '
00:05:10.796    16:49:03	-- setup/common.sh@31 -- # read -r var val _
00:05:10.796    16:49:03	-- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:10.796    16:49:03	-- setup/common.sh@32 -- # continue
00:05:10.796    16:49:03	-- setup/common.sh@31 -- # IFS=': '
00:05:10.796    16:49:03	-- setup/common.sh@31 -- # read -r var val _
00:05:10.796    16:49:03	-- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:10.796    16:49:03	-- setup/common.sh@32 -- # continue
00:05:10.796    16:49:03	-- setup/common.sh@31 -- # IFS=': '
00:05:10.796    16:49:03	-- setup/common.sh@31 -- # read -r var val _
00:05:10.796    16:49:03	-- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:10.796    16:49:03	-- setup/common.sh@32 -- # continue
00:05:10.796    16:49:03	-- setup/common.sh@31 -- # IFS=': '
00:05:10.796    16:49:03	-- setup/common.sh@31 -- # read -r var val _
00:05:10.796    16:49:03	-- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:10.796    16:49:03	-- setup/common.sh@32 -- # continue
00:05:10.796    16:49:03	-- setup/common.sh@31 -- # IFS=': '
00:05:10.796    16:49:03	-- setup/common.sh@31 -- # read -r var val _
00:05:10.796    16:49:03	-- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:10.796    16:49:03	-- setup/common.sh@32 -- # continue
00:05:10.796    16:49:03	-- setup/common.sh@31 -- # IFS=': '
00:05:10.796    16:49:03	-- setup/common.sh@31 -- # read -r var val _
00:05:10.796    16:49:03	-- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:10.796    16:49:03	-- setup/common.sh@32 -- # continue
00:05:10.796    16:49:03	-- setup/common.sh@31 -- # IFS=': '
00:05:10.796    16:49:03	-- setup/common.sh@31 -- # read -r var val _
00:05:10.796    16:49:03	-- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:10.796    16:49:03	-- setup/common.sh@32 -- # continue
00:05:10.796    16:49:03	-- setup/common.sh@31 -- # IFS=': '
00:05:10.796    16:49:03	-- setup/common.sh@31 -- # read -r var val _
00:05:10.796    16:49:03	-- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:10.796    16:49:03	-- setup/common.sh@32 -- # continue
00:05:10.796    16:49:03	-- setup/common.sh@31 -- # IFS=': '
00:05:10.796    16:49:03	-- setup/common.sh@31 -- # read -r var val _
00:05:10.796    16:49:03	-- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:10.796    16:49:03	-- setup/common.sh@32 -- # continue
00:05:10.796    16:49:03	-- setup/common.sh@31 -- # IFS=': '
00:05:10.797    16:49:03	-- setup/common.sh@31 -- # read -r var val _
00:05:10.797    16:49:03	-- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:10.797    16:49:03	-- setup/common.sh@32 -- # continue
00:05:10.797    16:49:03	-- setup/common.sh@31 -- # IFS=': '
00:05:10.797    16:49:03	-- setup/common.sh@31 -- # read -r var val _
00:05:10.797    16:49:03	-- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:10.797    16:49:03	-- setup/common.sh@32 -- # continue
00:05:10.797    16:49:03	-- setup/common.sh@31 -- # IFS=': '
00:05:10.797    16:49:03	-- setup/common.sh@31 -- # read -r var val _
00:05:10.797    16:49:03	-- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:10.797    16:49:03	-- setup/common.sh@32 -- # ... (the same IFS=': ' read / compare / continue pattern repeats for the remaining /proc/meminfo keys, NFS_Unstable through HugePages_Free, until HugePages_Surp matches)
00:05:10.797    16:49:03	-- setup/common.sh@33 -- # echo 0
00:05:10.797    16:49:03	-- setup/common.sh@33 -- # return 0
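The return 0 above closes one full get_meminfo scan. As a reading aid, here is a minimal self-contained sketch of what the trace shows setup/common.sh doing; the function name, the mapfile read, and the "Node N " prefix strip are taken from the xtrace lines, while the exact structure of the real script may differ:

    shopt -s extglob   # needed for the +([0-9]) pattern below

    get_meminfo() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo mem line var val _
        # Per-node counters live under /sys when a node id is supplied.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")     # drop the "Node N " prefix, if any
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue  # skip keys until the requested one
            echo "${val:-0}"
            return 0
        done
        return 1
    }

    get_meminfo HugePages_Surp      # prints 0, as in the trace above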
00:05:10.797   16:49:03	-- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:10.797   16:49:03	-- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:10.797   16:49:03	-- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:10.797   16:49:03	-- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:10.797  node0=512 expecting 512
00:05:10.797   16:49:03	-- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:05:10.797   16:49:03	-- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:05:10.797  
00:05:10.797  real	0m0.900s
00:05:10.797  user	0m0.366s
00:05:10.797  sys	0m0.534s
00:05:10.797   16:49:03	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:05:10.797   16:49:03	-- common/autotest_common.sh@10 -- # set +x
00:05:10.797  ************************************
00:05:10.797  END TEST custom_alloc
00:05:10.797  ************************************
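The START/END banners and the real/user/sys block above come from the run_test wrapper in autotest_common.sh. A hedged reconstruction, assuming the timing is bash's time keyword (the real wrapper also does argument checks and xtrace control not reproduced here):

    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"    # emits the real/user/sys lines seen in the log
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
    }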
00:05:11.056   16:49:03	-- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc
00:05:11.056   16:49:03	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:05:11.056   16:49:03	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:05:11.056   16:49:03	-- common/autotest_common.sh@10 -- # set +x
00:05:11.056  ************************************
00:05:11.056  START TEST no_shrink_alloc
00:05:11.056  ************************************
00:05:11.056   16:49:03	-- common/autotest_common.sh@1114 -- # no_shrink_alloc
00:05:11.056   16:49:03	-- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0
00:05:11.056   16:49:03	-- setup/hugepages.sh@49 -- # local size=2097152
00:05:11.056   16:49:03	-- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:05:11.056   16:49:03	-- setup/hugepages.sh@51 -- # shift
00:05:11.056   16:49:03	-- setup/hugepages.sh@52 -- # node_ids=('0')
00:05:11.056   16:49:03	-- setup/hugepages.sh@52 -- # local node_ids
00:05:11.056   16:49:03	-- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:05:11.056   16:49:03	-- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:05:11.056   16:49:03	-- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:05:11.056   16:49:03	-- setup/hugepages.sh@62 -- # user_nodes=('0')
00:05:11.056   16:49:03	-- setup/hugepages.sh@62 -- # local user_nodes
00:05:11.056   16:49:03	-- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:05:11.056   16:49:03	-- setup/hugepages.sh@65 -- # local _no_nodes=1
00:05:11.056   16:49:03	-- setup/hugepages.sh@67 -- # nodes_test=()
00:05:11.056   16:49:03	-- setup/hugepages.sh@67 -- # local -g nodes_test
00:05:11.056   16:49:03	-- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:05:11.056   16:49:03	-- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:05:11.056   16:49:03	-- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:05:11.056   16:49:03	-- setup/hugepages.sh@73 -- # return 0
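Spelled out, the arithmetic traced above: a 2097152 kB request at the default 2048 kB hugepage size yields 1024 pages, all placed on the single requested node 0 (values from the log; variable names are illustrative):

    size_kb=2097152
    default_hugepages_kb=2048
    nr_hugepages=$(( size_kb / default_hugepages_kb ))   # 1024
    user_nodes=(0)
    declare -A nodes_test
    for node in "${user_nodes[@]}"; do
        nodes_test[$node]=$nr_hugepages
    done
    echo "node${user_nodes[0]}=${nodes_test[0]}"         # node0=1024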
00:05:11.056   16:49:03	-- setup/hugepages.sh@198 -- # setup output
00:05:11.056   16:49:03	-- setup/common.sh@9 -- # [[ output == output ]]
00:05:11.056   16:49:03	-- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:05:11.315  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev
00:05:11.315  0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:11.885   16:49:04	-- setup/hugepages.sh@199 -- # verify_nr_hugepages
00:05:11.885   16:49:04	-- setup/hugepages.sh@89 -- # local node
00:05:11.885   16:49:04	-- setup/hugepages.sh@90 -- # local sorted_t
00:05:11.885   16:49:04	-- setup/hugepages.sh@91 -- # local sorted_s
00:05:11.885   16:49:04	-- setup/hugepages.sh@92 -- # local surp
00:05:11.885   16:49:04	-- setup/hugepages.sh@93 -- # local resv
00:05:11.885   16:49:04	-- setup/hugepages.sh@94 -- # local anon
00:05:11.885   16:49:04	-- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
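A hedged reading of the check above: the contents of /sys/kernel/mm/transparent_hugepage/enabled ("always [madvise] never") are matched against *[never]*, so AnonHugePages is only sampled when transparent hugepages are not disabled:

    thp=$(</sys/kernel/mm/transparent_hugepage/enabled)
    if [[ $thp != *"[never]"* ]]; then
        anon=$(get_meminfo AnonHugePages)   # get_meminfo as sketched earlier
    else
        anon=0
    fi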
00:05:11.885    16:49:04	-- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:11.885    16:49:04	-- setup/common.sh@17 -- # local get=AnonHugePages
00:05:11.885    16:49:04	-- setup/common.sh@18 -- # local node=
00:05:11.885    16:49:04	-- setup/common.sh@19 -- # local var val
00:05:11.885    16:49:04	-- setup/common.sh@20 -- # local mem_f mem
00:05:11.885    16:49:04	-- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:11.885    16:49:04	-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:11.885    16:49:04	-- setup/common.sh@25 -- # [[ -n '' ]]
00:05:11.885    16:49:04	-- setup/common.sh@28 -- # mapfile -t mem
00:05:11.885    16:49:04	-- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:11.885    16:49:04	-- setup/common.sh@31 -- # IFS=': '
00:05:11.885    16:49:04	-- setup/common.sh@31 -- # read -r var val _
00:05:11.885     16:49:04	-- setup/common.sh@16 -- # printf '%s\n' 'MemTotal:       12242972 kB' 'MemFree:         4213620 kB' 'MemAvailable:    9489668 kB' 'Buffers:           39856 kB' 'Cached:          5335116 kB' 'SwapCached:            0 kB' 'Active:          1375480 kB' 'Inactive:        4127692 kB' 'Active(anon):       1048 kB' 'Inactive(anon):   138812 kB' 'Active(file):    1374432 kB' 'Inactive(file):  3988880 kB' 'Unevictable:       29168 kB' 'Mlocked:           27632 kB' 'SwapTotal:             0 kB' 'SwapFree:              0 kB' 'Dirty:               252 kB' 'Writeback:             0 kB' 'AnonPages:        157372 kB' 'Mapped:            67716 kB' 'Shmem:              2596 kB' 'KReclaimable:     234040 kB' 'Slab:             302300 kB' 'SReclaimable:     234040 kB' 'SUnreclaim:        68260 kB' 'KernelStack:        4416 kB' 'PageTables:         3384 kB' 'NFS_Unstable:          0 kB' 'Bounce:                0 kB' 'WritebackTmp:          0 kB' 'CommitLimit:     5072908 kB' 'Committed_AS:     498400 kB' 'VmallocTotal:   34359738367 kB' 'VmallocUsed:       19564 kB' 'VmallocChunk:          0 kB' 'Percpu:             8256 kB' 'HardwareCorrupted:     0 kB' 'AnonHugePages:         0 kB' 'ShmemHugePages:        0 kB' 'ShmemPmdMapped:        0 kB' 'FileHugePages:         0 kB' 'FilePmdMapped:         0 kB' 'HugePages_Total:    1024' 'HugePages_Free:     1024' 'HugePages_Rsvd:        0' 'HugePages_Surp:        0' 'Hugepagesize:       2048 kB' 'Hugetlb:         2097152 kB' 'DirectMap4k:      141164 kB' 'DirectMap2M:     4052992 kB' 'DirectMap1G:    10485760 kB'
00:05:11.885    16:49:04	-- setup/common.sh@32 -- # ... (read/continue over every key from MemTotal through HardwareCorrupted until AnonHugePages matches)
00:05:11.886    16:49:04	-- setup/common.sh@33 -- # echo 0
00:05:11.886    16:49:04	-- setup/common.sh@33 -- # return 0
00:05:11.886   16:49:04	-- setup/hugepages.sh@97 -- # anon=0
00:05:11.886    16:49:04	-- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:11.886    16:49:04	-- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:11.886    16:49:04	-- setup/common.sh@18 -- # local node=
00:05:11.886    16:49:04	-- setup/common.sh@19 -- # local var val
00:05:11.886    16:49:04	-- setup/common.sh@20 -- # local mem_f mem
00:05:11.886    16:49:04	-- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:11.886    16:49:04	-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:11.886    16:49:04	-- setup/common.sh@25 -- # [[ -n '' ]]
00:05:11.886    16:49:04	-- setup/common.sh@28 -- # mapfile -t mem
00:05:11.886    16:49:04	-- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:11.886    16:49:04	-- setup/common.sh@31 -- # IFS=': '
00:05:11.886     16:49:04	-- setup/common.sh@16 -- # printf '%s\n' 'MemTotal:       12242972 kB' 'MemFree:         4213620 kB' 'MemAvailable:    9489668 kB' 'Buffers:           39856 kB' 'Cached:          5335116 kB' 'SwapCached:            0 kB' 'Active:          1375480 kB' 'Inactive:        4127952 kB' 'Active(anon):       1048 kB' 'Inactive(anon):   139072 kB' 'Active(file):    1374432 kB' 'Inactive(file):  3988880 kB' 'Unevictable:       29168 kB' 'Mlocked:           27632 kB' 'SwapTotal:             0 kB' 'SwapFree:              0 kB' 'Dirty:               252 kB' 'Writeback:             0 kB' 'AnonPages:        157632 kB' 'Mapped:            67716 kB' 'Shmem:              2596 kB' 'KReclaimable:     234040 kB' 'Slab:             302300 kB' 'SReclaimable:     234040 kB' 'SUnreclaim:        68260 kB' 'KernelStack:        4416 kB' 'PageTables:         3384 kB' 'NFS_Unstable:          0 kB' 'Bounce:                0 kB' 'WritebackTmp:          0 kB' 'CommitLimit:     5072908 kB' 'Committed_AS:     498400 kB' 'VmallocTotal:   34359738367 kB' 'VmallocUsed:       19580 kB' 'VmallocChunk:          0 kB' 'Percpu:             8256 kB' 'HardwareCorrupted:     0 kB' 'AnonHugePages:         0 kB' 'ShmemHugePages:        0 kB' 'ShmemPmdMapped:        0 kB' 'FileHugePages:         0 kB' 'FilePmdMapped:         0 kB' 'HugePages_Total:    1024' 'HugePages_Free:     1024' 'HugePages_Rsvd:        0' 'HugePages_Surp:        0' 'Hugepagesize:       2048 kB' 'Hugetlb:         2097152 kB' 'DirectMap4k:      141164 kB' 'DirectMap2M:     4052992 kB' 'DirectMap1G:    10485760 kB'
00:05:11.886    16:49:04	-- setup/common.sh@31 -- # read -r var val _
00:05:11.886    16:49:04	-- setup/common.sh@32 -- # ... (read/continue over every key from MemTotal through HugePages_Rsvd until HugePages_Surp matches)
00:05:11.887    16:49:04	-- setup/common.sh@33 -- # echo 0
00:05:11.887    16:49:04	-- setup/common.sh@33 -- # return 0
00:05:11.887   16:49:04	-- setup/hugepages.sh@99 -- # surp=0
00:05:11.887    16:49:04	-- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:11.887    16:49:04	-- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:05:11.887    16:49:04	-- setup/common.sh@18 -- # local node=
00:05:11.887    16:49:04	-- setup/common.sh@19 -- # local var val
00:05:11.887    16:49:04	-- setup/common.sh@20 -- # local mem_f mem
00:05:11.887    16:49:04	-- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:11.887    16:49:04	-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:11.887    16:49:04	-- setup/common.sh@25 -- # [[ -n '' ]]
00:05:11.887    16:49:04	-- setup/common.sh@28 -- # mapfile -t mem
00:05:11.887    16:49:04	-- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:11.887    16:49:04	-- setup/common.sh@31 -- # IFS=': '
00:05:11.887     16:49:04	-- setup/common.sh@16 -- # printf '%s\n' 'MemTotal:       12242972 kB' 'MemFree:         4213860 kB' 'MemAvailable:    9489912 kB' 'Buffers:           39864 kB' 'Cached:          5335120 kB' 'SwapCached:            0 kB' 'Active:          1375492 kB' 'Inactive:        4127472 kB' 'Active(anon):       1056 kB' 'Inactive(anon):   138592 kB' 'Active(file):    1374436 kB' 'Inactive(file):  3988880 kB' 'Unevictable:       29168 kB' 'Mlocked:           27632 kB' 'SwapTotal:             0 kB' 'SwapFree:              0 kB' 'Dirty:               252 kB' 'Writeback:             4 kB' 'AnonPages:        157256 kB' 'Mapped:            67276 kB' 'Shmem:              2596 kB' 'KReclaimable:     234040 kB' 'Slab:             302356 kB' 'SReclaimable:     234040 kB' 'SUnreclaim:        68316 kB' 'KernelStack:        4320 kB' 'PageTables:         3284 kB' 'NFS_Unstable:          0 kB' 'Bounce:                0 kB' 'WritebackTmp:          0 kB' 'CommitLimit:     5072908 kB' 'Committed_AS:     498400 kB' 'VmallocTotal:   34359738367 kB' 'VmallocUsed:       19596 kB' 'VmallocChunk:          0 kB' 'Percpu:             8256 kB' 'HardwareCorrupted:     0 kB' 'AnonHugePages:         0 kB' 'ShmemHugePages:        0 kB' 'ShmemPmdMapped:        0 kB' 'FileHugePages:         0 kB' 'FilePmdMapped:         0 kB' 'HugePages_Total:    1024' 'HugePages_Free:     1024' 'HugePages_Rsvd:        0' 'HugePages_Surp:        0' 'Hugepagesize:       2048 kB' 'Hugetlb:         2097152 kB' 'DirectMap4k:      141164 kB' 'DirectMap2M:     4052992 kB' 'DirectMap1G:    10485760 kB'
00:05:11.887    16:49:04	-- setup/common.sh@31 -- # read -r var val _
00:05:11.887    16:49:04	-- setup/common.sh@32 -- # ... (read/continue over every key from MemTotal through HugePages_Free until HugePages_Rsvd matches)
00:05:11.888    16:49:04	-- setup/common.sh@33 -- # echo 0
00:05:11.888    16:49:04	-- setup/common.sh@33 -- # return 0
00:05:11.888   16:49:04	-- setup/hugepages.sh@100 -- # resv=0
00:05:11.888  nr_hugepages=1024
00:05:11.888   16:49:04	-- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:05:11.888  resv_hugepages=0
00:05:11.888   16:49:04	-- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:05:11.888  surplus_hugepages=0
00:05:11.888   16:49:04	-- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:05:11.888  anon_hugepages=0
00:05:11.888   16:49:04	-- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:05:11.888   16:49:04	-- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:11.888   16:49:04	-- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
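The two guards above, made explicit: after the scans returned surp=0 and resv=0, the configured pool must match the request exactly (1024 == 1024 + 0 + 0), and HugePages_Total must equal nr_hugepages. A sketch using the earlier get_meminfo:

    nr_hugepages=1024 surp=0 resv=0
    total=$(get_meminfo HugePages_Total)    # 1024 in the meminfo dump above
    (( total == nr_hugepages + surp + resv )) || echo "unexpected surplus/reserved pages"
    (( total == nr_hugepages )) || echo "hugepage pool size mismatch"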
00:05:11.888    16:49:04	-- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:05:11.888    16:49:04	-- setup/common.sh@17 -- # local get=HugePages_Total
00:05:11.888    16:49:04	-- setup/common.sh@18 -- # local node=
00:05:11.888    16:49:04	-- setup/common.sh@19 -- # local var val
00:05:11.888    16:49:04	-- setup/common.sh@20 -- # local mem_f mem
00:05:11.888    16:49:04	-- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:11.889    16:49:04	-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:11.889    16:49:04	-- setup/common.sh@25 -- # [[ -n '' ]]
00:05:11.889    16:49:04	-- setup/common.sh@28 -- # mapfile -t mem
00:05:11.889    16:49:04	-- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:11.889    16:49:04	-- setup/common.sh@31 -- # IFS=': '
00:05:11.889    16:49:04	-- setup/common.sh@31 -- # read -r var val _
00:05:11.889     16:49:04	-- setup/common.sh@16 -- # printf '%s\n' 'MemTotal:       12242972 kB' 'MemFree:         4214384 kB' 'MemAvailable:    9490436 kB' 'Buffers:           39864 kB' 'Cached:          5335120 kB' 'SwapCached:            0 kB' 'Active:          1375492 kB' 'Inactive:        4127492 kB' 'Active(anon):       1056 kB' 'Inactive(anon):   138612 kB' 'Active(file):    1374436 kB' 'Inactive(file):  3988880 kB' 'Unevictable:       29168 kB' 'Mlocked:           27632 kB' 'SwapTotal:             0 kB' 'SwapFree:              0 kB' 'Dirty:               252 kB' 'Writeback:             4 kB' 'AnonPages:        157276 kB' 'Mapped:            67276 kB' 'Shmem:              2596 kB' 'KReclaimable:     234040 kB' 'Slab:             302356 kB' 'SReclaimable:     234040 kB' 'SUnreclaim:        68316 kB' 'KernelStack:        4388 kB' 'PageTables:         3544 kB' 'NFS_Unstable:          0 kB' 'Bounce:                0 kB' 'WritebackTmp:          0 kB' 'CommitLimit:     5072908 kB' 'Committed_AS:     498400 kB' 'VmallocTotal:   34359738367 kB' 'VmallocUsed:       19596 kB' 'VmallocChunk:          0 kB' 'Percpu:             8256 kB' 'HardwareCorrupted:     0 kB' 'AnonHugePages:         0 kB' 'ShmemHugePages:        0 kB' 'ShmemPmdMapped:        0 kB' 'FileHugePages:         0 kB' 'FilePmdMapped:         0 kB' 'HugePages_Total:    1024' 'HugePages_Free:     1024' 'HugePages_Rsvd:        0' 'HugePages_Surp:        0' 'Hugepagesize:       2048 kB' 'Hugetlb:         2097152 kB' 'DirectMap4k:      141164 kB' 'DirectMap2M:     4052992 kB' 'DirectMap1G:    10485760 kB'
00:05:11.889    16:49:04	-- setup/common.sh@32 -- # ... (read/continue over keys MemTotal through KernelStack while searching for HugePages_Total; the scan continues below)
00:05:11.889    16:49:04	-- setup/common.sh@31 -- # IFS=': '
00:05:11.889    16:49:04	-- setup/common.sh@31 -- # read -r var val _
00:05:11.889    16:49:04	-- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:11.889    16:49:04	-- setup/common.sh@32 -- # continue
00:05:11.889    16:49:04	-- setup/common.sh@31 -- # IFS=': '
00:05:11.889    16:49:04	-- setup/common.sh@31 -- # read -r var val _
00:05:11.889    16:49:04	-- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:11.889    16:49:04	-- setup/common.sh@32 -- # continue
00:05:11.889    16:49:04	-- setup/common.sh@31 -- # IFS=': '
00:05:11.889    16:49:04	-- setup/common.sh@31 -- # read -r var val _
00:05:11.889    16:49:04	-- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:11.889    16:49:04	-- setup/common.sh@32 -- # continue
00:05:11.889    16:49:04	-- setup/common.sh@31 -- # IFS=': '
00:05:11.889    16:49:04	-- setup/common.sh@31 -- # read -r var val _
00:05:11.889    16:49:04	-- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:11.890    16:49:04	-- setup/common.sh@32 -- # continue
00:05:11.890    16:49:04	-- setup/common.sh@31 -- # IFS=': '
00:05:11.890    16:49:04	-- setup/common.sh@31 -- # read -r var val _
00:05:11.890    16:49:04	-- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:11.890    16:49:04	-- setup/common.sh@32 -- # continue
00:05:11.890    16:49:04	-- setup/common.sh@31 -- # IFS=': '
00:05:11.890    16:49:04	-- setup/common.sh@31 -- # read -r var val _
00:05:11.890    16:49:04	-- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:11.890    16:49:04	-- setup/common.sh@32 -- # continue
00:05:11.890    16:49:04	-- setup/common.sh@31 -- # IFS=': '
00:05:11.890    16:49:04	-- setup/common.sh@31 -- # read -r var val _
00:05:11.890    16:49:04	-- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:11.890    16:49:04	-- setup/common.sh@32 -- # continue
00:05:11.890    16:49:04	-- setup/common.sh@31 -- # IFS=': '
00:05:11.890    16:49:04	-- setup/common.sh@31 -- # read -r var val _
00:05:11.890    16:49:04	-- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:11.890    16:49:04	-- setup/common.sh@32 -- # continue
00:05:11.890    16:49:04	-- setup/common.sh@31 -- # IFS=': '
00:05:11.890    16:49:04	-- setup/common.sh@31 -- # read -r var val _
00:05:11.890    16:49:04	-- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:11.890    16:49:04	-- setup/common.sh@32 -- # continue
00:05:11.890    16:49:04	-- setup/common.sh@31 -- # IFS=': '
00:05:11.890    16:49:04	-- setup/common.sh@31 -- # read -r var val _
00:05:11.890    16:49:04	-- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:11.890    16:49:04	-- setup/common.sh@32 -- # continue
00:05:11.890    16:49:04	-- setup/common.sh@31 -- # IFS=': '
00:05:11.890    16:49:04	-- setup/common.sh@31 -- # read -r var val _
00:05:11.890    16:49:04	-- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:11.890    16:49:04	-- setup/common.sh@32 -- # continue
00:05:11.890    16:49:04	-- setup/common.sh@31 -- # IFS=': '
00:05:11.890    16:49:04	-- setup/common.sh@31 -- # read -r var val _
00:05:11.890    16:49:04	-- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:11.890    16:49:04	-- setup/common.sh@32 -- # continue
00:05:11.890    16:49:04	-- setup/common.sh@31 -- # IFS=': '
00:05:11.890    16:49:04	-- setup/common.sh@31 -- # read -r var val _
00:05:11.890    16:49:04	-- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:11.890    16:49:04	-- setup/common.sh@32 -- # continue
00:05:11.890    16:49:04	-- setup/common.sh@31 -- # IFS=': '
00:05:11.890    16:49:04	-- setup/common.sh@31 -- # read -r var val _
00:05:11.890    16:49:04	-- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:11.890    16:49:04	-- setup/common.sh@32 -- # continue
00:05:11.890    16:49:04	-- setup/common.sh@31 -- # IFS=': '
00:05:11.890    16:49:04	-- setup/common.sh@31 -- # read -r var val _
00:05:11.890    16:49:04	-- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:11.890    16:49:04	-- setup/common.sh@32 -- # continue
00:05:11.890    16:49:04	-- setup/common.sh@31 -- # IFS=': '
00:05:11.890    16:49:04	-- setup/common.sh@31 -- # read -r var val _
00:05:11.890    16:49:04	-- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:11.890    16:49:04	-- setup/common.sh@32 -- # continue
00:05:11.890    16:49:04	-- setup/common.sh@31 -- # IFS=': '
00:05:11.890    16:49:04	-- setup/common.sh@31 -- # read -r var val _
00:05:11.890    16:49:04	-- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:11.890    16:49:04	-- setup/common.sh@33 -- # echo 1024
00:05:11.890    16:49:04	-- setup/common.sh@33 -- # return 0
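For readability: the xtrace above is the get_meminfo helper in setup/common.sh scanning a meminfo file one "Field: value" line at a time until the requested field matches (bash xtrace escapes every character of the comparison pattern, hence \H\u\g\e\P\a\g\e\s...). A minimal standalone reconstruction of that loop, built from the trace rather than copied from the SPDK source, and assuming single-digit node numbers:

    #!/usr/bin/env bash
    # Reconstruction of the traced get_meminfo loop: print one field's value
    # from /proc/meminfo, or from a node's meminfo file when a node is given.
    get_meminfo() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local line var val _
        while read -r line; do
            line=${line#Node [0-9] }   # per-node files prefix each line with "Node <N> "
            IFS=': ' read -r var val _ <<<"$line"
            if [[ $var == "$get" ]]; then   # the [[ <field> == \H\u\g\e... ]] test in the trace
                echo "$val"
                return 0
            fi
        done < "$mem_f"
        return 1
    }

    get_meminfo HugePages_Total       # prints 1024 on this test VM
    get_meminfo HugePages_Surp 0      # per-node variant, prints 0 here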
00:05:11.890   16:49:04	-- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:11.890   16:49:04	-- setup/hugepages.sh@112 -- # get_nodes
00:05:11.890   16:49:04	-- setup/hugepages.sh@27 -- # local node
00:05:11.890   16:49:04	-- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:11.890   16:49:04	-- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:05:11.890   16:49:04	-- setup/hugepages.sh@32 -- # no_nodes=1
00:05:11.890   16:49:04	-- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
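The get_nodes step enumerates NUMA nodes from sysfs and records each node's allocated hugepages; on this single-node VM it finds node0 with 1024 pages (no_nodes=1). A sketch of that enumeration; the per-node nr_hugepages path used here is the standard sysfs location for 2048 kB pages, assumed rather than taken from the trace:

    #!/usr/bin/env bash
    shopt -s nullglob
    # Enumerate NUMA nodes and their allocated 2 MiB hugepages, mirroring
    # the get_nodes loop above (nodes_sys[0]=1024, no_nodes=1 on this VM).
    declare -A nodes_sys
    for node in /sys/devices/system/node/node[0-9]*; do
        n=${node##*node}
        nodes_sys[$n]=$(cat "$node/hugepages/hugepages-2048kB/nr_hugepages")
    done
    echo "no_nodes=${#nodes_sys[@]}"
    for n in "${!nodes_sys[@]}"; do echo "node$n: ${nodes_sys[$n]}"; done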
00:05:11.890   16:49:04	-- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:11.890   16:49:04	-- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:05:11.890    16:49:04	-- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:05:11.890    16:49:04	-- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:11.890    16:49:04	-- setup/common.sh@18 -- # local node=0
00:05:11.890    16:49:04	-- setup/common.sh@19 -- # local var val
00:05:11.890    16:49:04	-- setup/common.sh@20 -- # local mem_f mem
00:05:11.890    16:49:04	-- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:11.890    16:49:04	-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:05:11.890    16:49:04	-- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:05:11.890    16:49:04	-- setup/common.sh@28 -- # mapfile -t mem
00:05:11.890    16:49:04	-- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:11.890    16:49:04	-- setup/common.sh@31 -- # IFS=': '
00:05:11.890     16:49:04	-- setup/common.sh@16 -- # printf '%s\n' 'MemTotal:       12242972 kB' 'MemFree:         4214384 kB' 'MemUsed:         8028588 kB' 'SwapCached:            0 kB' 'Active:          1375492 kB' 'Inactive:        4127456 kB' 'Active(anon):       1056 kB' 'Inactive(anon):   138576 kB' 'Active(file):    1374436 kB' 'Inactive(file):  3988880 kB' 'Unevictable:       29168 kB' 'Mlocked:           27632 kB' 'Dirty:               252 kB' 'Writeback:             4 kB' 'FilePages:       5374984 kB' 'Mapped:            67276 kB' 'AnonPages:        157240 kB' 'Shmem:              2596 kB' 'KernelStack:        4440 kB' 'PageTables:         3500 kB' 'NFS_Unstable:          0 kB' 'Bounce:                0 kB' 'WritebackTmp:          0 kB' 'KReclaimable:     234040 kB' 'Slab:             302356 kB' 'SReclaimable:     234040 kB' 'SUnreclaim:        68316 kB' 'AnonHugePages:         0 kB' 'ShmemHugePages:        0 kB' 'ShmemPmdMapped:        0 kB' 'FileHugePages:        0 kB' 'FilePmdMapped:        0 kB' 'HugePages_Total:  1024' 'HugePages_Free:   1024' 'HugePages_Surp:      0'
00:05:11.890    16:49:04	-- setup/common.sh@31 -- # read -r var val _
00:05:11.890    16:49:04	-- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:11.890    16:49:04	-- setup/common.sh@32 -- # continue
00:05:11.890  [xtrace repeats the same read/compare/continue cycle for each remaining node0 meminfo field, MemFree through HugePages_Free, until HugePages_Surp matches below]
00:05:11.891    16:49:04	-- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:11.891    16:49:04	-- setup/common.sh@33 -- # echo 0
00:05:11.891    16:49:04	-- setup/common.sh@33 -- # return 0
00:05:11.891   16:49:04	-- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:11.891   16:49:04	-- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:11.891   16:49:04	-- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:11.891   16:49:04	-- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:11.891  node0=1024 expecting 1024
00:05:11.891   16:49:04	-- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:05:11.891   16:49:04	-- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
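The block above is the tail of the verification: for each node, the expected count (nr_hugepages plus reserved plus the per-node surplus fetched via get_meminfo) is compared against what sysfs reported, and the [[ 1024 == \1\0\2\4 ]] test passes. Condensed into a self-contained check with this run's values:

    #!/usr/bin/env bash
    # Condensed form of the per-node verification traced above.
    declare -A nodes_test=( [0]=1024 )   # expected: nr_hugepages + resv + surp
    declare -A nodes_sys=( [0]=1024 )    # actual: read from sysfs by get_nodes
    for node in "${!nodes_test[@]}"; do
        echo "node$node=${nodes_sys[$node]} expecting ${nodes_test[$node]}"
        [[ ${nodes_sys[$node]} == "${nodes_test[$node]}" ]] || exit 1
    done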
00:05:11.891   16:49:04	-- setup/hugepages.sh@202 -- # CLEAR_HUGE=no
00:05:11.891   16:49:04	-- setup/hugepages.sh@202 -- # NRHUGE=512
00:05:11.891   16:49:04	-- setup/hugepages.sh@202 -- # setup output
00:05:11.891   16:49:04	-- setup/common.sh@9 -- # [[ output == output ]]
00:05:11.891   16:49:04	-- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:05:12.462  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev
00:05:12.462  0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:12.462  INFO: Requested 512 hugepages but 1024 already allocated on node0
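With CLEAR_HUGE=no and NRHUGE=512, setup.sh requests 512 hugepages but does not shrink an existing reservation, hence the INFO line: node0 already holds 1024 pages, which covers the request. A hypothetical condensation of that grow-only decision (the sysfs path is the standard one; the logic is inferred from the message, not copied from setup.sh):

    #!/usr/bin/env bash
    # Grow-only hugepage allocation: keep an existing reservation that
    # already satisfies the request (inferred from the INFO line above).
    NRHUGE=${NRHUGE:-512} node=0
    nr=/sys/devices/system/node/node$node/hugepages/hugepages-2048kB/nr_hugepages
    current=$(cat "$nr")
    if (( current >= NRHUGE )); then
        echo "INFO: Requested $NRHUGE hugepages but $current already allocated on node$node"
    else
        echo "$NRHUGE" > "$nr"   # requires root
    fi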
00:05:12.462   16:49:05	-- setup/hugepages.sh@204 -- # verify_nr_hugepages
00:05:12.462   16:49:05	-- setup/hugepages.sh@89 -- # local node
00:05:12.462   16:49:05	-- setup/hugepages.sh@90 -- # local sorted_t
00:05:12.462   16:49:05	-- setup/hugepages.sh@91 -- # local sorted_s
00:05:12.462   16:49:05	-- setup/hugepages.sh@92 -- # local surp
00:05:12.462   16:49:05	-- setup/hugepages.sh@93 -- # local resv
00:05:12.462   16:49:05	-- setup/hugepages.sh@94 -- # local anon
00:05:12.462   16:49:05	-- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:05:12.462    16:49:05	-- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:12.462    16:49:05	-- setup/common.sh@17 -- # local get=AnonHugePages
00:05:12.462    16:49:05	-- setup/common.sh@18 -- # local node=
00:05:12.462    16:49:05	-- setup/common.sh@19 -- # local var val
00:05:12.462    16:49:05	-- setup/common.sh@20 -- # local mem_f mem
00:05:12.462    16:49:05	-- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:12.462    16:49:05	-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:12.462    16:49:05	-- setup/common.sh@25 -- # [[ -n '' ]]
00:05:12.462    16:49:05	-- setup/common.sh@28 -- # mapfile -t mem
00:05:12.462    16:49:05	-- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:12.462    16:49:05	-- setup/common.sh@31 -- # IFS=': '
00:05:12.462    16:49:05	-- setup/common.sh@31 -- # read -r var val _
00:05:12.463     16:49:05	-- setup/common.sh@16 -- # printf '%s\n' 'MemTotal:       12242972 kB' 'MemFree:         4211496 kB' 'MemAvailable:    9487556 kB' 'Buffers:           39864 kB' 'Cached:          5335120 kB' 'SwapCached:            0 kB' 'Active:          1375500 kB' 'Inactive:        4127904 kB' 'Active(anon):       1056 kB' 'Inactive(anon):   139024 kB' 'Active(file):    1374444 kB' 'Inactive(file):  3988880 kB' 'Unevictable:       29168 kB' 'Mlocked:           27632 kB' 'SwapTotal:             0 kB' 'SwapFree:              0 kB' 'Dirty:               252 kB' 'Writeback:             4 kB' 'AnonPages:        157836 kB' 'Mapped:            67556 kB' 'Shmem:              2596 kB' 'KReclaimable:     234040 kB' 'Slab:             302276 kB' 'SReclaimable:     234040 kB' 'SUnreclaim:        68236 kB' 'KernelStack:        4436 kB' 'PageTables:         3724 kB' 'NFS_Unstable:          0 kB' 'Bounce:                0 kB' 'WritebackTmp:          0 kB' 'CommitLimit:     5072908 kB' 'Committed_AS:     498400 kB' 'VmallocTotal:   34359738367 kB' 'VmallocUsed:       19612 kB' 'VmallocChunk:          0 kB' 'Percpu:             8256 kB' 'HardwareCorrupted:     0 kB' 'AnonHugePages:         0 kB' 'ShmemHugePages:        0 kB' 'ShmemPmdMapped:        0 kB' 'FileHugePages:         0 kB' 'FilePmdMapped:         0 kB' 'HugePages_Total:    1024' 'HugePages_Free:     1024' 'HugePages_Rsvd:        0' 'HugePages_Surp:        0' 'Hugepagesize:       2048 kB' 'Hugetlb:         2097152 kB' 'DirectMap4k:      141164 kB' 'DirectMap2M:     4052992 kB' 'DirectMap1G:    10485760 kB'
00:05:12.463    16:49:05	-- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:12.463    16:49:05	-- setup/common.sh@32 -- # continue
00:05:12.463  [xtrace repeats the same read/compare/continue cycle for each remaining /proc/meminfo field, MemFree through HardwareCorrupted, until AnonHugePages matches below]
00:05:12.464    16:49:05	-- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:12.464    16:49:05	-- setup/common.sh@33 -- # echo 0
00:05:12.464    16:49:05	-- setup/common.sh@33 -- # return 0
00:05:12.464   16:49:05	-- setup/hugepages.sh@97 -- # anon=0
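anon=0 closes the transparent-hugepage part of verify_nr_hugepages: the earlier [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] test checks the bracketed selection in sysfs, and since THP is set to madvise rather than disabled, AnonHugePages is read from /proc/meminfo and comes back 0 kB. The same guard as a sketch:

    #!/usr/bin/env bash
    # THP guard sketch: only look up AnonHugePages when transparent
    # hugepages are not fully disabled ("[never]" selected in sysfs).
    anon=0
    thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled)  # e.g. "always [madvise] never"
    if [[ $thp != *"[never]"* ]]; then
        anon=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)
    fi
    echo "anon=$anon"   # 0 on this run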
00:05:12.464    16:49:05	-- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:12.464    16:49:05	-- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:12.464    16:49:05	-- setup/common.sh@18 -- # local node=
00:05:12.464    16:49:05	-- setup/common.sh@19 -- # local var val
00:05:12.464    16:49:05	-- setup/common.sh@20 -- # local mem_f mem
00:05:12.464    16:49:05	-- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:12.464    16:49:05	-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:12.464    16:49:05	-- setup/common.sh@25 -- # [[ -n '' ]]
00:05:12.464    16:49:05	-- setup/common.sh@28 -- # mapfile -t mem
00:05:12.464    16:49:05	-- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:12.464    16:49:05	-- setup/common.sh@31 -- # IFS=': '
00:05:12.464    16:49:05	-- setup/common.sh@31 -- # read -r var val _
00:05:12.464     16:49:05	-- setup/common.sh@16 -- # printf '%s\n' 'MemTotal:       12242972 kB' 'MemFree:         4211680 kB' 'MemAvailable:    9487740 kB' 'Buffers:           39864 kB' 'Cached:          5335120 kB' 'SwapCached:            0 kB' 'Active:          1375500 kB' 'Inactive:        4127868 kB' 'Active(anon):       1056 kB' 'Inactive(anon):   138988 kB' 'Active(file):    1374444 kB' 'Inactive(file):  3988880 kB' 'Unevictable:       29168 kB' 'Mlocked:           27632 kB' 'SwapTotal:             0 kB' 'SwapFree:              0 kB' 'Dirty:               252 kB' 'Writeback:             4 kB' 'AnonPages:        157776 kB' 'Mapped:            67336 kB' 'Shmem:              2596 kB' 'KReclaimable:     234040 kB' 'Slab:             302164 kB' 'SReclaimable:     234040 kB' 'SUnreclaim:        68124 kB' 'KernelStack:        4452 kB' 'PageTables:         3864 kB' 'NFS_Unstable:          0 kB' 'Bounce:                0 kB' 'WritebackTmp:          0 kB' 'CommitLimit:     5072908 kB' 'Committed_AS:     498400 kB' 'VmallocTotal:   34359738367 kB' 'VmallocUsed:       19612 kB' 'VmallocChunk:          0 kB' 'Percpu:             8256 kB' 'HardwareCorrupted:     0 kB' 'AnonHugePages:         0 kB' 'ShmemHugePages:        0 kB' 'ShmemPmdMapped:        0 kB' 'FileHugePages:         0 kB' 'FilePmdMapped:         0 kB' 'HugePages_Total:    1024' 'HugePages_Free:     1024' 'HugePages_Rsvd:        0' 'HugePages_Surp:        0' 'Hugepagesize:       2048 kB' 'Hugetlb:         2097152 kB' 'DirectMap4k:      141164 kB' 'DirectMap2M:     4052992 kB' 'DirectMap1G:    10485760 kB'
00:05:12.464    16:49:05	-- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:12.464    16:49:05	-- setup/common.sh@32 -- # continue
00:05:12.464  [xtrace repeats the same read/compare/continue cycle for each remaining /proc/meminfo field, MemFree through HugePages_Rsvd, until HugePages_Surp matches below]
00:05:12.465    16:49:05	-- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:12.465    16:49:05	-- setup/common.sh@33 -- # echo 0
00:05:12.465    16:49:05	-- setup/common.sh@33 -- # return 0
00:05:12.465   16:49:05	-- setup/hugepages.sh@99 -- # surp=0
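surp=0 comes from HugePages_Surp, and the next lookup fetches HugePages_Rsvd: surplus pages are allocated beyond the configured pool via overcommit, while reserved pages are promised to mappings but not yet faulted in. The hugepages.sh@110 check earlier, (( 1024 == nr_hugepages + surp + resv )), balances only when both are included; with this run's numbers:

    #!/usr/bin/env bash
    # The accounting identity behind the hugepages.sh@110 check, using the
    # values this run reports in /proc/meminfo.
    total=1024         # HugePages_Total
    surp=0             # HugePages_Surp: overcommitted beyond the static pool
    resv=0             # HugePages_Rsvd: committed to mappings, not yet faulted
    nr_hugepages=1024
    (( total == nr_hugepages + surp + resv )) && echo "hugepage pool consistent"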
00:05:12.465    16:49:05	-- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:12.465    16:49:05	-- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:05:12.465    16:49:05	-- setup/common.sh@18 -- # local node=
00:05:12.465    16:49:05	-- setup/common.sh@19 -- # local var val
00:05:12.465    16:49:05	-- setup/common.sh@20 -- # local mem_f mem
00:05:12.465    16:49:05	-- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:12.465    16:49:05	-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:12.465    16:49:05	-- setup/common.sh@25 -- # [[ -n '' ]]
00:05:12.465    16:49:05	-- setup/common.sh@28 -- # mapfile -t mem
00:05:12.465    16:49:05	-- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:12.465    16:49:05	-- setup/common.sh@31 -- # IFS=': '
00:05:12.465    16:49:05	-- setup/common.sh@31 -- # read -r var val _
00:05:12.465     16:49:05	-- setup/common.sh@16 -- # printf '%s\n' 'MemTotal:       12242972 kB' 'MemFree:         4211940 kB' 'MemAvailable:    9488000 kB' 'Buffers:           39864 kB' 'Cached:          5335120 kB' 'SwapCached:            0 kB' 'Active:          1375500 kB' 'Inactive:        4127868 kB' 'Active(anon):       1056 kB' 'Inactive(anon):   138988 kB' 'Active(file):    1374444 kB' 'Inactive(file):  3988880 kB' 'Unevictable:       29168 kB' 'Mlocked:           27632 kB' 'SwapTotal:             0 kB' 'SwapFree:              0 kB' 'Dirty:               252 kB' 'Writeback:             4 kB' 'AnonPages:        157776 kB' 'Mapped:            67336 kB' 'Shmem:              2596 kB' 'KReclaimable:     234040 kB' 'Slab:             302164 kB' 'SReclaimable:     234040 kB' 'SUnreclaim:        68124 kB' 'KernelStack:        4452 kB' 'PageTables:         3864 kB' 'NFS_Unstable:          0 kB' 'Bounce:                0 kB' 'WritebackTmp:          0 kB' 'CommitLimit:     5072908 kB' 'Committed_AS:     498400 kB' 'VmallocTotal:   34359738367 kB' 'VmallocUsed:       19612 kB' 'VmallocChunk:          0 kB' 'Percpu:             8256 kB' 'HardwareCorrupted:     0 kB' 'AnonHugePages:         0 kB' 'ShmemHugePages:        0 kB' 'ShmemPmdMapped:        0 kB' 'FileHugePages:         0 kB' 'FilePmdMapped:         0 kB' 'HugePages_Total:    1024' 'HugePages_Free:     1024' 'HugePages_Rsvd:        0' 'HugePages_Surp:        0' 'Hugepagesize:       2048 kB' 'Hugetlb:         2097152 kB' 'DirectMap4k:      141164 kB' 'DirectMap2M:     4052992 kB' 'DirectMap1G:    10485760 kB'
00:05:12.465    16:49:05	-- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:12.465    16:49:05	-- setup/common.sh@32 -- # continue
00:05:12.465    16:49:05	-- setup/common.sh@31 -- # IFS=': '
00:05:12.465    16:49:05	-- setup/common.sh@31 -- # read -r var val _
[the @31/@32 compare-and-continue pair repeats for every remaining /proc/meminfo key (MemFree through HugePages_Free) until the requested HugePages_Rsvd key matches below]
00:05:12.466    16:49:05	-- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:12.466    16:49:05	-- setup/common.sh@33 -- # echo 0
00:05:12.466    16:49:05	-- setup/common.sh@33 -- # return 0
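The long run of compare-and-continue lines above is a single loop in setup/common.sh@28-33: it loads a meminfo file, strips any per-node "Node N " prefix, and walks key by key until the requested key matches. A minimal bash sketch of that behavior, inferred from the trace rather than copied from the SPDK source:

shopt -s extglob   # for the +([0-9]) pattern below

get_meminfo() {
    local get=$1 node=$2
    local mem_f=/proc/meminfo line var val _
    # With a node argument, prefer the per-node meminfo under /sys.
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    while IFS= read -r line; do
        line=${line#Node +([0-9]) }        # per-node lines carry a "Node N " prefix
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue   # skip every other key, as traced above
        echo "$val"                        # e.g. 0 for HugePages_Rsvd here
        return 0
    done < "$mem_f"
    return 1
}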
00:05:12.466   16:49:05	-- setup/hugepages.sh@100 -- # resv=0
00:05:12.466  nr_hugepages=1024
00:05:12.466   16:49:05	-- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:05:12.466   16:49:05	-- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:05:12.466  resv_hugepages=0
00:05:12.466  surplus_hugepages=0
00:05:12.466   16:49:05	-- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:05:12.466  anon_hugepages=0
00:05:12.466   16:49:05	-- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:05:12.466   16:49:05	-- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:12.466   16:49:05	-- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:05:12.466    16:49:05	-- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:05:12.466    16:49:05	-- setup/common.sh@17 -- # local get=HugePages_Total
00:05:12.466    16:49:05	-- setup/common.sh@18 -- # local node=
00:05:12.466    16:49:05	-- setup/common.sh@19 -- # local var val
00:05:12.466    16:49:05	-- setup/common.sh@20 -- # local mem_f mem
00:05:12.466    16:49:05	-- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:12.466    16:49:05	-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:12.466    16:49:05	-- setup/common.sh@25 -- # [[ -n '' ]]
00:05:12.466    16:49:05	-- setup/common.sh@28 -- # mapfile -t mem
00:05:12.466    16:49:05	-- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:12.466    16:49:05	-- setup/common.sh@31 -- # IFS=': '
00:05:12.466    16:49:05	-- setup/common.sh@31 -- # read -r var val _
00:05:12.466     16:49:05	-- setup/common.sh@16 -- # printf '%s\n' 'MemTotal:       12242972 kB' 'MemFree:         4212400 kB' 'MemAvailable:    9488460 kB' 'Buffers:           39864 kB' 'Cached:          5335120 kB' 'SwapCached:            0 kB' 'Active:          1375500 kB' 'Inactive:        4127816 kB' 'Active(anon):       1056 kB' 'Inactive(anon):   138936 kB' 'Active(file):    1374444 kB' 'Inactive(file):  3988880 kB' 'Unevictable:       29168 kB' 'Mlocked:           27632 kB' 'SwapTotal:             0 kB' 'SwapFree:              0 kB' 'Dirty:               252 kB' 'Writeback:             0 kB' 'AnonPages:        157348 kB' 'Mapped:            67304 kB' 'Shmem:              2596 kB' 'KReclaimable:     234040 kB' 'Slab:             302152 kB' 'SReclaimable:     234040 kB' 'SUnreclaim:        68112 kB' 'KernelStack:        4420 kB' 'PageTables:         3356 kB' 'NFS_Unstable:          0 kB' 'Bounce:                0 kB' 'WritebackTmp:          0 kB' 'CommitLimit:     5072908 kB' 'Committed_AS:     498400 kB' 'VmallocTotal:   34359738367 kB' 'VmallocUsed:       19644 kB' 'VmallocChunk:          0 kB' 'Percpu:             8256 kB' 'HardwareCorrupted:     0 kB' 'AnonHugePages:         0 kB' 'ShmemHugePages:        0 kB' 'ShmemPmdMapped:        0 kB' 'FileHugePages:         0 kB' 'FilePmdMapped:         0 kB' 'HugePages_Total:    1024' 'HugePages_Free:     1024' 'HugePages_Rsvd:        0' 'HugePages_Surp:        0' 'Hugepagesize:       2048 kB' 'Hugetlb:         2097152 kB' 'DirectMap4k:      141164 kB' 'DirectMap2M:     4052992 kB' 'DirectMap1G:    10485760 kB'
00:05:12.466    16:49:05	-- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:12.466    16:49:05	-- setup/common.sh@32 -- # continue
00:05:12.466    16:49:05	-- setup/common.sh@31 -- # IFS=': '
00:05:12.466    16:49:05	-- setup/common.sh@31 -- # read -r var val _
[the same compare-and-continue pair repeats for every remaining /proc/meminfo key (MemFree through FilePmdMapped) until HugePages_Total matches below]
00:05:12.468    16:49:05	-- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:12.468    16:49:05	-- setup/common.sh@33 -- # echo 1024
00:05:12.468    16:49:05	-- setup/common.sh@33 -- # return 0
00:05:12.468   16:49:05	-- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:12.468   16:49:05	-- setup/hugepages.sh@112 -- # get_nodes
00:05:12.468   16:49:05	-- setup/hugepages.sh@27 -- # local node
00:05:12.468   16:49:05	-- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:12.468   16:49:05	-- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:05:12.468   16:49:05	-- setup/hugepages.sh@32 -- # no_nodes=1
00:05:12.468   16:49:05	-- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:05:12.468   16:49:05	-- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:12.468   16:49:05	-- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:05:12.468    16:49:05	-- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:05:12.468    16:49:05	-- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:12.468    16:49:05	-- setup/common.sh@18 -- # local node=0
00:05:12.468    16:49:05	-- setup/common.sh@19 -- # local var val
00:05:12.468    16:49:05	-- setup/common.sh@20 -- # local mem_f mem
00:05:12.468    16:49:05	-- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:12.468    16:49:05	-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:05:12.468    16:49:05	-- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:05:12.468    16:49:05	-- setup/common.sh@28 -- # mapfile -t mem
00:05:12.468    16:49:05	-- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:12.468    16:49:05	-- setup/common.sh@31 -- # IFS=': '
00:05:12.468     16:49:05	-- setup/common.sh@16 -- # printf '%s\n' 'MemTotal:       12242972 kB' 'MemFree:         4212400 kB' 'MemUsed:         8030572 kB' 'SwapCached:            0 kB' 'Active:          1375500 kB' 'Inactive:        4127544 kB' 'Active(anon):       1056 kB' 'Inactive(anon):   138664 kB' 'Active(file):    1374444 kB' 'Inactive(file):  3988880 kB' 'Unevictable:       29168 kB' 'Mlocked:           27632 kB' 'Dirty:               252 kB' 'Writeback:             0 kB' 'FilePages:       5374984 kB' 'Mapped:            67304 kB' 'AnonPages:        157076 kB' 'Shmem:              2596 kB' 'KernelStack:        4404 kB' 'PageTables:         3572 kB' 'NFS_Unstable:          0 kB' 'Bounce:                0 kB' 'WritebackTmp:          0 kB' 'KReclaimable:     234040 kB' 'Slab:             302152 kB' 'SReclaimable:     234040 kB' 'SUnreclaim:        68112 kB' 'AnonHugePages:         0 kB' 'ShmemHugePages:        0 kB' 'ShmemPmdMapped:        0 kB' 'FileHugePages:        0 kB' 'FilePmdMapped:        0 kB' 'HugePages_Total:  1024' 'HugePages_Free:   1024' 'HugePages_Surp:      0'
00:05:12.468    16:49:05	-- setup/common.sh@31 -- # read -r var val _
00:05:12.468    16:49:05	-- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:12.468    16:49:05	-- setup/common.sh@32 -- # continue
00:05:12.468    16:49:05	-- setup/common.sh@31 -- # IFS=': '
00:05:12.468    16:49:05	-- setup/common.sh@31 -- # read -r var val _
[the same compare-and-continue pair repeats for every remaining node0 meminfo key (MemFree through HugePages_Free) until HugePages_Surp matches below]
00:05:12.469    16:49:05	-- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:12.469    16:49:05	-- setup/common.sh@33 -- # echo 0
00:05:12.469    16:49:05	-- setup/common.sh@33 -- # return 0
00:05:12.469   16:49:05	-- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:12.469   16:49:05	-- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:12.469   16:49:05	-- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:12.469   16:49:05	-- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:12.469  node0=1024 expecting 1024
00:05:12.469   16:49:05	-- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:05:12.469   16:49:05	-- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
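The no_shrink_alloc checks above reduce to two balance conditions: globally, HugePages_Total must equal the requested count plus surplus plus reserved pages, and each NUMA node's pool must then account for its share. A loose single-node sketch, reusing the get_meminfo sketch earlier (the exact bookkeeping in hugepages.sh differs; the per-node arithmetic here is an assumption):

verify_hugepages() {
    local expected=$1 resv surp node
    resv=$(get_meminfo HugePages_Rsvd)
    surp=$(get_meminfo HugePages_Surp)
    # Global pool must balance before per-node checks make sense.
    (( $(get_meminfo HugePages_Total) == expected + surp + resv )) || return 1
    for node in /sys/devices/system/node/node[0-9]*; do
        node=${node##*node}
        # On this one-node VM each node is expected to hold the full
        # count plus reserved and node-local surplus pages (all 0 above).
        local want=$(( expected + resv + $(get_meminfo HugePages_Surp "$node") ))
        echo "node$node=$(get_meminfo HugePages_Total "$node") expecting $want"
    done
}

# verify_hugepages 1024   -> "node0=1024 expecting 1024", matching the trace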
00:05:12.469  
00:05:12.469  real	0m1.589s
00:05:12.469  user	0m0.602s
00:05:12.469  sys	0m1.088s
00:05:12.469   16:49:05	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:05:12.469   16:49:05	-- common/autotest_common.sh@10 -- # set +x
00:05:12.469  ************************************
00:05:12.469  END TEST no_shrink_alloc
00:05:12.469  ************************************
00:05:12.728   16:49:05	-- setup/hugepages.sh@217 -- # clear_hp
00:05:12.728   16:49:05	-- setup/hugepages.sh@37 -- # local node hp
00:05:12.728   16:49:05	-- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:05:12.728   16:49:05	-- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:05:12.728   16:49:05	-- setup/hugepages.sh@41 -- # echo 0
00:05:12.728   16:49:05	-- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:05:12.728   16:49:05	-- setup/hugepages.sh@41 -- # echo 0
00:05:12.728   16:49:05	-- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes
00:05:12.728   16:49:05	-- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes
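clear_hp, traced above, simply zeroes every per-node hugepage pool and records that it did so. A hedged sketch (only the sysfs paths and the CLEAR_HUGE export are taken from the trace):

clear_hp() {
    local node hp
    for node in /sys/devices/system/node/node[0-9]*; do
        # One hugepages-* directory per supported page size (2 MB, 1 GB).
        for hp in "$node"/hugepages/hugepages-*; do
            echo 0 > "$hp/nr_hugepages"
        done
    done
    export CLEAR_HUGE=yes
}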
00:05:12.728  ************************************
00:05:12.728  END TEST hugepages
00:05:12.728  ************************************
00:05:12.728  
00:05:12.728  real	0m8.145s
00:05:12.728  user	0m2.585s
00:05:12.728  sys	0m5.840s
00:05:12.728   16:49:05	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:05:12.728   16:49:05	-- common/autotest_common.sh@10 -- # set +x
00:05:12.728   16:49:05	-- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh
00:05:12.728   16:49:05	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:05:12.728   16:49:05	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:05:12.728   16:49:05	-- common/autotest_common.sh@10 -- # set +x
00:05:12.728  ************************************
00:05:12.728  START TEST driver
00:05:12.728  ************************************
00:05:12.728   16:49:05	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh
00:05:12.728  * Looking for test storage...
00:05:12.728  * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup
00:05:12.728     16:49:05	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:05:12.728      16:49:05	-- common/autotest_common.sh@1690 -- # lcov --version
00:05:12.728      16:49:05	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:05:12.987     16:49:05	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:05:12.987     16:49:05	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:05:12.987     16:49:05	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:05:12.987     16:49:05	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:05:12.987     16:49:05	-- scripts/common.sh@335 -- # IFS=.-:
00:05:12.987     16:49:05	-- scripts/common.sh@335 -- # read -ra ver1
00:05:12.987     16:49:05	-- scripts/common.sh@336 -- # IFS=.-:
00:05:12.987     16:49:05	-- scripts/common.sh@336 -- # read -ra ver2
00:05:12.987     16:49:05	-- scripts/common.sh@337 -- # local 'op=<'
00:05:12.987     16:49:05	-- scripts/common.sh@339 -- # ver1_l=2
00:05:12.987     16:49:05	-- scripts/common.sh@340 -- # ver2_l=1
00:05:12.987     16:49:05	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:05:12.987     16:49:05	-- scripts/common.sh@343 -- # case "$op" in
00:05:12.987     16:49:05	-- scripts/common.sh@344 -- # : 1
00:05:12.987     16:49:05	-- scripts/common.sh@363 -- # (( v = 0 ))
00:05:12.987     16:49:05	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:05:12.987      16:49:05	-- scripts/common.sh@364 -- # decimal 1
00:05:12.987      16:49:05	-- scripts/common.sh@352 -- # local d=1
00:05:12.987      16:49:05	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:05:12.987      16:49:05	-- scripts/common.sh@354 -- # echo 1
00:05:12.987     16:49:05	-- scripts/common.sh@364 -- # ver1[v]=1
00:05:12.987      16:49:05	-- scripts/common.sh@365 -- # decimal 2
00:05:12.987      16:49:05	-- scripts/common.sh@352 -- # local d=2
00:05:12.987      16:49:05	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:05:12.987      16:49:05	-- scripts/common.sh@354 -- # echo 2
00:05:12.987     16:49:05	-- scripts/common.sh@365 -- # ver2[v]=2
00:05:12.987     16:49:05	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:05:12.987     16:49:05	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:05:12.987     16:49:05	-- scripts/common.sh@367 -- # return 0
00:05:12.987     16:49:05	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:05:12.987     16:49:05	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:05:12.987  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:12.987  		--rc genhtml_branch_coverage=1
00:05:12.987  		--rc genhtml_function_coverage=1
00:05:12.987  		--rc genhtml_legend=1
00:05:12.987  		--rc geninfo_all_blocks=1
00:05:12.987  		--rc geninfo_unexecuted_blocks=1
00:05:12.987  		
00:05:12.987  		'
00:05:12.987     16:49:05	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:05:12.987  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:12.987  		--rc genhtml_branch_coverage=1
00:05:12.987  		--rc genhtml_function_coverage=1
00:05:12.987  		--rc genhtml_legend=1
00:05:12.987  		--rc geninfo_all_blocks=1
00:05:12.987  		--rc geninfo_unexecuted_blocks=1
00:05:12.987  		
00:05:12.987  		'
00:05:12.987     16:49:05	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:05:12.987  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:12.987  		--rc genhtml_branch_coverage=1
00:05:12.987  		--rc genhtml_function_coverage=1
00:05:12.987  		--rc genhtml_legend=1
00:05:12.987  		--rc geninfo_all_blocks=1
00:05:12.987  		--rc geninfo_unexecuted_blocks=1
00:05:12.987  		
00:05:12.987  		'
00:05:12.987     16:49:05	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:05:12.987  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:12.987  		--rc genhtml_branch_coverage=1
00:05:12.987  		--rc genhtml_function_coverage=1
00:05:12.987  		--rc genhtml_legend=1
00:05:12.987  		--rc geninfo_all_blocks=1
00:05:12.987  		--rc geninfo_unexecuted_blocks=1
00:05:12.987  		
00:05:12.987  		'
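The version gate traced above ("lt 1.15 2") splits each version string on the characters ".-:" and compares component by component. A sketch assuming the semantics visible in the trace (treating missing components as 0 is an assumption; the real script also validates digits via its decimal helper):

cmp_versions() {
    local -a ver1 ver2
    local op=$2 v a b lt=0 gt=0
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$3"
    local n=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < n; v++ )); do
        a=${ver1[v]:-0} b=${ver2[v]:-0}
        if   (( a > b )); then gt=1; break
        elif (( a < b )); then lt=1; break
        fi
    done
    case "$op" in
        '<') (( lt )) ;;
        '>') (( gt )) ;;
        *)   return 1 ;;
    esac
}

lt() { cmp_versions "$1" '<' "$2"; }   # lt 1.15 2 succeeds, as in the trace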
00:05:12.987   16:49:05	-- setup/driver.sh@68 -- # setup reset
00:05:12.987   16:49:05	-- setup/common.sh@9 -- # [[ reset == output ]]
00:05:12.987   16:49:05	-- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:05:13.554   16:49:06	-- setup/driver.sh@69 -- # run_test guess_driver guess_driver
00:05:13.554   16:49:06	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:05:13.554   16:49:06	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:05:13.554   16:49:06	-- common/autotest_common.sh@10 -- # set +x
00:05:13.554  ************************************
00:05:13.554  START TEST guess_driver
00:05:13.554  ************************************
00:05:13.554   16:49:06	-- common/autotest_common.sh@1114 -- # guess_driver
00:05:13.554   16:49:06	-- setup/driver.sh@46 -- # local driver setup_driver marker
00:05:13.554   16:49:06	-- setup/driver.sh@47 -- # local fail=0
00:05:13.554    16:49:06	-- setup/driver.sh@49 -- # pick_driver
00:05:13.554    16:49:06	-- setup/driver.sh@36 -- # vfio
00:05:13.554    16:49:06	-- setup/driver.sh@21 -- # local iommu_groups
00:05:13.554    16:49:06	-- setup/driver.sh@22 -- # local unsafe_vfio
00:05:13.554    16:49:06	-- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]]
00:05:13.554    16:49:06	-- setup/driver.sh@25 -- # unsafe_vfio=N
00:05:13.554    16:49:06	-- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*)
00:05:13.554    16:49:06	-- setup/driver.sh@29 -- # (( 0 > 0 ))
00:05:13.554    16:49:06	-- setup/driver.sh@29 -- # [[ N == Y ]]
00:05:13.554    16:49:06	-- setup/driver.sh@32 -- # return 1
00:05:13.554    16:49:06	-- setup/driver.sh@38 -- # uio
00:05:13.554    16:49:06	-- setup/driver.sh@17 -- # is_driver uio_pci_generic
00:05:13.554    16:49:06	-- setup/driver.sh@14 -- # mod uio_pci_generic
00:05:13.554     16:49:06	-- setup/driver.sh@12 -- # dep uio_pci_generic
00:05:13.554     16:49:06	-- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic
00:05:13.554    16:49:06	-- setup/driver.sh@12 -- # [[ insmod /lib/modules/5.15.0-101-generic/kernel/drivers/uio/uio.ko 
00:05:13.554  insmod /lib/modules/5.15.0-101-generic/kernel/drivers/uio/uio_pci_generic.ko  == *\.\k\o* ]]
00:05:13.554    16:49:06	-- setup/driver.sh@39 -- # echo uio_pci_generic
00:05:13.554   16:49:06	-- setup/driver.sh@49 -- # driver=uio_pci_generic
00:05:13.554   16:49:06	-- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]]
00:05:13.554  Looking for driver=uio_pci_generic
00:05:13.554   16:49:06	-- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic'
00:05:13.554   16:49:06	-- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
00:05:13.554    16:49:06	-- setup/driver.sh@45 -- # setup output config
00:05:13.554    16:49:06	-- setup/common.sh@9 -- # [[ output == output ]]
00:05:13.554    16:49:06	-- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config
00:05:14.184   16:49:06	-- setup/driver.sh@58 -- # [[ devices: == \-\> ]]
00:05:14.184   16:49:06	-- setup/driver.sh@58 -- # continue
00:05:14.184   16:49:06	-- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
00:05:14.184   16:49:06	-- setup/driver.sh@58 -- # [[ -> == \-\> ]]
00:05:14.184   16:49:06	-- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]]
00:05:14.184   16:49:06	-- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
00:05:16.091   16:49:08	-- setup/driver.sh@64 -- # (( fail == 0 ))
00:05:16.091   16:49:08	-- setup/driver.sh@65 -- # setup reset
00:05:16.091   16:49:08	-- setup/common.sh@9 -- # [[ reset == output ]]
00:05:16.091   16:49:08	-- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
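The guess traced above prefers vfio-pci when IOMMU groups are populated or unsafe no-IOMMU mode is enabled, and otherwise falls back to uio_pci_generic if modprobe can resolve real .ko objects for it. A sketch of that decision (the helper shape is an assumption; the sysfs paths and the modprobe probe are from the trace):

shopt -s nullglob   # so an empty iommu_groups directory counts as 0 entries

pick_driver() {
    local unsafe_vfio=N
    if [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]]; then
        unsafe_vfio=$(</sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
    fi
    local iommu_groups=(/sys/kernel/iommu_groups/*)
    if (( ${#iommu_groups[@]} > 0 )) || [[ $unsafe_vfio == Y ]]; then
        echo vfio-pci
        return 0
    fi
    # uio fallback: usable only if modprobe resolves actual modules.
    if modprobe --show-depends uio_pci_generic 2>/dev/null | grep -q '\.ko'; then
        echo uio_pci_generic
        return 0
    fi
    echo 'No valid driver found' >&2
    return 1
}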
00:05:16.657  
00:05:16.657  real	0m2.983s
00:05:16.657  user	0m0.488s
00:05:16.657  sys	0m2.468s
00:05:16.657   16:49:09	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:05:16.657   16:49:09	-- common/autotest_common.sh@10 -- # set +x
00:05:16.657  ************************************
00:05:16.657  END TEST guess_driver
00:05:16.657  ************************************
00:05:16.657  ************************************
00:05:16.657  END TEST driver
00:05:16.657  ************************************
00:05:16.657  
00:05:16.657  real	0m3.857s
00:05:16.657  user	0m0.908s
00:05:16.657  sys	0m2.955s
00:05:16.657   16:49:09	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:05:16.657   16:49:09	-- common/autotest_common.sh@10 -- # set +x
00:05:16.657   16:49:09	-- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh
00:05:16.657   16:49:09	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:05:16.657   16:49:09	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:05:16.657   16:49:09	-- common/autotest_common.sh@10 -- # set +x
00:05:16.657  ************************************
00:05:16.657  START TEST devices
00:05:16.657  ************************************
00:05:16.657   16:49:09	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh
00:05:16.657  * Looking for test storage...
00:05:16.657  * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup
[the lcov version gate (lt 1.15 2) and the LCOV_OPTS/LCOV exports repeat here verbatim, exactly as traced for the driver suite above]
00:05:16.658   16:49:09	-- setup/devices.sh@190 -- # trap cleanup EXIT
00:05:16.658   16:49:09	-- setup/devices.sh@192 -- # setup reset
00:05:16.658   16:49:09	-- setup/common.sh@9 -- # [[ reset == output ]]
00:05:16.658   16:49:09	-- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:05:17.223   16:49:10	-- setup/devices.sh@194 -- # get_zoned_devs
00:05:17.223   16:49:10	-- common/autotest_common.sh@1664 -- # zoned_devs=()
00:05:17.223   16:49:10	-- common/autotest_common.sh@1664 -- # local -gA zoned_devs
00:05:17.223   16:49:10	-- common/autotest_common.sh@1665 -- # local nvme bdf
00:05:17.223   16:49:10	-- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme*
00:05:17.223   16:49:10	-- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1
00:05:17.223   16:49:10	-- common/autotest_common.sh@1657 -- # local device=nvme0n1
00:05:17.223   16:49:10	-- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:05:17.223   16:49:10	-- common/autotest_common.sh@1660 -- # [[ none != none ]]
00:05:17.223   16:49:10	-- setup/devices.sh@196 -- # blocks=()
00:05:17.223   16:49:10	-- setup/devices.sh@196 -- # declare -a blocks
00:05:17.223   16:49:10	-- setup/devices.sh@197 -- # blocks_to_pci=()
00:05:17.223   16:49:10	-- setup/devices.sh@197 -- # declare -A blocks_to_pci
00:05:17.223   16:49:10	-- setup/devices.sh@198 -- # min_disk_size=3221225472
00:05:17.223   16:49:10	-- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*)
00:05:17.223   16:49:10	-- setup/devices.sh@201 -- # ctrl=nvme0n1
00:05:17.223   16:49:10	-- setup/devices.sh@201 -- # ctrl=nvme0
00:05:17.223   16:49:10	-- setup/devices.sh@202 -- # pci=0000:00:06.0
00:05:17.224   16:49:10	-- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]]
00:05:17.224   16:49:10	-- setup/devices.sh@204 -- # block_in_use nvme0n1
00:05:17.224   16:49:10	-- scripts/common.sh@380 -- # local block=nvme0n1 pt
00:05:17.224   16:49:10	-- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1
00:05:17.224  No valid GPT data, bailing
00:05:17.224    16:49:10	-- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:05:17.224   16:49:10	-- scripts/common.sh@393 -- # pt=
00:05:17.224   16:49:10	-- scripts/common.sh@394 -- # return 1
00:05:17.224    16:49:10	-- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1
00:05:17.224    16:49:10	-- setup/common.sh@76 -- # local dev=nvme0n1
00:05:17.224    16:49:10	-- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]]
00:05:17.224    16:49:10	-- setup/common.sh@80 -- # echo 5368709120
00:05:17.224   16:49:10	-- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size ))
00:05:17.224   16:49:10	-- setup/devices.sh@205 -- # blocks+=("${block##*/}")
00:05:17.224   16:49:10	-- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:06.0
00:05:17.224   16:49:10	-- setup/devices.sh@209 -- # (( 1 > 0 ))
00:05:17.224   16:49:10	-- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1
00:05:17.224   16:49:10	-- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount
00:05:17.224   16:49:10	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:05:17.224   16:49:10	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:05:17.224   16:49:10	-- common/autotest_common.sh@10 -- # set +x
00:05:17.482  ************************************
00:05:17.482  START TEST nvme_mount
00:05:17.482  ************************************
00:05:17.482   16:49:10	-- common/autotest_common.sh@1114 -- # nvme_mount
00:05:17.482   16:49:10	-- setup/devices.sh@95 -- # nvme_disk=nvme0n1
00:05:17.482   16:49:10	-- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1
00:05:17.482   16:49:10	-- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount
00:05:17.482   16:49:10	-- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme
00:05:17.482   16:49:10	-- setup/devices.sh@101 -- # partition_drive nvme0n1 1
00:05:17.482   16:49:10	-- setup/common.sh@39 -- # local disk=nvme0n1
00:05:17.482   16:49:10	-- setup/common.sh@40 -- # local part_no=1
00:05:17.482   16:49:10	-- setup/common.sh@41 -- # local size=1073741824
00:05:17.482   16:49:10	-- setup/common.sh@43 -- # local part part_start=0 part_end=0
00:05:17.482   16:49:10	-- setup/common.sh@44 -- # parts=()
00:05:17.482   16:49:10	-- setup/common.sh@44 -- # local parts
00:05:17.482   16:49:10	-- setup/common.sh@46 -- # (( part = 1 ))
00:05:17.482   16:49:10	-- setup/common.sh@46 -- # (( part <= part_no ))
00:05:17.482   16:49:10	-- setup/common.sh@47 -- # parts+=("${disk}p$part")
00:05:17.482   16:49:10	-- setup/common.sh@46 -- # (( part++ ))
00:05:17.482   16:49:10	-- setup/common.sh@46 -- # (( part <= part_no ))
00:05:17.482   16:49:10	-- setup/common.sh@51 -- # (( size /= 4096 ))
00:05:17.482   16:49:10	-- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all
00:05:17.482   16:49:10	-- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1
00:05:18.416  Creating new GPT entries in memory.
00:05:18.416  GPT data structures destroyed! You may now partition the disk using fdisk or
00:05:18.416  other utilities.
00:05:18.416   16:49:11	-- setup/common.sh@57 -- # (( part = 1 ))
00:05:18.416   16:49:11	-- setup/common.sh@57 -- # (( part <= part_no ))
00:05:18.416   16:49:11	-- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
00:05:18.416   16:49:11	-- setup/common.sh@59 -- # (( part_end = part_start + size - 1 ))
00:05:18.416   16:49:11	-- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191
00:05:19.352  Creating new GPT entries in memory.
00:05:19.352  The operation has completed successfully.
00:05:19.353   16:49:12	-- setup/common.sh@57 -- # (( part++ ))
00:05:19.353   16:49:12	-- setup/common.sh@57 -- # (( part <= part_no ))
00:05:19.353   16:49:12	-- setup/common.sh@62 -- # wait 108222
00:05:19.353   16:49:12	-- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount
00:05:19.353   16:49:12	-- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=
00:05:19.353   16:49:12	-- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount
00:05:19.353   16:49:12	-- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]]
00:05:19.353   16:49:12	-- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1
00:05:19.353   16:49:12	-- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount
00:05:19.353   16:49:12	-- setup/devices.sh@105 -- # verify 0000:00:06.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme
00:05:19.353   16:49:12	-- setup/devices.sh@48 -- # local dev=0000:00:06.0
00:05:19.353   16:49:12	-- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1
00:05:19.353   16:49:12	-- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount
00:05:19.353   16:49:12	-- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme
00:05:19.353   16:49:12	-- setup/devices.sh@53 -- # local found=0
00:05:19.353   16:49:12	-- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]]
00:05:19.353   16:49:12	-- setup/devices.sh@56 -- # :
00:05:19.353   16:49:12	-- setup/devices.sh@59 -- # local pci status
00:05:19.353   16:49:12	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:19.353    16:49:12	-- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0
00:05:19.353    16:49:12	-- setup/devices.sh@47 -- # setup output config
00:05:19.353    16:49:12	-- setup/common.sh@9 -- # [[ output == output ]]
00:05:19.353    16:49:12	-- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config
00:05:19.611   16:49:12	-- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]]
00:05:19.611   16:49:12	-- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]]
00:05:19.611   16:49:12	-- setup/devices.sh@63 -- # found=1
00:05:19.611   16:49:12	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:19.611   16:49:12	-- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]]
00:05:19.611   16:49:12	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:19.871   16:49:12	-- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]]
00:05:19.871   16:49:12	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:20.843   16:49:13	-- setup/devices.sh@66 -- # (( found == 1 ))
00:05:20.843   16:49:13	-- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]]
00:05:20.843   16:49:13	-- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount
00:05:20.843   16:49:13	-- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]]
00:05:20.843   16:49:13	-- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme
00:05:20.843   16:49:13	-- setup/devices.sh@110 -- # cleanup_nvme
00:05:20.843   16:49:13	-- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount
00:05:20.843   16:49:13	-- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount
00:05:20.843   16:49:13	-- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]]
00:05:20.843   16:49:13	-- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1
00:05:20.843  /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef
00:05:20.843   16:49:13	-- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]]
00:05:20.843   16:49:13	-- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1
00:05:20.843  /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54
00:05:20.843  /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54
00:05:20.843  /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa
00:05:20.843  /dev/nvme0n1: calling ioctl to re-read partition table: Success
00:05:20.843   16:49:13	-- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M
00:05:20.843   16:49:13	-- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M
00:05:20.843   16:49:13	-- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount
00:05:20.843   16:49:13	-- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]]
00:05:20.843   16:49:13	-- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M
00:05:20.843   16:49:13	-- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount
00:05:20.843   16:49:13	-- setup/devices.sh@116 -- # verify 0000:00:06.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme
00:05:20.843   16:49:13	-- setup/devices.sh@48 -- # local dev=0000:00:06.0
00:05:20.843   16:49:13	-- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1
00:05:20.843   16:49:13	-- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount
00:05:20.843   16:49:13	-- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme
00:05:20.843   16:49:13	-- setup/devices.sh@53 -- # local found=0
00:05:20.843   16:49:13	-- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]]
00:05:20.843   16:49:13	-- setup/devices.sh@56 -- # :
00:05:20.843   16:49:13	-- setup/devices.sh@59 -- # local pci status
00:05:20.843   16:49:13	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:20.843    16:49:13	-- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0
00:05:20.843    16:49:13	-- setup/devices.sh@47 -- # setup output config
00:05:20.843    16:49:13	-- setup/common.sh@9 -- # [[ output == output ]]
00:05:20.843    16:49:13	-- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config
00:05:21.108   16:49:13	-- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]]
00:05:21.108   16:49:13	-- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]]
00:05:21.108   16:49:13	-- setup/devices.sh@63 -- # found=1
00:05:21.108   16:49:13	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:21.108   16:49:13	-- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]]
00:05:21.108   16:49:13	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:21.108   16:49:13	-- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]]
00:05:21.108   16:49:13	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:22.046   16:49:14	-- setup/devices.sh@66 -- # (( found == 1 ))
00:05:22.046   16:49:14	-- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]]
00:05:22.046   16:49:14	-- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount
00:05:22.046   16:49:14	-- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]]
00:05:22.046   16:49:14	-- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme
00:05:22.046   16:49:14	-- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount
00:05:22.046   16:49:14	-- setup/devices.sh@125 -- # verify 0000:00:06.0 data@nvme0n1 '' ''
00:05:22.046   16:49:14	-- setup/devices.sh@48 -- # local dev=0000:00:06.0
00:05:22.046   16:49:14	-- setup/devices.sh@49 -- # local mounts=data@nvme0n1
00:05:22.046   16:49:14	-- setup/devices.sh@50 -- # local mount_point=
00:05:22.046   16:49:14	-- setup/devices.sh@51 -- # local test_file=
00:05:22.046   16:49:14	-- setup/devices.sh@53 -- # local found=0
00:05:22.046   16:49:14	-- setup/devices.sh@55 -- # [[ -n '' ]]
00:05:22.046   16:49:14	-- setup/devices.sh@59 -- # local pci status
00:05:22.046   16:49:14	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:22.046    16:49:14	-- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0
00:05:22.046    16:49:14	-- setup/devices.sh@47 -- # setup output config
00:05:22.046    16:49:14	-- setup/common.sh@9 -- # [[ output == output ]]
00:05:22.046    16:49:14	-- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config
00:05:22.614   16:49:15	-- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]]
00:05:22.614   16:49:15	-- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]]
00:05:22.614   16:49:15	-- setup/devices.sh@63 -- # found=1
00:05:22.614   16:49:15	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:22.614   16:49:15	-- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]]
00:05:22.614   16:49:15	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:22.614   16:49:15	-- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]]
00:05:22.614   16:49:15	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:23.551   16:49:16	-- setup/devices.sh@66 -- # (( found == 1 ))
00:05:23.551   16:49:16	-- setup/devices.sh@68 -- # [[ -n '' ]]
00:05:23.551   16:49:16	-- setup/devices.sh@68 -- # return 0
00:05:23.551   16:49:16	-- setup/devices.sh@128 -- # cleanup_nvme
00:05:23.551   16:49:16	-- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount
00:05:23.551   16:49:16	-- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]]
00:05:23.551   16:49:16	-- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]]
00:05:23.551   16:49:16	-- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1
00:05:23.551  /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef
00:05:23.551  
00:05:23.551  real	0m6.194s
00:05:23.551  user	0m0.738s
00:05:23.551  sys	0m3.503s
00:05:23.551   16:49:16	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:05:23.551   16:49:16	-- common/autotest_common.sh@10 -- # set +x
00:05:23.551  ************************************
00:05:23.551  END TEST nvme_mount
00:05:23.551  ************************************
00:05:23.551   16:49:16	-- setup/devices.sh@214 -- # run_test dm_mount dm_mount
00:05:23.551   16:49:16	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:05:23.551   16:49:16	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:05:23.551   16:49:16	-- common/autotest_common.sh@10 -- # set +x
00:05:23.551  ************************************
00:05:23.551  START TEST dm_mount
00:05:23.551  ************************************
00:05:23.551   16:49:16	-- common/autotest_common.sh@1114 -- # dm_mount
00:05:23.551   16:49:16	-- setup/devices.sh@144 -- # pv=nvme0n1
00:05:23.551   16:49:16	-- setup/devices.sh@145 -- # pv0=nvme0n1p1
00:05:23.551   16:49:16	-- setup/devices.sh@146 -- # pv1=nvme0n1p2
00:05:23.551   16:49:16	-- setup/devices.sh@148 -- # partition_drive nvme0n1
00:05:23.551   16:49:16	-- setup/common.sh@39 -- # local disk=nvme0n1
00:05:23.551   16:49:16	-- setup/common.sh@40 -- # local part_no=2
00:05:23.551   16:49:16	-- setup/common.sh@41 -- # local size=1073741824
00:05:23.551   16:49:16	-- setup/common.sh@43 -- # local part part_start=0 part_end=0
00:05:23.551   16:49:16	-- setup/common.sh@44 -- # parts=()
00:05:23.551   16:49:16	-- setup/common.sh@44 -- # local parts
00:05:23.551   16:49:16	-- setup/common.sh@46 -- # (( part = 1 ))
00:05:23.551   16:49:16	-- setup/common.sh@46 -- # (( part <= part_no ))
00:05:23.551   16:49:16	-- setup/common.sh@47 -- # parts+=("${disk}p$part")
00:05:23.551   16:49:16	-- setup/common.sh@46 -- # (( part++ ))
00:05:23.551   16:49:16	-- setup/common.sh@46 -- # (( part <= part_no ))
00:05:23.551   16:49:16	-- setup/common.sh@47 -- # parts+=("${disk}p$part")
00:05:23.551   16:49:16	-- setup/common.sh@46 -- # (( part++ ))
00:05:23.551   16:49:16	-- setup/common.sh@46 -- # (( part <= part_no ))
00:05:23.551   16:49:16	-- setup/common.sh@51 -- # (( size /= 4096 ))
00:05:23.551   16:49:16	-- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all
00:05:23.551   16:49:16	-- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2
00:05:24.930  Creating new GPT entries in memory.
00:05:24.930  GPT data structures destroyed! You may now partition the disk using fdisk or
00:05:24.930  other utilities.
00:05:24.930   16:49:17	-- setup/common.sh@57 -- # (( part = 1 ))
00:05:24.930   16:49:17	-- setup/common.sh@57 -- # (( part <= part_no ))
00:05:24.930   16:49:17	-- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
00:05:24.930   16:49:17	-- setup/common.sh@59 -- # (( part_end = part_start + size - 1 ))
00:05:24.930   16:49:17	-- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191
00:05:25.866  Creating new GPT entries in memory.
00:05:25.866  The operation has completed successfully.
00:05:25.866   16:49:18	-- setup/common.sh@57 -- # (( part++ ))
00:05:25.866   16:49:18	-- setup/common.sh@57 -- # (( part <= part_no ))
00:05:25.866   16:49:18	-- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
00:05:25.866   16:49:18	-- setup/common.sh@59 -- # (( part_end = part_start + size - 1 ))
00:05:25.866   16:49:18	-- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335
00:05:26.802  The operation has completed successfully.
00:05:26.802   16:49:19	-- setup/common.sh@57 -- # (( part++ ))
00:05:26.802   16:49:19	-- setup/common.sh@57 -- # (( part <= part_no ))
00:05:26.802   16:49:19	-- setup/common.sh@62 -- # wait 108704
00:05:26.802   16:49:19	-- setup/devices.sh@150 -- # dm_name=nvme_dm_test
00:05:26.802   16:49:19	-- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount
00:05:26.802   16:49:19	-- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm
00:05:26.802   16:49:19	-- setup/devices.sh@155 -- # dmsetup create nvme_dm_test
00:05:26.802   16:49:19	-- setup/devices.sh@160 -- # for t in {1..5}
00:05:26.802   16:49:19	-- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]]
00:05:26.802   16:49:19	-- setup/devices.sh@161 -- # break
00:05:26.802   16:49:19	-- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]]
00:05:26.802    16:49:19	-- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test
00:05:26.802   16:49:19	-- setup/devices.sh@165 -- # dm=/dev/dm-0
00:05:26.802   16:49:19	-- setup/devices.sh@166 -- # dm=dm-0
00:05:26.802   16:49:19	-- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]]
00:05:26.802   16:49:19	-- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]]
00:05:26.803   16:49:19	-- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount
00:05:26.803   16:49:19	-- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size=
00:05:26.803   16:49:19	-- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount
00:05:26.803   16:49:19	-- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]]
00:05:26.803   16:49:19	-- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test
00:05:26.803   16:49:19	-- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount
00:05:26.803   16:49:19	-- setup/devices.sh@174 -- # verify 0000:00:06.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm
00:05:26.803   16:49:19	-- setup/devices.sh@48 -- # local dev=0000:00:06.0
00:05:26.803   16:49:19	-- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test
00:05:26.803   16:49:19	-- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount
00:05:26.803   16:49:19	-- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm
00:05:26.803   16:49:19	-- setup/devices.sh@53 -- # local found=0
00:05:26.803   16:49:19	-- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]]
00:05:26.803   16:49:19	-- setup/devices.sh@56 -- # :
00:05:26.803   16:49:19	-- setup/devices.sh@59 -- # local pci status
00:05:26.803   16:49:19	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:26.803    16:49:19	-- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0
00:05:26.803    16:49:19	-- setup/devices.sh@47 -- # setup output config
00:05:26.803    16:49:19	-- setup/common.sh@9 -- # [[ output == output ]]
00:05:26.803    16:49:19	-- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config
00:05:27.062   16:49:19	-- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]]
00:05:27.062   16:49:19	-- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]]
00:05:27.062   16:49:19	-- setup/devices.sh@63 -- # found=1
00:05:27.062   16:49:19	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:27.062   16:49:19	-- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]]
00:05:27.062   16:49:19	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:27.322   16:49:20	-- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]]
00:05:27.322   16:49:20	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:29.298   16:49:21	-- setup/devices.sh@66 -- # (( found == 1 ))
00:05:29.298   16:49:21	-- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]]
00:05:29.298   16:49:21	-- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount
00:05:29.298   16:49:21	-- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]]
00:05:29.298   16:49:21	-- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm
00:05:29.298   16:49:21	-- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount
00:05:29.298   16:49:21	-- setup/devices.sh@184 -- # verify 0000:00:06.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' ''
00:05:29.298   16:49:21	-- setup/devices.sh@48 -- # local dev=0000:00:06.0
00:05:29.298   16:49:21	-- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0
00:05:29.298   16:49:21	-- setup/devices.sh@50 -- # local mount_point=
00:05:29.298   16:49:21	-- setup/devices.sh@51 -- # local test_file=
00:05:29.298   16:49:21	-- setup/devices.sh@53 -- # local found=0
00:05:29.298   16:49:21	-- setup/devices.sh@55 -- # [[ -n '' ]]
00:05:29.298   16:49:21	-- setup/devices.sh@59 -- # local pci status
00:05:29.298   16:49:21	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:29.298    16:49:21	-- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0
00:05:29.298    16:49:21	-- setup/devices.sh@47 -- # setup output config
00:05:29.298    16:49:21	-- setup/common.sh@9 -- # [[ output == output ]]
00:05:29.298    16:49:21	-- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config
00:05:29.560   16:49:22	-- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]]
00:05:29.560   16:49:22	-- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]]
00:05:29.560   16:49:22	-- setup/devices.sh@63 -- # found=1
00:05:29.560   16:49:22	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:29.560   16:49:22	-- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]]
00:05:29.560   16:49:22	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:29.560   16:49:22	-- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]]
00:05:29.560   16:49:22	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:30.940   16:49:23	-- setup/devices.sh@66 -- # (( found == 1 ))
00:05:30.940   16:49:23	-- setup/devices.sh@68 -- # [[ -n '' ]]
00:05:30.940   16:49:23	-- setup/devices.sh@68 -- # return 0
00:05:30.940   16:49:23	-- setup/devices.sh@187 -- # cleanup_dm
00:05:30.940   16:49:23	-- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount
00:05:30.940   16:49:23	-- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]]
00:05:30.940   16:49:23	-- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test
00:05:30.940   16:49:23	-- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]]
00:05:30.940   16:49:23	-- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1
00:05:30.940  /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef
00:05:30.940   16:49:23	-- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]]
00:05:30.940   16:49:23	-- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2
00:05:30.940  
00:05:30.940  real	0m7.170s
00:05:30.940  user	0m0.518s
00:05:30.940  sys	0m3.558s
00:05:30.940   16:49:23	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:05:30.940  ************************************
00:05:30.940  END TEST dm_mount
00:05:30.940  ************************************
00:05:30.940   16:49:23	-- common/autotest_common.sh@10 -- # set +x
00:05:30.940   16:49:23	-- setup/devices.sh@1 -- # cleanup
00:05:30.940   16:49:23	-- setup/devices.sh@11 -- # cleanup_nvme
00:05:30.940   16:49:23	-- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount
00:05:30.940   16:49:23	-- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]]
00:05:30.940   16:49:23	-- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1
00:05:30.940   16:49:23	-- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]]
00:05:30.940   16:49:23	-- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1
00:05:30.940  /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54
00:05:30.940  /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54
00:05:30.940  /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa
00:05:30.940  /dev/nvme0n1: calling ioctl to re-read partition table: Success
00:05:30.940   16:49:23	-- setup/devices.sh@12 -- # cleanup_dm
00:05:30.940   16:49:23	-- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount
00:05:30.940   16:49:23	-- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]]
00:05:30.940   16:49:23	-- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]]
00:05:30.940   16:49:23	-- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]]
00:05:30.940   16:49:23	-- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]]
00:05:30.940   16:49:23	-- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1
00:05:30.940  
00:05:30.940  real	0m14.319s
00:05:30.940  user	0m1.690s
00:05:30.940  sys	0m7.591s
00:05:30.940   16:49:23	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:05:30.940  ************************************
00:05:30.940   16:49:23	-- common/autotest_common.sh@10 -- # set +x
00:05:30.940  END TEST devices
00:05:30.940  ************************************
00:05:30.940  
00:05:30.940  real	0m32.936s
00:05:30.940  user	0m7.081s
00:05:30.940  sys	0m21.272s
00:05:30.940   16:49:23	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:05:30.940   16:49:23	-- common/autotest_common.sh@10 -- # set +x
00:05:30.940  ************************************
00:05:30.940  END TEST setup.sh
00:05:30.940  ************************************
00:05:30.940   16:49:23	-- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:05:31.200  Hugepages
00:05:31.200  node     hugesize     free /  total
00:05:31.200  node0   1048576kB        0 /      0
00:05:31.200  node0      2048kB     2048 /   2048
00:05:31.200  
00:05:31.200  Type                      BDF             Vendor Device NUMA    Driver           Device     Block devices
00:05:31.200  virtio                    0000:00:03.0    1af4   1001   unknown virtio-pci       -          vda
00:05:31.460  NVMe                      0000:00:06.0    1b36   0010   unknown nvme             nvme0      nvme0n1
00:05:31.460    16:49:24	-- spdk/autotest.sh@128 -- # uname -s
00:05:31.460   16:49:24	-- spdk/autotest.sh@128 -- # [[ Linux == Linux ]]
00:05:31.460   16:49:24	-- spdk/autotest.sh@130 -- # nvme_namespace_revert
00:05:31.460   16:49:24	-- common/autotest_common.sh@1526 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:05:32.029  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev
00:05:32.029  0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic
00:05:33.409   16:49:25	-- common/autotest_common.sh@1527 -- # sleep 1
00:05:34.348   16:49:26	-- common/autotest_common.sh@1528 -- # bdfs=()
00:05:34.348   16:49:26	-- common/autotest_common.sh@1528 -- # local bdfs
00:05:34.348   16:49:26	-- common/autotest_common.sh@1529 -- # bdfs=($(get_nvme_bdfs))
00:05:34.348    16:49:26	-- common/autotest_common.sh@1529 -- # get_nvme_bdfs
00:05:34.348    16:49:26	-- common/autotest_common.sh@1508 -- # bdfs=()
00:05:34.348    16:49:26	-- common/autotest_common.sh@1508 -- # local bdfs
00:05:34.348    16:49:26	-- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:05:34.348     16:49:26	-- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr'
00:05:34.348     16:49:26	-- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
00:05:34.348    16:49:26	-- common/autotest_common.sh@1510 -- # (( 1 == 0 ))
00:05:34.348    16:49:26	-- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0
00:05:34.348   16:49:26	-- common/autotest_common.sh@1531 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:05:34.608  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev
00:05:34.608  Waiting for block devices as requested
00:05:34.868  0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme
00:05:34.868   16:49:27	-- common/autotest_common.sh@1533 -- # for bdf in "${bdfs[@]}"
00:05:34.868    16:49:27	-- common/autotest_common.sh@1534 -- # get_nvme_ctrlr_from_bdf 0000:00:06.0
00:05:34.868     16:49:27	-- common/autotest_common.sh@1497 -- # readlink -f /sys/class/nvme/nvme0
00:05:34.868     16:49:27	-- common/autotest_common.sh@1497 -- # grep 0000:00:06.0/nvme/nvme
00:05:34.868    16:49:27	-- common/autotest_common.sh@1497 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0
00:05:34.868    16:49:27	-- common/autotest_common.sh@1498 -- # [[ -z /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 ]]
00:05:34.868     16:49:27	-- common/autotest_common.sh@1502 -- # basename /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0
00:05:34.868    16:49:27	-- common/autotest_common.sh@1502 -- # printf '%s\n' nvme0
00:05:34.868   16:49:27	-- common/autotest_common.sh@1534 -- # nvme_ctrlr=/dev/nvme0
00:05:34.868   16:49:27	-- common/autotest_common.sh@1535 -- # [[ -z /dev/nvme0 ]]
00:05:34.868    16:49:27	-- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0
00:05:34.868    16:49:27	-- common/autotest_common.sh@1540 -- # grep oacs
00:05:34.868    16:49:27	-- common/autotest_common.sh@1540 -- # cut -d: -f2
00:05:34.868   16:49:27	-- common/autotest_common.sh@1540 -- # oacs=' 0x12a'
00:05:34.868   16:49:27	-- common/autotest_common.sh@1541 -- # oacs_ns_manage=8
00:05:34.868   16:49:27	-- common/autotest_common.sh@1543 -- # [[ 8 -ne 0 ]]
00:05:34.868    16:49:27	-- common/autotest_common.sh@1549 -- # nvme id-ctrl /dev/nvme0
00:05:34.868    16:49:27	-- common/autotest_common.sh@1549 -- # grep unvmcap
00:05:34.868    16:49:27	-- common/autotest_common.sh@1549 -- # cut -d: -f2
00:05:34.868   16:49:27	-- common/autotest_common.sh@1549 -- # unvmcap=' 0'
00:05:34.868   16:49:27	-- common/autotest_common.sh@1550 -- # [[  0 -eq 0 ]]
00:05:34.868   16:49:27	-- common/autotest_common.sh@1552 -- # continue
00:05:34.868   16:49:27	-- spdk/autotest.sh@133 -- # timing_exit pre_cleanup
00:05:34.868   16:49:27	-- common/autotest_common.sh@728 -- # xtrace_disable
00:05:34.868   16:49:27	-- common/autotest_common.sh@10 -- # set +x
00:05:34.868   16:49:27	-- spdk/autotest.sh@136 -- # timing_enter afterboot
00:05:34.868   16:49:27	-- common/autotest_common.sh@722 -- # xtrace_disable
00:05:34.868   16:49:27	-- common/autotest_common.sh@10 -- # set +x
00:05:34.868   16:49:27	-- spdk/autotest.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:05:35.437  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev
00:05:35.696  0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic
00:05:36.636   16:49:29	-- spdk/autotest.sh@138 -- # timing_exit afterboot
00:05:36.636   16:49:29	-- common/autotest_common.sh@728 -- # xtrace_disable
00:05:36.636   16:49:29	-- common/autotest_common.sh@10 -- # set +x
00:05:36.636   16:49:29	-- spdk/autotest.sh@142 -- # opal_revert_cleanup
00:05:36.636   16:49:29	-- common/autotest_common.sh@1586 -- # mapfile -t bdfs
00:05:36.636    16:49:29	-- common/autotest_common.sh@1586 -- # get_nvme_bdfs_by_id 0x0a54
00:05:36.636    16:49:29	-- common/autotest_common.sh@1572 -- # bdfs=()
00:05:36.636    16:49:29	-- common/autotest_common.sh@1572 -- # local bdfs
00:05:36.636     16:49:29	-- common/autotest_common.sh@1574 -- # get_nvme_bdfs
00:05:36.636     16:49:29	-- common/autotest_common.sh@1508 -- # bdfs=()
00:05:36.636     16:49:29	-- common/autotest_common.sh@1508 -- # local bdfs
00:05:36.636     16:49:29	-- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:05:36.636      16:49:29	-- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
00:05:36.636      16:49:29	-- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr'
00:05:36.636     16:49:29	-- common/autotest_common.sh@1510 -- # (( 1 == 0 ))
00:05:36.636     16:49:29	-- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0
00:05:36.636    16:49:29	-- common/autotest_common.sh@1574 -- # for bdf in $(get_nvme_bdfs)
00:05:36.636     16:49:29	-- common/autotest_common.sh@1575 -- # cat /sys/bus/pci/devices/0000:00:06.0/device
00:05:36.636    16:49:29	-- common/autotest_common.sh@1575 -- # device=0x0010
00:05:36.636    16:49:29	-- common/autotest_common.sh@1576 -- # [[ 0x0010 == \0\x\0\a\5\4 ]]
00:05:36.636    16:49:29	-- common/autotest_common.sh@1581 -- # printf '%s\n'
00:05:36.636   16:49:29	-- common/autotest_common.sh@1587 -- # [[ -z '' ]]
00:05:36.636   16:49:29	-- common/autotest_common.sh@1588 -- # return 0
00:05:36.636   16:49:29	-- spdk/autotest.sh@148 -- # '[' 1 -eq 1 ']'
00:05:36.636   16:49:29	-- spdk/autotest.sh@149 -- # run_test unittest /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh
00:05:36.636   16:49:29	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:05:36.636   16:49:29	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:05:36.636   16:49:29	-- common/autotest_common.sh@10 -- # set +x
00:05:36.636  ************************************
00:05:36.636  START TEST unittest
00:05:36.636  ************************************
00:05:36.636   16:49:29	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh
00:05:36.636  +++ dirname /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh
00:05:36.636  ++ readlink -f /home/vagrant/spdk_repo/spdk/test/unit
00:05:36.636  + testdir=/home/vagrant/spdk_repo/spdk/test/unit
00:05:36.636  +++ dirname /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh
00:05:36.636  ++ readlink -f /home/vagrant/spdk_repo/spdk/test/unit/../..
00:05:36.636  + rootdir=/home/vagrant/spdk_repo/spdk
00:05:36.636  + source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh
00:05:36.636  ++ rpc_py=rpc_cmd
00:05:36.636  ++ set -e
00:05:36.636  ++ shopt -s nullglob
00:05:36.636  ++ shopt -s extglob
00:05:36.636  ++ [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]]
00:05:36.636  ++ source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh
00:05:36.636  +++ CONFIG_WPDK_DIR=
00:05:36.636  +++ CONFIG_ASAN=y
00:05:36.636  +++ CONFIG_VBDEV_COMPRESS=n
00:05:36.636  +++ CONFIG_HAVE_EXECINFO_H=y
00:05:36.636  +++ CONFIG_USDT=n
00:05:36.636  +++ CONFIG_CUSTOMOCF=n
00:05:36.636  +++ CONFIG_PREFIX=/usr/local
00:05:36.636  +++ CONFIG_RBD=n
00:05:36.636  +++ CONFIG_LIBDIR=
00:05:36.636  +++ CONFIG_IDXD=y
00:05:36.636  +++ CONFIG_NVME_CUSE=y
00:05:36.636  +++ CONFIG_SMA=n
00:05:36.636  +++ CONFIG_VTUNE=n
00:05:36.636  +++ CONFIG_TSAN=n
00:05:36.636  +++ CONFIG_RDMA_SEND_WITH_INVAL=y
00:05:36.636  +++ CONFIG_VFIO_USER_DIR=
00:05:36.636  +++ CONFIG_PGO_CAPTURE=n
00:05:36.636  +++ CONFIG_HAVE_UUID_GENERATE_SHA1=y
00:05:36.636  +++ CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:05:36.636  +++ CONFIG_LTO=n
00:05:36.636  +++ CONFIG_ISCSI_INITIATOR=y
00:05:36.636  +++ CONFIG_CET=n
00:05:36.636  +++ CONFIG_VBDEV_COMPRESS_MLX5=n
00:05:36.636  +++ CONFIG_OCF_PATH=
00:05:36.636  +++ CONFIG_RDMA_SET_TOS=y
00:05:36.636  +++ CONFIG_HAVE_ARC4RANDOM=n
00:05:36.636  +++ CONFIG_HAVE_LIBARCHIVE=n
00:05:36.636  +++ CONFIG_UBLK=n
00:05:36.636  +++ CONFIG_ISAL_CRYPTO=y
00:05:36.636  +++ CONFIG_OPENSSL_PATH=
00:05:36.636  +++ CONFIG_OCF=n
00:05:36.636  +++ CONFIG_FUSE=n
00:05:36.636  +++ CONFIG_VTUNE_DIR=
00:05:36.636  +++ CONFIG_FUZZER_LIB=
00:05:36.636  +++ CONFIG_FUZZER=n
00:05:36.636  +++ CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/dpdk/build
00:05:36.636  +++ CONFIG_CRYPTO=n
00:05:36.636  +++ CONFIG_PGO_USE=n
00:05:36.636  +++ CONFIG_VHOST=y
00:05:36.636  +++ CONFIG_DAOS=n
00:05:36.636  +++ CONFIG_DPDK_INC_DIR=//home/vagrant/spdk_repo/dpdk/build/include
00:05:36.636  +++ CONFIG_DAOS_DIR=
00:05:36.636  +++ CONFIG_UNIT_TESTS=y
00:05:36.636  +++ CONFIG_RDMA_SET_ACK_TIMEOUT=y
00:05:36.636  +++ CONFIG_VIRTIO=y
00:05:36.636  +++ CONFIG_COVERAGE=y
00:05:36.636  +++ CONFIG_RDMA=y
00:05:36.636  +++ CONFIG_FIO_SOURCE_DIR=/usr/src/fio
00:05:36.636  +++ CONFIG_URING_PATH=
00:05:36.636  +++ CONFIG_XNVME=n
00:05:36.636  +++ CONFIG_VFIO_USER=n
00:05:36.636  +++ CONFIG_ARCH=native
00:05:36.636  +++ CONFIG_URING_ZNS=n
00:05:36.636  +++ CONFIG_WERROR=y
00:05:36.636  +++ CONFIG_HAVE_LIBBSD=n
00:05:36.636  +++ CONFIG_UBSAN=y
00:05:36.636  +++ CONFIG_IPSEC_MB_DIR=
00:05:36.636  +++ CONFIG_GOLANG=n
00:05:36.636  +++ CONFIG_ISAL=y
00:05:36.636  +++ CONFIG_IDXD_KERNEL=n
00:05:36.636  +++ CONFIG_DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib
00:05:36.636  +++ CONFIG_RDMA_PROV=verbs
00:05:36.636  +++ CONFIG_APPS=y
00:05:36.636  +++ CONFIG_SHARED=n
00:05:36.636  +++ CONFIG_FC_PATH=
00:05:36.636  +++ CONFIG_DPDK_PKG_CONFIG=n
00:05:36.636  +++ CONFIG_FC=n
00:05:36.636  +++ CONFIG_AVAHI=n
00:05:36.636  +++ CONFIG_FIO_PLUGIN=y
00:05:36.636  +++ CONFIG_RAID5F=y
00:05:36.636  +++ CONFIG_EXAMPLES=y
00:05:36.636  +++ CONFIG_TESTS=y
00:05:36.636  +++ CONFIG_CRYPTO_MLX5=n
00:05:36.636  +++ CONFIG_MAX_LCORES=
00:05:36.636  +++ CONFIG_IPSEC_MB=n
00:05:36.636  +++ CONFIG_DEBUG=y
00:05:36.636  +++ CONFIG_DPDK_COMPRESSDEV=n
00:05:36.636  +++ CONFIG_CROSS_PREFIX=
00:05:36.636  +++ CONFIG_URING=n
00:05:36.636  ++ source /home/vagrant/spdk_repo/spdk/test/common/applications.sh
00:05:36.636  +++++ dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh
00:05:36.636  ++++ readlink -f /home/vagrant/spdk_repo/spdk/test/common
00:05:36.636  +++ _root=/home/vagrant/spdk_repo/spdk/test/common
00:05:36.636  +++ _root=/home/vagrant/spdk_repo/spdk
00:05:36.636  +++ _app_dir=/home/vagrant/spdk_repo/spdk/build/bin
00:05:36.636  +++ _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app
00:05:36.636  +++ _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples
00:05:36.636  +++ VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz")
00:05:36.636  +++ ISCSI_APP=("$_app_dir/iscsi_tgt")
00:05:36.636  +++ NVMF_APP=("$_app_dir/nvmf_tgt")
00:05:36.636  +++ VHOST_APP=("$_app_dir/vhost")
00:05:36.636  +++ DD_APP=("$_app_dir/spdk_dd")
00:05:36.636  +++ SPDK_APP=("$_app_dir/spdk_tgt")
00:05:36.636  +++ [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]]
00:05:36.636  +++ [[ #ifndef SPDK_CONFIG_H
00:05:36.636  #define SPDK_CONFIG_H
00:05:36.636  #define SPDK_CONFIG_APPS 1
00:05:36.636  #define SPDK_CONFIG_ARCH native
00:05:36.636  #define SPDK_CONFIG_ASAN 1
00:05:36.636  #undef SPDK_CONFIG_AVAHI
00:05:36.636  #undef SPDK_CONFIG_CET
00:05:36.636  #define SPDK_CONFIG_COVERAGE 1
00:05:36.636  #define SPDK_CONFIG_CROSS_PREFIX 
00:05:36.636  #undef SPDK_CONFIG_CRYPTO
00:05:36.636  #undef SPDK_CONFIG_CRYPTO_MLX5
00:05:36.636  #undef SPDK_CONFIG_CUSTOMOCF
00:05:36.636  #undef SPDK_CONFIG_DAOS
00:05:36.636  #define SPDK_CONFIG_DAOS_DIR 
00:05:36.636  #define SPDK_CONFIG_DEBUG 1
00:05:36.636  #undef SPDK_CONFIG_DPDK_COMPRESSDEV
00:05:36.636  #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/dpdk/build
00:05:36.636  #define SPDK_CONFIG_DPDK_INC_DIR //home/vagrant/spdk_repo/dpdk/build/include
00:05:36.636  #define SPDK_CONFIG_DPDK_LIB_DIR /home/vagrant/spdk_repo/dpdk/build/lib
00:05:36.636  #undef SPDK_CONFIG_DPDK_PKG_CONFIG
00:05:36.636  #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:05:36.636  #define SPDK_CONFIG_EXAMPLES 1
00:05:36.636  #undef SPDK_CONFIG_FC
00:05:36.636  #define SPDK_CONFIG_FC_PATH 
00:05:36.636  #define SPDK_CONFIG_FIO_PLUGIN 1
00:05:36.636  #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio
00:05:36.636  #undef SPDK_CONFIG_FUSE
00:05:36.636  #undef SPDK_CONFIG_FUZZER
00:05:36.636  #define SPDK_CONFIG_FUZZER_LIB 
00:05:36.636  #undef SPDK_CONFIG_GOLANG
00:05:36.636  #undef SPDK_CONFIG_HAVE_ARC4RANDOM
00:05:36.636  #define SPDK_CONFIG_HAVE_EXECINFO_H 1
00:05:36.636  #undef SPDK_CONFIG_HAVE_LIBARCHIVE
00:05:36.636  #undef SPDK_CONFIG_HAVE_LIBBSD
00:05:36.636  #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1
00:05:36.636  #define SPDK_CONFIG_IDXD 1
00:05:36.636  #undef SPDK_CONFIG_IDXD_KERNEL
00:05:36.636  #undef SPDK_CONFIG_IPSEC_MB
00:05:36.636  #define SPDK_CONFIG_IPSEC_MB_DIR 
00:05:36.636  #define SPDK_CONFIG_ISAL 1
00:05:36.636  #define SPDK_CONFIG_ISAL_CRYPTO 1
00:05:36.636  #define SPDK_CONFIG_ISCSI_INITIATOR 1
00:05:36.636  #define SPDK_CONFIG_LIBDIR 
00:05:36.636  #undef SPDK_CONFIG_LTO
00:05:36.636  #define SPDK_CONFIG_MAX_LCORES 
00:05:36.636  #define SPDK_CONFIG_NVME_CUSE 1
00:05:36.636  #undef SPDK_CONFIG_OCF
00:05:36.636  #define SPDK_CONFIG_OCF_PATH 
00:05:36.636  #define SPDK_CONFIG_OPENSSL_PATH 
00:05:36.636  #undef SPDK_CONFIG_PGO_CAPTURE
00:05:36.636  #undef SPDK_CONFIG_PGO_USE
00:05:36.636  #define SPDK_CONFIG_PREFIX /usr/local
00:05:36.636  #define SPDK_CONFIG_RAID5F 1
00:05:36.636  #undef SPDK_CONFIG_RBD
00:05:36.636  #define SPDK_CONFIG_RDMA 1
00:05:36.636  #define SPDK_CONFIG_RDMA_PROV verbs
00:05:36.636  #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1
00:05:36.636  #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1
00:05:36.636  #define SPDK_CONFIG_RDMA_SET_TOS 1
00:05:36.636  #undef SPDK_CONFIG_SHARED
00:05:36.636  #undef SPDK_CONFIG_SMA
00:05:36.636  #define SPDK_CONFIG_TESTS 1
00:05:36.636  #undef SPDK_CONFIG_TSAN
00:05:36.636  #undef SPDK_CONFIG_UBLK
00:05:36.636  #define SPDK_CONFIG_UBSAN 1
00:05:36.636  #define SPDK_CONFIG_UNIT_TESTS 1
00:05:36.636  #undef SPDK_CONFIG_URING
00:05:36.636  #define SPDK_CONFIG_URING_PATH 
00:05:36.636  #undef SPDK_CONFIG_URING_ZNS
00:05:36.636  #undef SPDK_CONFIG_USDT
00:05:36.636  #undef SPDK_CONFIG_VBDEV_COMPRESS
00:05:36.636  #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5
00:05:36.636  #undef SPDK_CONFIG_VFIO_USER
00:05:36.636  #define SPDK_CONFIG_VFIO_USER_DIR 
00:05:36.636  #define SPDK_CONFIG_VHOST 1
00:05:36.636  #define SPDK_CONFIG_VIRTIO 1
00:05:36.636  #undef SPDK_CONFIG_VTUNE
00:05:36.636  #define SPDK_CONFIG_VTUNE_DIR 
00:05:36.636  #define SPDK_CONFIG_WERROR 1
00:05:36.636  #define SPDK_CONFIG_WPDK_DIR 
00:05:36.636  #undef SPDK_CONFIG_XNVME
00:05:36.636  #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]]
00:05:36.636  +++ (( SPDK_AUTOTEST_DEBUG_APPS ))
00:05:36.636  ++ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:05:36.636  +++ [[ -e /bin/wpdk_common.sh ]]
00:05:36.636  +++ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:05:36.636  +++ source /etc/opt/spdk-pkgdep/paths/export.sh
00:05:36.636  ++++ PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:05:36.636  ++++ PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:05:36.636  ++++ PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:05:36.636  ++++ export PATH
00:05:36.636  ++++ echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:05:36.636  ++ source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common
00:05:36.636  +++++ dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common
00:05:36.636  ++++ readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm
00:05:36.636  +++ _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm
00:05:36.636  ++++ readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../
00:05:36.636  +++ _pmrootdir=/home/vagrant/spdk_repo/spdk
00:05:36.636  +++ TEST_TAG=N/A
00:05:36.636  +++ TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name
00:05:36.636  ++ : 1
00:05:36.636  ++ export RUN_NIGHTLY
00:05:36.636  ++ : 0
00:05:36.636  ++ export SPDK_AUTOTEST_DEBUG_APPS
00:05:36.636  ++ : 0
00:05:36.636  ++ export SPDK_RUN_VALGRIND
00:05:36.636  ++ : 1
00:05:36.636  ++ export SPDK_RUN_FUNCTIONAL_TEST
00:05:36.636  ++ : 1
00:05:36.636  ++ export SPDK_TEST_UNITTEST
00:05:36.636  ++ :
00:05:36.636  ++ export SPDK_TEST_AUTOBUILD
00:05:36.636  ++ : 0
00:05:36.636  ++ export SPDK_TEST_RELEASE_BUILD
00:05:36.636  ++ : 0
00:05:36.636  ++ export SPDK_TEST_ISAL
00:05:36.636  ++ : 0
00:05:36.636  ++ export SPDK_TEST_ISCSI
00:05:36.636  ++ : 0
00:05:36.636  ++ export SPDK_TEST_ISCSI_INITIATOR
00:05:36.636  ++ : 1
00:05:36.636  ++ export SPDK_TEST_NVME
00:05:36.636  ++ : 0
00:05:36.636  ++ export SPDK_TEST_NVME_PMR
00:05:36.636  ++ : 0
00:05:36.636  ++ export SPDK_TEST_NVME_BP
00:05:36.636  ++ : 0
00:05:36.636  ++ export SPDK_TEST_NVME_CLI
00:05:36.636  ++ : 0
00:05:36.637  ++ export SPDK_TEST_NVME_CUSE
00:05:36.637  ++ : 0
00:05:36.637  ++ export SPDK_TEST_NVME_FDP
00:05:36.637  ++ : 0
00:05:36.637  ++ export SPDK_TEST_NVMF
00:05:36.637  ++ : 0
00:05:36.637  ++ export SPDK_TEST_VFIOUSER
00:05:36.637  ++ : 0
00:05:36.637  ++ export SPDK_TEST_VFIOUSER_QEMU
00:05:36.637  ++ : 0
00:05:36.637  ++ export SPDK_TEST_FUZZER
00:05:36.637  ++ : 0
00:05:36.637  ++ export SPDK_TEST_FUZZER_SHORT
00:05:36.637  ++ : rdma
00:05:36.637  ++ export SPDK_TEST_NVMF_TRANSPORT
00:05:36.637  ++ : 0
00:05:36.637  ++ export SPDK_TEST_RBD
00:05:36.637  ++ : 0
00:05:36.637  ++ export SPDK_TEST_VHOST
00:05:36.637  ++ : 1
00:05:36.637  ++ export SPDK_TEST_BLOCKDEV
00:05:36.637  ++ : 0
00:05:36.637  ++ export SPDK_TEST_IOAT
00:05:36.637  ++ : 0
00:05:36.637  ++ export SPDK_TEST_BLOBFS
00:05:36.637  ++ : 0
00:05:36.637  ++ export SPDK_TEST_VHOST_INIT
00:05:36.637  ++ : 0
00:05:36.637  ++ export SPDK_TEST_LVOL
00:05:36.637  ++ : 0
00:05:36.637  ++ export SPDK_TEST_VBDEV_COMPRESS
00:05:36.637  ++ : 1
00:05:36.637  ++ export SPDK_RUN_ASAN
00:05:36.637  ++ : 1
00:05:36.637  ++ export SPDK_RUN_UBSAN
00:05:36.637  ++ : /home/vagrant/spdk_repo/dpdk/build
00:05:36.637  ++ export SPDK_RUN_EXTERNAL_DPDK
00:05:36.637  ++ : 0
00:05:36.637  ++ export SPDK_RUN_NON_ROOT
00:05:36.637  ++ : 0
00:05:36.637  ++ export SPDK_TEST_CRYPTO
00:05:36.637  ++ : 0
00:05:36.637  ++ export SPDK_TEST_FTL
00:05:36.637  ++ : 0
00:05:36.637  ++ export SPDK_TEST_OCF
00:05:36.637  ++ : 0
00:05:36.637  ++ export SPDK_TEST_VMD
00:05:36.637  ++ : 0
00:05:36.637  ++ export SPDK_TEST_OPAL
00:05:36.637  ++ : v22.11.4
00:05:36.637  ++ export SPDK_TEST_NATIVE_DPDK
00:05:36.637  ++ : true
00:05:36.637  ++ export SPDK_AUTOTEST_X
00:05:36.637  ++ : 1
00:05:36.637  ++ export SPDK_TEST_RAID5
00:05:36.637  ++ : 0
00:05:36.637  ++ export SPDK_TEST_URING
00:05:36.637  ++ : 0
00:05:36.637  ++ export SPDK_TEST_USDT
00:05:36.637  ++ : 0
00:05:36.637  ++ export SPDK_TEST_USE_IGB_UIO
00:05:36.637  ++ : 0
00:05:36.637  ++ export SPDK_TEST_SCHEDULER
00:05:36.637  ++ : 0
00:05:36.637  ++ export SPDK_TEST_SCANBUILD
00:05:36.637  ++ :
00:05:36.637  ++ export SPDK_TEST_NVMF_NICS
00:05:36.637  ++ : 0
00:05:36.637  ++ export SPDK_TEST_SMA
00:05:36.637  ++ : 0
00:05:36.637  ++ export SPDK_TEST_DAOS
00:05:36.637  ++ : 0
00:05:36.637  ++ export SPDK_TEST_XNVME
00:05:36.637  ++ : 0
00:05:36.637  ++ export SPDK_TEST_ACCEL_DSA
00:05:36.637  ++ : 0
00:05:36.637  ++ export SPDK_TEST_ACCEL_IAA
00:05:36.637  ++ : 0
00:05:36.637  ++ export SPDK_TEST_ACCEL_IOAT
00:05:36.637  ++ :
00:05:36.637  ++ export SPDK_TEST_FUZZER_TARGET
00:05:36.637  ++ : 0
00:05:36.637  ++ export SPDK_TEST_NVMF_MDNS
00:05:36.637  ++ : 0
00:05:36.637  ++ export SPDK_JSONRPC_GO_CLIENT
00:05:36.637  ++ export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib
00:05:36.637  ++ SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib
00:05:36.637  ++ export DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib
00:05:36.637  ++ DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib
00:05:36.637  ++ export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib
00:05:36.637  ++ VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib
00:05:36.637  ++ export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib
00:05:36.637  ++ LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib
00:05:36.637  ++ export PCI_BLOCK_SYNC_ON_RESET=yes
00:05:36.637  ++ PCI_BLOCK_SYNC_ON_RESET=yes
00:05:36.637  ++ export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python
00:05:36.637  ++ PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python
00:05:36.637  ++ export PYTHONDONTWRITEBYTECODE=1
00:05:36.637  ++ PYTHONDONTWRITEBYTECODE=1
00:05:36.637  ++ export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0
00:05:36.637  ++ ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0
00:05:36.637  ++ export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134
00:05:36.637  ++ UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134
00:05:36.637  ++ asan_suppression_file=/var/tmp/asan_suppression_file
00:05:36.637  ++ rm -rf /var/tmp/asan_suppression_file
00:05:36.897  ++ cat
00:05:36.897  ++ echo leak:libfuse3.so
00:05:36.897  ++ export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file
00:05:36.897  ++ LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file
00:05:36.897  ++ export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock
00:05:36.897  ++ DEFAULT_RPC_ADDR=/var/tmp/spdk.sock
00:05:36.897  ++ '[' -z /var/spdk/dependencies ']'
00:05:36.897  ++ export DEPENDENCY_DIR
00:05:36.897  ++ export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin
00:05:36.897  ++ SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin
00:05:36.897  ++ export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples
00:05:36.897  ++ SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples
00:05:36.897  ++ export QEMU_BIN=
00:05:36.897  ++ QEMU_BIN=
00:05:36.897  ++ export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64'
00:05:36.897  ++ VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64'
00:05:36.897  ++ export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer
00:05:36.897  ++ AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer
00:05:36.897  ++ export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:05:36.897  ++ UNBIND_ENTIRE_IOMMU_GROUP=yes
00:05:36.897  ++ _LCOV_MAIN=0
00:05:36.897  ++ _LCOV_LLVM=1
00:05:36.897  ++ _LCOV=
00:05:36.897  ++ [[ '' == *clang* ]]
00:05:36.897  ++ [[ 0 -eq 1 ]]
00:05:36.897  ++ _lcov_opt[_LCOV_LLVM]='--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh'
00:05:36.897  ++ _lcov_opt[_LCOV_MAIN]=
00:05:36.897  ++ lcov_opt=
00:05:36.897  ++ '[' 0 -eq 0 ']'
00:05:36.897  ++ export valgrind=
00:05:36.897  ++ valgrind=
00:05:36.897  +++ uname -s
00:05:36.897  ++ '[' Linux = Linux ']'
00:05:36.897  ++ HUGEMEM=4096
00:05:36.897  ++ export CLEAR_HUGE=yes
00:05:36.897  ++ CLEAR_HUGE=yes
00:05:36.897  ++ [[ 0 -eq 1 ]]
00:05:36.897  ++ [[ 0 -eq 1 ]]
00:05:36.897  ++ MAKE=make
00:05:36.897  +++ nproc
00:05:36.897  ++ MAKEFLAGS=-j10
00:05:36.897  ++ export HUGEMEM=4096
00:05:36.897  ++ HUGEMEM=4096
00:05:36.897  ++ '[' -z /home/vagrant/spdk_repo/spdk/../output ']'
00:05:36.897  ++ NO_HUGE=()
00:05:36.897  ++ TEST_MODE=
00:05:36.897  ++ [[ -z '' ]]
00:05:36.897  ++ PYTHONPATH+=:/home/vagrant/spdk_repo/spdk/test/rpc_plugins
00:05:36.897  ++ exec
00:05:36.897  ++ PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins
00:05:36.897  ++ /home/vagrant/spdk_repo/spdk/scripts/rpc.py --server
00:05:36.897  ++ set_test_storage 2147483648
00:05:36.897  ++ [[ -v testdir ]]
00:05:36.897  ++ local requested_size=2147483648
00:05:36.897  ++ local mount target_dir
00:05:36.897  ++ local -A mounts fss sizes avails uses
00:05:36.897  ++ local source fs size avail mount use
00:05:36.897  ++ local storage_fallback storage_candidates
00:05:36.897  +++ mktemp -udt spdk.XXXXXX
00:05:36.897  ++ storage_fallback=/tmp/spdk.cL452y
00:05:36.897  ++ storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback")
00:05:36.897  ++ [[ -n '' ]]
00:05:36.897  ++ [[ -n '' ]]
00:05:36.897  ++ mkdir -p /home/vagrant/spdk_repo/spdk/test/unit /tmp/spdk.cL452y/tests/unit /tmp/spdk.cL452y
00:05:36.897  ++ requested_size=2214592512
00:05:36.897  ++ read -r source fs size use avail _ mount
00:05:36.897  +++ df -T
00:05:36.897  +++ grep -v Filesystem
00:05:36.897  ++ mounts["$mount"]=tmpfs
00:05:36.897  ++ fss["$mount"]=tmpfs
00:05:36.897  ++ avails["$mount"]=1252601856
00:05:36.897  ++ sizes["$mount"]=1253683200
00:05:36.897  ++ uses["$mount"]=1081344
00:05:36.897  ++ read -r source fs size use avail _ mount
00:05:36.897  ++ mounts["$mount"]=/dev/vda1
00:05:36.897  ++ fss["$mount"]=ext4
00:05:36.897  ++ avails["$mount"]=9643610112
00:05:36.897  ++ sizes["$mount"]=20616794112
00:05:36.897  ++ uses["$mount"]=10956406784
00:05:36.897  ++ read -r source fs size use avail _ mount
00:05:36.897  ++ mounts["$mount"]=tmpfs
00:05:36.897  ++ fss["$mount"]=tmpfs
00:05:36.897  ++ avails["$mount"]=6268399616
00:05:36.897  ++ sizes["$mount"]=6268399616
00:05:36.897  ++ uses["$mount"]=0
00:05:36.897  ++ read -r source fs size use avail _ mount
00:05:36.897  ++ mounts["$mount"]=tmpfs
00:05:36.897  ++ fss["$mount"]=tmpfs
00:05:36.897  ++ avails["$mount"]=5242880
00:05:36.897  ++ sizes["$mount"]=5242880
00:05:36.897  ++ uses["$mount"]=0
00:05:36.897  ++ read -r source fs size use avail _ mount
00:05:36.897  ++ mounts["$mount"]=/dev/vda15
00:05:36.897  ++ fss["$mount"]=vfat
00:05:36.897  ++ avails["$mount"]=103061504
00:05:36.897  ++ sizes["$mount"]=109395968
00:05:36.897  ++ uses["$mount"]=6334464
00:05:36.897  ++ read -r source fs size use avail _ mount
00:05:36.897  ++ mounts["$mount"]=tmpfs
00:05:36.897  ++ fss["$mount"]=tmpfs
00:05:36.897  ++ avails["$mount"]=1253675008
00:05:36.897  ++ sizes["$mount"]=1253679104
00:05:36.897  ++ uses["$mount"]=4096
00:05:36.897  ++ read -r source fs size use avail _ mount
00:05:36.897  ++ mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/ubuntu22-vg-autotest/ubuntu2204-libvirt/output
00:05:36.897  ++ fss["$mount"]=fuse.sshfs
00:05:36.897  ++ avails["$mount"]=94995369984
00:05:36.897  ++ sizes["$mount"]=105088212992
00:05:36.897  ++ uses["$mount"]=4707409920
00:05:36.897  ++ read -r source fs size use avail _ mount
00:05:36.897  ++ printf '* Looking for test storage...\n'
00:05:36.897  * Looking for test storage...
00:05:36.897  ++ local target_space new_size
00:05:36.897  ++ for target_dir in "${storage_candidates[@]}"
00:05:36.897  +++ df /home/vagrant/spdk_repo/spdk/test/unit
00:05:36.897  +++ awk '$1 !~ /Filesystem/{print $6}'
00:05:36.897  ++ mount=/
00:05:36.897  ++ target_space=9643610112
00:05:36.897  ++ (( target_space == 0 || target_space < requested_size ))
00:05:36.897  ++ (( target_space >= requested_size ))
00:05:36.897  ++ [[ ext4 == tmpfs ]]
00:05:36.897  ++ [[ ext4 == ramfs ]]
00:05:36.897  ++ [[ / == / ]]
00:05:36.897  ++ new_size=13170999296
00:05:36.897  ++ (( new_size * 100 / sizes[/] > 95 ))
00:05:36.897  ++ export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/unit
00:05:36.897  ++ SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/unit
00:05:36.897  ++ printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/unit
00:05:36.897  * Found test storage at /home/vagrant/spdk_repo/spdk/test/unit
00:05:36.897  ++ return 0
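set_test_storage, replayed above, reduces to: bump the 2 GiB request by a 64 MiB margin, walk the candidate directories, and export the first one whose backing filesystem has enough free space. A compressed, self-contained sketch (candidate paths are illustrative):

    requested=$(( 2147483648 + 64 * 1024 * 1024 ))      # 2 GiB + margin = 2214592512
    for dir in "$PWD/tests" /tmp/spdk-tests; do
        mkdir -p "$dir"
        avail=$(df --output=avail -B1 "$dir" | tail -1)
        if (( avail >= requested )); then
            export SPDK_TEST_STORAGE=$dir
            printf '* Found test storage at %s\n' "$dir"
            break
        fi
    done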
00:05:36.897  ++ set -o errtrace
00:05:36.897  ++ shopt -s extdebug
00:05:36.897  ++ trap 'trap - ERR; print_backtrace >&2' ERR
00:05:36.897  ++ PS4=' \t	-- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ '
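That PS4 assignment is what produces the "16:49:29 -- file@line -- $" prefix on every xtrace line below; a simplified demo of the mechanism (bash decodes \t and expands the variables each time a command is traced):

    PS4=' \t -- ${BASH_SOURCE##*/}@${LINENO} -- \$ '
    set -x
    echo hello    # traces as:  16:49:29 -- demo.sh@3 -- $ echo hello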
00:05:36.897    16:49:29	-- common/autotest_common.sh@1682 -- # true
00:05:36.897    16:49:29	-- common/autotest_common.sh@1684 -- # xtrace_fd
00:05:36.898    16:49:29	-- common/autotest_common.sh@25 -- # [[ -n '' ]]
00:05:36.898    16:49:29	-- common/autotest_common.sh@29 -- # exec
00:05:36.898    16:49:29	-- common/autotest_common.sh@31 -- # xtrace_restore
00:05:36.898    16:49:29	-- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]'
00:05:36.898    16:49:29	-- common/autotest_common.sh@17 -- # (( 0 == 0 ))
00:05:36.898    16:49:29	-- common/autotest_common.sh@18 -- # set -x
00:05:36.898    16:49:29	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:05:36.898     16:49:29	-- common/autotest_common.sh@1690 -- # lcov --version
00:05:36.898     16:49:29	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:05:36.898    16:49:29	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:05:36.898    16:49:29	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:05:36.898    16:49:29	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:05:36.898    16:49:29	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:05:36.898    16:49:29	-- scripts/common.sh@335 -- # IFS=.-:
00:05:36.898    16:49:29	-- scripts/common.sh@335 -- # read -ra ver1
00:05:36.898    16:49:29	-- scripts/common.sh@336 -- # IFS=.-:
00:05:36.898    16:49:29	-- scripts/common.sh@336 -- # read -ra ver2
00:05:36.898    16:49:29	-- scripts/common.sh@337 -- # local 'op=<'
00:05:36.898    16:49:29	-- scripts/common.sh@339 -- # ver1_l=2
00:05:36.898    16:49:29	-- scripts/common.sh@340 -- # ver2_l=1
00:05:36.898    16:49:29	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:05:36.898    16:49:29	-- scripts/common.sh@343 -- # case "$op" in
00:05:36.898    16:49:29	-- scripts/common.sh@344 -- # : 1
00:05:36.898    16:49:29	-- scripts/common.sh@363 -- # (( v = 0 ))
00:05:36.898    16:49:29	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:05:36.898     16:49:29	-- scripts/common.sh@364 -- # decimal 1
00:05:36.898     16:49:29	-- scripts/common.sh@352 -- # local d=1
00:05:36.898     16:49:29	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:05:36.898     16:49:29	-- scripts/common.sh@354 -- # echo 1
00:05:36.898    16:49:29	-- scripts/common.sh@364 -- # ver1[v]=1
00:05:36.898     16:49:29	-- scripts/common.sh@365 -- # decimal 2
00:05:36.898     16:49:29	-- scripts/common.sh@352 -- # local d=2
00:05:36.898     16:49:29	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:05:36.898     16:49:29	-- scripts/common.sh@354 -- # echo 2
00:05:36.898    16:49:29	-- scripts/common.sh@365 -- # ver2[v]=2
00:05:36.898    16:49:29	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:05:36.898    16:49:29	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:05:36.898    16:49:29	-- scripts/common.sh@367 -- # return 0
00:05:36.898    16:49:29	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:05:36.898    16:49:29	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:05:36.898  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:36.898  		--rc genhtml_branch_coverage=1
00:05:36.898  		--rc genhtml_function_coverage=1
00:05:36.898  		--rc genhtml_legend=1
00:05:36.898  		--rc geninfo_all_blocks=1
00:05:36.898  		--rc geninfo_unexecuted_blocks=1
00:05:36.898  		
00:05:36.898  		'
00:05:36.898    16:49:29	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:05:36.898  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:36.898  		--rc genhtml_branch_coverage=1
00:05:36.898  		--rc genhtml_function_coverage=1
00:05:36.898  		--rc genhtml_legend=1
00:05:36.898  		--rc geninfo_all_blocks=1
00:05:36.898  		--rc geninfo_unexecuted_blocks=1
00:05:36.898  		
00:05:36.898  		'
00:05:36.898    16:49:29	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:05:36.898  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:36.898  		--rc genhtml_branch_coverage=1
00:05:36.898  		--rc genhtml_function_coverage=1
00:05:36.898  		--rc genhtml_legend=1
00:05:36.898  		--rc geninfo_all_blocks=1
00:05:36.898  		--rc geninfo_unexecuted_blocks=1
00:05:36.898  		
00:05:36.898  		'
00:05:36.898    16:49:29	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:05:36.898  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:36.898  		--rc genhtml_branch_coverage=1
00:05:36.898  		--rc genhtml_function_coverage=1
00:05:36.898  		--rc genhtml_legend=1
00:05:36.898  		--rc geninfo_all_blocks=1
00:05:36.898  		--rc geninfo_unexecuted_blocks=1
00:05:36.898  		
00:05:36.898  		'
00:05:36.898   16:49:29	-- unit/unittest.sh@17 -- # cd /home/vagrant/spdk_repo/spdk
00:05:36.898   16:49:29	-- unit/unittest.sh@151 -- # '[' 0 -eq 1 ']'
00:05:36.898   16:49:29	-- unit/unittest.sh@158 -- # '[' -z x ']'
00:05:36.898   16:49:29	-- unit/unittest.sh@165 -- # '[' 0 -eq 1 ']'
00:05:36.898   16:49:29	-- unit/unittest.sh@174 -- # [[ y == y ]]
00:05:36.898   16:49:29	-- unit/unittest.sh@175 -- # UT_COVERAGE=/home/vagrant/spdk_repo/spdk/../output/ut_coverage
00:05:36.898   16:49:29	-- unit/unittest.sh@176 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/ut_coverage
00:05:36.898   16:49:29	-- unit/unittest.sh@178 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -d . -t Baseline -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_base.info
00:05:55.036  /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found
00:05:55.036  geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno
00:05:55.036  /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found
00:05:55.036  geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno
00:05:55.036  /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found
00:05:55.036  geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno
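The lcov invocation above (-i, -t Baseline) only captures a zero-count baseline from the compiled .gcno files, so the "no functions found" warnings for sources with nothing instrumented are typically harmless. After the unit tests run, the usual follow-up is a second capture merged against this baseline (output names here are illustrative):

    lcov -q -c --no-external -d . -t Tests -o ut_cov_test.info
    lcov -a ut_cov_base.info -a ut_cov_test.info -o ut_cov_total.info
    genhtml ut_cov_total.info -o ut_coverage_html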
00:06:27.112    16:50:15	-- unit/unittest.sh@182 -- # uname -m
00:06:27.112   16:50:15	-- unit/unittest.sh@182 -- # '[' x86_64 = aarch64 ']'
00:06:27.112   16:50:15	-- unit/unittest.sh@186 -- # run_test unittest_pci_event /home/vagrant/spdk_repo/spdk/test/unit/lib/env_dpdk/pci_event.c/pci_event_ut
00:06:27.112   16:50:15	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:06:27.112   16:50:15	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:06:27.112   16:50:15	-- common/autotest_common.sh@10 -- # set +x
00:06:27.112  ************************************
00:06:27.112  START TEST unittest_pci_event
00:06:27.112  ************************************
00:06:27.113   16:50:15	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/env_dpdk/pci_event.c/pci_event_ut
00:06:27.113  
00:06:27.113  
00:06:27.113       CUnit - A unit testing framework for C - Version 2.1-3
00:06:27.113       http://cunit.sourceforge.net/
00:06:27.113  
00:06:27.113  
00:06:27.113  Suite: pci_event
00:06:27.113    Test: test_pci_parse_event ...[2024-11-19 16:50:15.962132] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci_event.c: 162:parse_subsystem_event: *ERROR*: Invalid format for PCI device BDF: 0000
00:06:27.113  [2024-11-19 16:50:15.963224] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci_event.c: 185:parse_subsystem_event: *ERROR*: Invalid format for PCI device BDF: 000000
00:06:27.113  passed
00:06:27.113  
00:06:27.113  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:06:27.113                suites      1      1    n/a      0        0
00:06:27.113                 tests      1      1      1      0        0
00:06:27.113               asserts     15     15     15      0      n/a
00:06:27.113  
00:06:27.113  Elapsed time =    0.001 seconds
00:06:27.113  
00:06:27.113  real	0m0.044s
00:06:27.113  user	0m0.012s
00:06:27.113  sys	0m0.027s
00:06:27.113   16:50:15	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:06:27.113   16:50:15	-- common/autotest_common.sh@10 -- # set +x
00:06:27.113  ************************************
00:06:27.113  END TEST unittest_pci_event
00:06:27.113  ************************************
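The START/END banners and the real/user/sys triple come from the run_test wrapper, which appears to time each unit-test binary between the two markers; a minimal stand-in (not SPDK's exact helper):

    run_test() {
        local name=$1; shift
        echo '************************************'
        echo "START TEST $name"
        echo '************************************'
        time "$@"    # prints the real/user/sys lines seen above
        local rc=$?
        echo '************************************'
        echo "END TEST $name"
        echo '************************************'
        return $rc
    }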
00:06:27.113   16:50:16	-- unit/unittest.sh@187 -- # run_test unittest_include /home/vagrant/spdk_repo/spdk/test/unit/include/spdk/histogram_data.h/histogram_ut
00:06:27.113   16:50:16	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:06:27.113   16:50:16	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:06:27.113   16:50:16	-- common/autotest_common.sh@10 -- # set +x
00:06:27.113  ************************************
00:06:27.113  START TEST unittest_include
00:06:27.113  ************************************
00:06:27.113   16:50:16	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/unit/include/spdk/histogram_data.h/histogram_ut
00:06:27.113  
00:06:27.113  
00:06:27.113       CUnit - A unit testing framework for C - Version 2.1-3
00:06:27.113       http://cunit.sourceforge.net/
00:06:27.113  
00:06:27.113  
00:06:27.113  Suite: histogram
00:06:27.113    Test: histogram_test ...passed
00:06:27.113    Test: histogram_merge ...passed
00:06:27.113  
00:06:27.113  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:06:27.113                suites      1      1    n/a      0        0
00:06:27.113                 tests      2      2      2      0        0
00:06:27.113               asserts     50     50     50      0      n/a
00:06:27.113  
00:06:27.113  Elapsed time =    0.007 seconds
00:06:27.113  
00:06:27.113  real	0m0.041s
00:06:27.113  user	0m0.021s
00:06:27.113  sys	0m0.020s
00:06:27.113   16:50:16	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:06:27.113   16:50:16	-- common/autotest_common.sh@10 -- # set +x
00:06:27.113  ************************************
00:06:27.113  END TEST unittest_include
00:06:27.113  ************************************
00:06:27.113   16:50:16	-- unit/unittest.sh@188 -- # run_test unittest_bdev unittest_bdev
00:06:27.113   16:50:16	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:06:27.113   16:50:16	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:06:27.113   16:50:16	-- common/autotest_common.sh@10 -- # set +x
00:06:27.113  ************************************
00:06:27.113  START TEST unittest_bdev
00:06:27.113  ************************************
00:06:27.113   16:50:16	-- common/autotest_common.sh@1114 -- # unittest_bdev
00:06:27.113   16:50:16	-- unit/unittest.sh@20 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/bdev.c/bdev_ut
00:06:27.113  
00:06:27.113  
00:06:27.113       CUnit - A unit testing framework for C - Version 2.1-3
00:06:27.113       http://cunit.sourceforge.net/
00:06:27.113  
00:06:27.113  
00:06:27.113  Suite: bdev
00:06:27.113    Test: bytes_to_blocks_test ...passed
00:06:27.113    Test: num_blocks_test ...passed
00:06:27.113    Test: io_valid_test ...passed
00:06:27.113    Test: open_write_test ...[2024-11-19 16:50:16.290759] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7940:bdev_open: *ERROR*: bdev bdev1 already claimed: type exclusive_write by module bdev_ut
00:06:27.113  [2024-11-19 16:50:16.291367] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7940:bdev_open: *ERROR*: bdev bdev4 already claimed: type exclusive_write by module bdev_ut
00:06:27.113  [2024-11-19 16:50:16.291657] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7940:bdev_open: *ERROR*: bdev bdev5 already claimed: type exclusive_write by module bdev_ut
00:06:27.113  passed
00:06:27.113    Test: claim_test ...passed
00:06:27.113    Test: alias_add_del_test ...[2024-11-19 16:50:16.420860] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4553:bdev_name_add: *ERROR*: Bdev name bdev0 already exists
00:06:27.113  [2024-11-19 16:50:16.421150] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4583:spdk_bdev_alias_add: *ERROR*: Empty alias passed
00:06:27.113  [2024-11-19 16:50:16.421234] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4553:bdev_name_add: *ERROR*: Bdev name proper alias 0 already exists
00:06:27.113  passed
00:06:27.113    Test: get_device_stat_test ...passed
00:06:27.113    Test: bdev_io_types_test ...passed
00:06:27.113    Test: bdev_io_wait_test ...passed
00:06:27.113    Test: bdev_io_spans_split_test ...passed
00:06:27.113    Test: bdev_io_boundary_split_test ...passed
00:06:27.113    Test: bdev_io_max_size_and_segment_split_test ...[2024-11-19 16:50:16.635926] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:3185:_bdev_rw_split: *ERROR*: The first child io was less than a block size
00:06:27.113  passed
00:06:27.113    Test: bdev_io_mix_split_test ...passed
00:06:27.113    Test: bdev_io_split_with_io_wait ...passed
00:06:27.113    Test: bdev_io_write_unit_split_test ...[2024-11-19 16:50:16.821718] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2742:bdev_io_do_submit: *ERROR*: IO num_blocks 31 does not match the write_unit_size 32
00:06:27.113  [2024-11-19 16:50:16.822070] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2742:bdev_io_do_submit: *ERROR*: IO num_blocks 31 does not match the write_unit_size 32
00:06:27.113  [2024-11-19 16:50:16.822144] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2742:bdev_io_do_submit: *ERROR*: IO num_blocks 1 does not match the write_unit_size 32
00:06:27.113  [2024-11-19 16:50:16.822266] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2742:bdev_io_do_submit: *ERROR*: IO num_blocks 32 does not match the write_unit_size 64
00:06:27.113  passed
00:06:27.113    Test: bdev_io_alignment_with_boundary ...passed
00:06:27.113    Test: bdev_io_alignment ...passed
00:06:27.113    Test: bdev_histograms ...passed
00:06:27.113    Test: bdev_write_zeroes ...passed
00:06:27.113    Test: bdev_compare_and_write ...passed
00:06:27.113    Test: bdev_compare ...passed
00:06:27.113    Test: bdev_compare_emulated ...passed
00:06:27.113    Test: bdev_zcopy_write ...passed
00:06:27.113    Test: bdev_zcopy_read ...passed
00:06:27.113    Test: bdev_open_while_hotremove ...passed
00:06:27.113    Test: bdev_close_while_hotremove ...passed
00:06:27.113    Test: bdev_open_ext_test ...[2024-11-19 16:50:17.480975] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8046:spdk_bdev_open_ext: *ERROR*: Missing event callback function
00:06:27.113  passed
00:06:27.113    Test: bdev_open_ext_unregister ...[2024-11-19 16:50:17.481409] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8046:spdk_bdev_open_ext: *ERROR*: Missing event callback function
00:06:27.113  passed
00:06:27.113    Test: bdev_set_io_timeout ...passed
00:06:27.113    Test: bdev_set_qd_sampling ...passed
00:06:27.113    Test: lba_range_overlap ...passed
00:06:27.113    Test: lock_lba_range_check_ranges ...passed
00:06:27.113    Test: lock_lba_range_with_io_outstanding ...passed
00:06:27.113    Test: lock_lba_range_overlapped ...passed
00:06:27.113    Test: bdev_quiesce ...[2024-11-19 16:50:17.797726] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:9969:_spdk_bdev_quiesce: *ERROR*: The range to unquiesce was not found.
00:06:27.113  passed
00:06:27.113    Test: bdev_io_abort ...passed
00:06:27.113    Test: bdev_unmap ...passed
00:06:27.113    Test: bdev_write_zeroes_split_test ...passed
00:06:27.113    Test: bdev_set_options_test ...[2024-11-19 16:50:17.982158] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c: 485:spdk_bdev_set_opts: *ERROR*: opts_size inside opts cannot be zero value
00:06:27.113  passed
00:06:27.113    Test: bdev_get_memory_domains ...passed
00:06:27.113    Test: bdev_io_ext ...passed
00:06:27.113    Test: bdev_io_ext_no_opts ...passed
00:06:27.113    Test: bdev_io_ext_invalid_opts ...passed
00:06:27.113    Test: bdev_io_ext_split ...passed
00:06:27.113    Test: bdev_io_ext_bounce_buffer ...passed
00:06:27.113    Test: bdev_register_uuid_alias ...[2024-11-19 16:50:18.274746] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4553:bdev_name_add: *ERROR*: Bdev name 7fc46a30-622f-42d9-90f6-a2bcbcfb28b8 already exists
00:06:27.113  [2024-11-19 16:50:18.274986] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7603:bdev_register: *ERROR*: Unable to add uuid:7fc46a30-622f-42d9-90f6-a2bcbcfb28b8 alias for bdev bdev0
00:06:27.113  passed
00:06:27.113    Test: bdev_unregister_by_name ...[2024-11-19 16:50:18.303721] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7836:spdk_bdev_unregister_by_name: *ERROR*: Failed to open bdev with name: bdev1
00:06:27.113  [2024-11-19 16:50:18.303853] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7844:spdk_bdev_unregister_by_name: *ERROR*: Bdev bdev was not registered by the specified module.
00:06:27.113  passed
00:06:27.113    Test: for_each_bdev_test ...passed
00:06:27.113    Test: bdev_seek_test ...passed
00:06:27.113    Test: bdev_copy ...passed
00:06:27.113    Test: bdev_copy_split_test ...passed
00:06:27.113    Test: examine_locks ...passed
00:06:27.113    Test: claim_v2_rwo ...[2024-11-19 16:50:18.455205] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7940:bdev_open: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut
00:06:27.113  [2024-11-19 16:50:18.455390] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8570:claim_verify_rwo: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut
00:06:27.113  [2024-11-19 16:50:18.455488] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut
00:06:27.114  [2024-11-19 16:50:18.455619] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut
00:06:27.114  [2024-11-19 16:50:18.455723] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8407:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut
00:06:27.114  [2024-11-19 16:50:18.455837] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8565:claim_verify_rwo: *ERROR*: bdev0: key option not supported with read-write-once claims
00:06:27.114  passed
00:06:27.114    Test: claim_v2_rom ...[2024-11-19 16:50:18.456081] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7940:bdev_open: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut
00:06:27.114  [2024-11-19 16:50:18.456211] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut
00:06:27.114  [2024-11-19 16:50:18.456292] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut
00:06:27.114  [2024-11-19 16:50:18.456447] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8407:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut
00:06:27.114  [2024-11-19 16:50:18.456520] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8608:claim_verify_rom: *ERROR*: bdev0: key option not supported with read-only-may claims
00:06:27.114  [2024-11-19 16:50:18.456602] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8603:claim_verify_rom: *ERROR*: bdev0: Cannot obtain read-only-many claim with writable descriptor
00:06:27.114  passed
00:06:27.114    Test: claim_v2_rwm ...[2024-11-19 16:50:18.456905] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8638:claim_verify_rwm: *ERROR*: bdev0: shared_claim_key option required with read-write-may claims
00:06:27.114  [2024-11-19 16:50:18.457073] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7940:bdev_open: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut
00:06:27.114  [2024-11-19 16:50:18.457178] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut
00:06:27.114  [2024-11-19 16:50:18.457279] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut
00:06:27.114  [2024-11-19 16:50:18.457330] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8407:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut
00:06:27.114  [2024-11-19 16:50:18.457407] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8658:claim_verify_rwm: *ERROR*: bdev bdev0 already claimed with another key: type read_many_write_many by module bdev_ut
00:06:27.114  [2024-11-19 16:50:18.457476] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8638:claim_verify_rwm: *ERROR*: bdev0: shared_claim_key option required with read-write-may claims
00:06:27.114  passed
00:06:27.114    Test: claim_v2_existing_writer ...[2024-11-19 16:50:18.457755] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8603:claim_verify_rom: *ERROR*: bdev0: Cannot obtain read-only-many claim with writable descriptor
00:06:27.114  [2024-11-19 16:50:18.457886] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8603:claim_verify_rom: *ERROR*: bdev0: Cannot obtain read-only-many claim with writable descriptor
00:06:27.114  passed
00:06:27.114    Test: claim_v2_existing_v1 ...[2024-11-19 16:50:18.458190] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type exclusive_write by module bdev_ut
00:06:27.114  [2024-11-19 16:50:18.458285] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type exclusive_write by module bdev_ut
00:06:27.114  [2024-11-19 16:50:18.458362] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type exclusive_write by module bdev_ut
00:06:27.114  passed
00:06:27.114    Test: claim_v1_existing_v2 ...[2024-11-19 16:50:18.458598] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8407:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut
00:06:27.114  [2024-11-19 16:50:18.458671] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8407:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut
00:06:27.114  [2024-11-19 16:50:18.458726] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8407:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut
00:06:27.114  passed
00:06:27.114    Test: examine_claimed ...[2024-11-19 16:50:18.459104] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module vbdev_ut_examine1
00:06:27.114  passed
00:06:27.114  
00:06:27.114  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:06:27.114                suites      1      1    n/a      0        0
00:06:27.114                 tests     59     59     59      0        0
00:06:27.114               asserts   4599   4599   4599      0      n/a
00:06:27.114  
00:06:27.114  Elapsed time =    2.274 seconds
00:06:27.114   16:50:18	-- unit/unittest.sh@21 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/nvme/bdev_nvme.c/bdev_nvme_ut
00:06:27.114  
00:06:27.114  
00:06:27.114       CUnit - A unit testing framework for C - Version 2.1-3
00:06:27.114       http://cunit.sourceforge.net/
00:06:27.114  
00:06:27.114  
00:06:27.114  Suite: nvme
00:06:27.114    Test: test_create_ctrlr ...passed
00:06:27.114    Test: test_reset_ctrlr ...[2024-11-19 16:50:18.515514] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:06:27.114  passed
00:06:27.114    Test: test_race_between_reset_and_destruct_ctrlr ...passed
00:06:27.114    Test: test_failover_ctrlr ...passed
00:06:27.114    Test: test_race_between_failover_and_add_secondary_trid ...[2024-11-19 16:50:18.519944] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:06:27.114  [2024-11-19 16:50:18.520405] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:06:27.114  [2024-11-19 16:50:18.520841] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:06:27.114  passed
00:06:27.114    Test: test_pending_reset ...[2024-11-19 16:50:18.523352] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:06:27.114  [2024-11-19 16:50:18.523789] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:06:27.114  passed
00:06:27.114    Test: test_attach_ctrlr ...[2024-11-19 16:50:18.525362] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:4236:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed
00:06:27.114  passed
00:06:27.114    Test: test_aer_cb ...passed
00:06:27.114    Test: test_submit_nvme_cmd ...passed
00:06:27.114    Test: test_add_remove_trid ...passed
00:06:27.114    Test: test_abort ...[2024-11-19 16:50:18.530059] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:7227:bdev_nvme_comparev_and_writev_done: *ERROR*: Unexpected write success after compare failure.
00:06:27.114  passed
00:06:27.114    Test: test_get_io_qpair ...passed
00:06:27.114    Test: test_bdev_unregister ...passed
00:06:27.114    Test: test_compare_ns ...passed
00:06:27.114    Test: test_init_ana_log_page ...passed
00:06:27.114    Test: test_get_memory_domains ...passed
00:06:27.114    Test: test_reconnect_qpair ...[2024-11-19 16:50:18.534360] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:06:27.114  passed
00:06:27.114    Test: test_create_bdev_ctrlr ...[2024-11-19 16:50:18.535328] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5279:bdev_nvme_check_multipath: *ERROR*: cntlid 18 are duplicated.
00:06:27.114  passed
00:06:27.114    Test: test_add_multi_ns_to_bdev ...[2024-11-19 16:50:18.537092] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:4492:nvme_bdev_add_ns: *ERROR*: Namespaces are not identical.
00:06:27.114  passed
00:06:27.114    Test: test_add_multi_io_paths_to_nbdev_ch ...passed
00:06:27.114    Test: test_admin_path ...passed
00:06:27.114    Test: test_reset_bdev_ctrlr ...passed
00:06:27.114    Test: test_find_io_path ...passed
00:06:27.114    Test: test_retry_io_if_ana_state_is_updating ...passed
00:06:27.114    Test: test_retry_io_for_io_path_error ...passed
00:06:27.114    Test: test_retry_io_count ...passed
00:06:27.114    Test: test_concurrent_read_ana_log_page ...passed
00:06:27.114    Test: test_retry_io_for_ana_error ...passed
00:06:27.114    Test: test_check_io_error_resiliency_params ...[2024-11-19 16:50:18.546572] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5932:bdev_nvme_check_io_error_resiliency_params: *ERROR*: ctrlr_loss_timeout_sec can't be less than -1.
00:06:27.114  [2024-11-19 16:50:18.546749] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5936:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be 0 if ctrlr_loss_timeout_sec is not 0.
00:06:27.114  [2024-11-19 16:50:18.546921] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5945:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be 0 if ctrlr_loss_timeout_sec is not 0.
00:06:27.114  [2024-11-19 16:50:18.547075] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5948:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be more than ctrlr_loss_timeout_sec.
00:06:27.114  [2024-11-19 16:50:18.547195] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5960:bdev_nvme_check_io_error_resiliency_params: *ERROR*: Both reconnect_delay_sec and fast_io_fail_timeout_sec must be 0 if ctrlr_loss_timeout_sec is 0.
00:06:27.114  [2024-11-19 16:50:18.547348] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5960:bdev_nvme_check_io_error_resiliency_params: *ERROR*: Both reconnect_delay_sec and fast_io_fail_timeout_sec must be 0 if ctrlr_loss_timeout_sec is 0.
00:06:27.114  [2024-11-19 16:50:18.547487] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5940:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be more than fast_io-fail_timeout_sec.
00:06:27.114  [2024-11-19 16:50:18.547642] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5955:bdev_nvme_check_io_error_resiliency_params: *ERROR*: fast_io_fail_timeout_sec can't be more than ctrlr_loss_timeout_sec.
00:06:27.114  [2024-11-19 16:50:18.547773] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5952:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be more than fast_io_fail_timeout_sec.
00:06:27.114  passed
00:06:27.114    Test: test_retry_io_if_ctrlr_is_resetting ...passed
00:06:27.114    Test: test_reconnect_ctrlr ...[2024-11-19 16:50:18.549032] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:06:27.114  [2024-11-19 16:50:18.549292] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:06:27.114  [2024-11-19 16:50:18.549721] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:06:27.114  [2024-11-19 16:50:18.550015] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:06:27.114  [2024-11-19 16:50:18.550288] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:06:27.114  passed
00:06:27.114    Test: test_retry_failover_ctrlr ...[2024-11-19 16:50:18.550960] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:06:27.114  passed
00:06:27.115    Test: test_fail_path ...[2024-11-19 16:50:18.551892] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:06:27.115  [2024-11-19 16:50:18.552189] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:06:27.115  [2024-11-19 16:50:18.552441] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:06:27.115  [2024-11-19 16:50:18.552680] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:06:27.115  [2024-11-19 16:50:18.552939] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:06:27.115  passed
00:06:27.115    Test: test_nvme_ns_cmp ...passed
00:06:27.115    Test: test_ana_transition ...passed
00:06:27.115    Test: test_set_preferred_path ...passed
00:06:27.115    Test: test_find_next_io_path ...passed
00:06:27.115    Test: test_find_io_path_min_qd ...passed
00:06:27.115    Test: test_disable_auto_failback ...[2024-11-19 16:50:18.555467] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:06:27.115  passed
00:06:27.115    Test: test_set_multipath_policy ...passed
00:06:27.115    Test: test_uuid_generation ...passed
00:06:27.115    Test: test_retry_io_to_same_path ...passed
00:06:27.115    Test: test_race_between_reset_and_disconnected ...passed
00:06:27.115    Test: test_ctrlr_op_rpc ...passed
00:06:27.115    Test: test_bdev_ctrlr_op_rpc ...passed
00:06:27.115    Test: test_disable_enable_ctrlr ...[2024-11-19 16:50:18.560637] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:06:27.115  [2024-11-19 16:50:18.560914] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:06:27.115  passed
00:06:27.115    Test: test_delete_ctrlr_done ...passed
00:06:27.115    Test: test_ns_remove_during_reset ...passed
00:06:27.115  
00:06:27.115  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:06:27.115                suites      1      1    n/a      0        0
00:06:27.115                 tests     48     48     48      0        0
00:06:27.115               asserts   3553   3553   3553      0      n/a
00:06:27.115  
00:06:27.115  Elapsed time =    0.038 seconds
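test_check_io_error_resiliency_params above validates the relationships between the reconnect/failover knobs (loss >= -1; delay nonzero and <= loss when loss != 0; fail <= loss; delay <= fail). A hypothetical runtime invocation that satisfies them, assuming these long-form flags are available in this SPDK build:

    scripts/rpc.py bdev_nvme_set_options \
        --ctrlr-loss-timeout-sec 60 \
        --reconnect-delay-sec 5 \
        --fast-io-fail-timeout-sec 30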
00:06:27.115   16:50:18	-- unit/unittest.sh@22 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/bdev_raid.c/bdev_raid_ut
00:06:27.115  Test Options
00:06:27.115  blocklen = 4096, strip_size = 64, max_io_size = 1024, g_max_base_drives = 32, g_max_raids = 2
00:06:27.115  
00:06:27.115  
00:06:27.115       CUnit - A unit testing framework for C - Version 2.1-3
00:06:27.115       http://cunit.sourceforge.net/
00:06:27.115  
00:06:27.115  
00:06:27.115  Suite: raid
00:06:27.115    Test: test_create_raid ...passed
00:06:27.115    Test: test_create_raid_superblock ...passed
00:06:27.115    Test: test_delete_raid ...passed
00:06:27.115    Test: test_create_raid_invalid_args ...[2024-11-19 16:50:18.615714] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1357:_raid_bdev_create: *ERROR*: Unsupported raid level '-1'
00:06:27.115  [2024-11-19 16:50:18.616389] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1351:_raid_bdev_create: *ERROR*: Invalid strip size 1231
00:06:27.115  [2024-11-19 16:50:18.617178] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1341:_raid_bdev_create: *ERROR*: Duplicate raid bdev name found: raid1
00:06:27.115  [2024-11-19 16:50:18.617679] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:2934:raid_bdev_configure_base_bdev: *ERROR*: Unable to claim this bdev as it is already claimed
00:06:27.115  [2024-11-19 16:50:18.618913] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:2934:raid_bdev_configure_base_bdev: *ERROR*: Unable to claim this bdev as it is already claimed
00:06:27.115  passed
00:06:27.115    Test: test_delete_raid_invalid_args ...passed
00:06:27.115    Test: test_io_channel ...passed
00:06:27.115    Test: test_reset_io ...passed
00:06:27.115    Test: test_write_io ...passed
00:06:27.115    Test: test_read_io ...passed
00:06:27.115    Test: test_unmap_io ...passed
00:06:27.115    Test: test_io_failure ...[2024-11-19 16:50:19.839861] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c: 832:raid_bdev_submit_request: *ERROR*: submit request, invalid io type 0
00:06:27.115  passed
00:06:27.115    Test: test_multi_raid_no_io ...passed
00:06:27.115    Test: test_multi_raid_with_io ...passed
00:06:27.115    Test: test_io_type_supported ...passed
00:06:27.115    Test: test_raid_json_dump_info ...passed
00:06:27.115    Test: test_context_size ...passed
00:06:27.115    Test: test_raid_level_conversions ...passed
00:06:27.115    Test: test_raid_process ...passed
00:06:27.115    Test: test_raid_io_split ...passed
00:06:27.115  
00:06:27.115  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:06:27.115                suites      1      1    n/a      0        0
00:06:27.115                 tests     19     19     19      0        0
00:06:27.115               asserts 177879 177879 177879      0      n/a
00:06:27.115  
00:06:27.115  Elapsed time =    1.235 seconds
00:06:27.115   16:50:19	-- unit/unittest.sh@23 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/bdev_raid_sb.c/bdev_raid_sb_ut
00:06:27.115  
00:06:27.115  
00:06:27.115       CUnit - A unit testing framework for C - Version 2.1-3
00:06:27.115       http://cunit.sourceforge.net/
00:06:27.115  
00:06:27.115  
00:06:27.115  Suite: raid_sb
00:06:27.115    Test: test_raid_bdev_write_superblock ...passed
00:06:27.115    Test: test_raid_bdev_load_base_bdev_superblock ...passed
00:06:27.115    Test: test_raid_bdev_parse_superblock ...[2024-11-19 16:50:19.894674] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid_sb.c: 120:raid_bdev_parse_superblock: *ERROR*: Not supported superblock major version 9999 on bdev test_bdev
00:06:27.115  passed
00:06:27.115  
00:06:27.115  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:06:27.115                suites      1      1    n/a      0        0
00:06:27.115                 tests      3      3      3      0        0
00:06:27.115               asserts     32     32     32      0      n/a
00:06:27.115  
00:06:27.115  Elapsed time =    0.001 seconds
00:06:27.115   16:50:19	-- unit/unittest.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/concat.c/concat_ut
00:06:27.115  
00:06:27.115  
00:06:27.115       CUnit - A unit testing framework for C - Version 2.1-3
00:06:27.115       http://cunit.sourceforge.net/
00:06:27.115  
00:06:27.115  
00:06:27.115  Suite: concat
00:06:27.115    Test: test_concat_start ...passed
00:06:27.115    Test: test_concat_rw ...passed
00:06:27.115    Test: test_concat_null_payload ...passed
00:06:27.115  
00:06:27.115  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:06:27.115                suites      1      1    n/a      0        0
00:06:27.115                 tests      3      3      3      0        0
00:06:27.115               asserts   8097   8097   8097      0      n/a
00:06:27.115  
00:06:27.115  Elapsed time =    0.006 seconds
00:06:27.115   16:50:19	-- unit/unittest.sh@25 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/raid1.c/raid1_ut
00:06:27.374  
00:06:27.374  
00:06:27.374       CUnit - A unit testing framework for C - Version 2.1-3
00:06:27.374       http://cunit.sourceforge.net/
00:06:27.374  
00:06:27.374  
00:06:27.374  Suite: raid1
00:06:27.374    Test: test_raid1_start ...passed
00:06:27.374    Test: test_raid1_read_balancing ...passed
00:06:27.374  
00:06:27.374  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:06:27.374                suites      1      1    n/a      0        0
00:06:27.374                 tests      2      2      2      0        0
00:06:27.374               asserts   2856   2856   2856      0      n/a
00:06:27.374  
00:06:27.374  Elapsed time =    0.005 seconds
00:06:27.374   16:50:20	-- unit/unittest.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/bdev_zone.c/bdev_zone_ut
00:06:27.374  
00:06:27.374  
00:06:27.374       CUnit - A unit testing framework for C - Version 2.1-3
00:06:27.374       http://cunit.sourceforge.net/
00:06:27.374  
00:06:27.374  
00:06:27.374  Suite: zone
00:06:27.374    Test: test_zone_get_operation ...passed
00:06:27.374    Test: test_bdev_zone_get_info ...passed
00:06:27.374    Test: test_bdev_zone_management ...passed
00:06:27.374    Test: test_bdev_zone_append ...passed
00:06:27.374    Test: test_bdev_zone_append_with_md ...passed
00:06:27.374    Test: test_bdev_zone_appendv ...passed
00:06:27.374    Test: test_bdev_zone_appendv_with_md ...passed
00:06:27.374    Test: test_bdev_io_get_append_location ...passed
00:06:27.374  
00:06:27.374  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:06:27.374                suites      1      1    n/a      0        0
00:06:27.374                 tests      8      8      8      0        0
00:06:27.374               asserts     94     94     94      0      n/a
00:06:27.374  
00:06:27.374  Elapsed time =    0.001 seconds
00:06:27.374   16:50:20	-- unit/unittest.sh@27 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/gpt/gpt.c/gpt_ut
00:06:27.374  
00:06:27.374  
00:06:27.374       CUnit - A unit testing framework for C - Version 2.1-3
00:06:27.374       http://cunit.sourceforge.net/
00:06:27.374  
00:06:27.374  
00:06:27.374  Suite: gpt_parse
00:06:27.374    Test: test_parse_mbr_and_primary ...[2024-11-19 16:50:20.075046] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL
00:06:27.374  [2024-11-19 16:50:20.075640] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL
00:06:27.374  [2024-11-19 16:50:20.075768] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 165:gpt_read_header: *ERROR*: head_size=1633771873
00:06:27.374  [2024-11-19 16:50:20.075916] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 279:gpt_parse_partition_table: *ERROR*: Failed to read gpt header
00:06:27.374  [2024-11-19 16:50:20.076140] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c:  88:gpt_read_partitions: *ERROR*: Num_partition_entries=1633771873 which exceeds max=128
00:06:27.374  [2024-11-19 16:50:20.076356] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 285:gpt_parse_partition_table: *ERROR*: Failed to read gpt partitions
00:06:27.374  passed
00:06:27.374    Test: test_parse_secondary ...[2024-11-19 16:50:20.077281] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 165:gpt_read_header: *ERROR*: head_size=1633771873
00:06:27.374  [2024-11-19 16:50:20.077404] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 279:gpt_parse_partition_table: *ERROR*: Failed to read gpt header
00:06:27.374  [2024-11-19 16:50:20.077559] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c:  88:gpt_read_partitions: *ERROR*: Num_partition_entries=1633771873 which exceeds max=128
00:06:27.374  [2024-11-19 16:50:20.077734] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 285:gpt_parse_partition_table: *ERROR*: Failed to read gpt partitions
00:06:27.374  passed
00:06:27.374    Test: test_check_mbr ...[2024-11-19 16:50:20.078735] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL
00:06:27.374  [2024-11-19 16:50:20.078862] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL
00:06:27.374  passed
00:06:27.374    Test: test_read_header ...[2024-11-19 16:50:20.079280] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 165:gpt_read_header: *ERROR*: head_size=600
00:06:27.374  [2024-11-19 16:50:20.079460] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 177:gpt_read_header: *ERROR*: head crc32 does not match, provided=584158336, calculated=3316781438
00:06:27.374  [2024-11-19 16:50:20.079660] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 184:gpt_read_header: *ERROR*: signature did not match
00:06:27.374  [2024-11-19 16:50:20.079862] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 191:gpt_read_header: *ERROR*: head my_lba(7016996765293437281) != expected(1)
00:06:27.374  [2024-11-19 16:50:20.079967] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 135:gpt_lba_range_check: *ERROR*: Head's usable_lba_end(7016996765293437281) > lba_end(0)
00:06:27.374  [2024-11-19 16:50:20.080112] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 197:gpt_read_header: *ERROR*: lba range check error
00:06:27.374  passed
00:06:27.374    Test: test_read_partitions ...[2024-11-19 16:50:20.080307] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c:  88:gpt_read_partitions: *ERROR*: Num_partition_entries=256 which exceeds max=128
00:06:27.374  [2024-11-19 16:50:20.080417] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c:  95:gpt_read_partitions: *ERROR*: Partition_entry_size(0) != expected(80)
00:06:27.374  [2024-11-19 16:50:20.080570] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c:  59:gpt_get_partitions_buf: *ERROR*: Buffer size is not enough
00:06:27.374  [2024-11-19 16:50:20.080659] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 105:gpt_read_partitions: *ERROR*: Failed to get gpt partitions buf
00:06:27.374  [2024-11-19 16:50:20.081140] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 113:gpt_read_partitions: *ERROR*: GPT partition entry array crc32 did not match
00:06:27.374  passed
00:06:27.374  
00:06:27.374  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:06:27.374                suites      1      1    n/a      0        0
00:06:27.374                 tests      5      5      5      0        0
00:06:27.374               asserts     33     33     33      0      n/a
00:06:27.374  
00:06:27.374  Elapsed time =    0.005 seconds
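The gpt_parse checks above (signature, header CRC, my_lba, usable-LBA range, partition-entry CRC) follow the on-disk GPT layout, whose primary header lives at LBA 1 and starts with the ASCII signature "EFI PART". A read-only spot check on a 512-byte-sector disk (device path illustrative):

    dd if=/dev/vda bs=512 skip=1 count=1 2>/dev/null | head -c 8    # expect: EFI PART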
00:06:27.374   16:50:20	-- unit/unittest.sh@28 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/part.c/part_ut
00:06:27.374  
00:06:27.374  
00:06:27.374       CUnit - A unit testing framework for C - Version 2.1-3
00:06:27.374       http://cunit.sourceforge.net/
00:06:27.374  
00:06:27.374  
00:06:27.374  Suite: bdev_part
00:06:27.374    Test: part_test ...[2024-11-19 16:50:20.129436] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4553:bdev_name_add: *ERROR*: Bdev name test1 already exists
00:06:27.374  passed
00:06:27.374    Test: part_free_test ...passed
00:06:27.374    Test: part_get_io_channel_test ...passed
00:06:27.374    Test: part_construct_ext ...passed
00:06:27.374  
00:06:27.374  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:06:27.374                suites      1      1    n/a      0        0
00:06:27.374                 tests      4      4      4      0        0
00:06:27.374               asserts     48     48     48      0      n/a
00:06:27.374  
00:06:27.374  Elapsed time =    0.069 seconds
00:06:27.374   16:50:20	-- unit/unittest.sh@29 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/scsi_nvme.c/scsi_nvme_ut
00:06:27.633  
00:06:27.633  
00:06:27.633       CUnit - A unit testing framework for C - Version 2.1-3
00:06:27.633       http://cunit.sourceforge.net/
00:06:27.633  
00:06:27.633  
00:06:27.633  Suite: scsi_nvme_suite
00:06:27.633    Test: scsi_nvme_translate_test ...passed
00:06:27.633  
00:06:27.633  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:06:27.633                suites      1      1    n/a      0        0
00:06:27.633                 tests      1      1      1      0        0
00:06:27.633               asserts    104    104    104      0      n/a
00:06:27.633  
00:06:27.633  Elapsed time =    0.000 seconds
00:06:27.633   16:50:20	-- unit/unittest.sh@30 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/vbdev_lvol.c/vbdev_lvol_ut
00:06:27.633  
00:06:27.633  
00:06:27.633       CUnit - A unit testing framework for C - Version 2.1-3
00:06:27.633       http://cunit.sourceforge.net/
00:06:27.633  
00:06:27.633  
00:06:27.633  Suite: lvol
00:06:27.633    Test: ut_lvs_init ...[2024-11-19 16:50:20.296231] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c: 180:_vbdev_lvs_create_cb: *ERROR*: Cannot create lvol store bdev
00:06:27.633  [2024-11-19 16:50:20.297157] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c: 264:vbdev_lvs_create: *ERROR*: Cannot create blobstore device
00:06:27.633  passed
00:06:27.633    Test: ut_lvol_init ...passed
00:06:27.633    Test: ut_lvol_snapshot ...passed
00:06:27.633    Test: ut_lvol_clone ...passed
00:06:27.633    Test: ut_lvs_destroy ...passed
00:06:27.633    Test: ut_lvs_unload ...passed
00:06:27.633    Test: ut_lvol_resize ...[2024-11-19 16:50:20.301151] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1391:vbdev_lvol_resize: *ERROR*: lvol does not exist
00:06:27.633  passed
00:06:27.633    Test: ut_lvol_set_read_only ...passed
00:06:27.633    Test: ut_lvol_hotremove ...passed
00:06:27.633    Test: ut_vbdev_lvol_get_io_channel ...passed
00:06:27.633    Test: ut_vbdev_lvol_io_type_supported ...passed
00:06:27.633    Test: ut_lvol_read_write ...passed
00:06:27.633    Test: ut_vbdev_lvol_submit_request ...passed
00:06:27.633    Test: ut_lvol_examine_config ...passed
00:06:27.633    Test: ut_lvol_examine_disk ...[2024-11-19 16:50:20.303647] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1533:_vbdev_lvs_examine_finish: *ERROR*: Error opening lvol UNIT_TEST_UUID
00:06:27.633  passed
00:06:27.633    Test: ut_lvol_rename ...[2024-11-19 16:50:20.305326] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c: 105:_vbdev_lvol_change_bdev_alias: *ERROR*: cannot add alias 'lvs/new_lvol_name'
00:06:27.633  [2024-11-19 16:50:20.305613] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1341:vbdev_lvol_rename: *ERROR*: renaming lvol to 'new_lvol_name' does not succeed
00:06:27.633  passed
00:06:27.633    Test: ut_bdev_finish ...passed
00:06:27.633    Test: ut_lvs_rename ...passed
00:06:27.633    Test: ut_lvol_seek ...passed
00:06:27.633    Test: ut_esnap_dev_create ...[2024-11-19 16:50:20.307436] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1868:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : NULL esnap ID
00:06:27.633  [2024-11-19 16:50:20.307654] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1874:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : Invalid esnap ID length (36)
00:06:27.633  [2024-11-19 16:50:20.307818] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1879:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : Invalid esnap ID: not a UUID
00:06:27.633  [2024-11-19 16:50:20.307959] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1900:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : unable to claim esnap bdev 'a27fd8fe-d4b9-431e-a044-271016228ce4': -1
00:06:27.633  passed
00:06:27.633    Test: ut_lvol_esnap_clone_bad_args ...[2024-11-19 16:50:20.308369] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1277:vbdev_lvol_create_bdev_clone: *ERROR*: lvol store not specified
00:06:27.633  [2024-11-19 16:50:20.308548] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1284:vbdev_lvol_create_bdev_clone: *ERROR*: bdev '255f4236-9427-42d0-a9d1-aa17f37dd8db' could not be opened: error -19
00:06:27.633  passed
00:06:27.633  
00:06:27.633  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:06:27.633                suites      1      1    n/a      0        0
00:06:27.633                 tests     21     21     21      0        0
00:06:27.633               asserts    712    712    712      0      n/a
00:06:27.633  
00:06:27.633  Elapsed time =    0.009 seconds
00:06:27.633   16:50:20	-- unit/unittest.sh@31 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/vbdev_zone_block.c/vbdev_zone_block_ut
00:06:27.633  
00:06:27.633  
00:06:27.633       CUnit - A unit testing framework for C - Version 2.1-3
00:06:27.633       http://cunit.sourceforge.net/
00:06:27.633  
00:06:27.633  
00:06:27.633  Suite: zone_block
00:06:27.633    Test: test_zone_block_create ...passed
00:06:27.633    Test: test_zone_block_create_invalid ...[2024-11-19 16:50:20.386911] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 624:zone_block_insert_name: *ERROR*: base bdev Nvme0n1 already claimed
00:06:27.634  [2024-11-19 16:50:20.387499] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c:  58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: File exists
00:06:27.634  [2024-11-19 16:50:20.387935] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 721:zone_block_register: *ERROR*: Base bdev zone_dev1 is already a zoned bdev
00:06:27.634  [2024-11-19 16:50:20.388143] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c:  58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: File exists
00:06:27.634  [2024-11-19 16:50:20.388482] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 860:vbdev_zone_block_create: *ERROR*: Zone capacity can't be 0
00:06:27.634  [2024-11-19 16:50:20.388592] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c:  58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: Invalid argument
00:06:27.634  [2024-11-19 16:50:20.388818] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 865:vbdev_zone_block_create: *ERROR*: Optimal open zones can't be 0
00:06:27.634  [2024-11-19 16:50:20.389006] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c:  58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: Invalid argument
00:06:27.634  passed
00:06:27.634    Test: test_get_zone_info ...[2024-11-19 16:50:20.390128] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission!
00:06:27.634  [2024-11-19 16:50:20.390533] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission!
00:06:27.634  passed
00:06:27.634  [2024-11-19 16:50:20.390682] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission!
00:06:27.634  
00:06:27.634    Test: test_supported_io_types ...passed
00:06:27.634    Test: test_reset_zone ...[2024-11-19 16:50:20.392343] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission!
00:06:27.634  [2024-11-19 16:50:20.392519] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission!
00:06:27.634  passed
00:06:27.634    Test: test_open_zone ...[2024-11-19 16:50:20.393263] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission!
00:06:27.634  [2024-11-19 16:50:20.394221] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission!
00:06:27.634  [2024-11-19 16:50:20.394515] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission!
00:06:27.634  passed
00:06:27.634    Test: test_zone_write ...[2024-11-19 16:50:20.395431] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 391:zone_block_write: *ERROR*: Trying to write to zone in invalid state 2
00:06:27.634  [2024-11-19 16:50:20.395608] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission!
00:06:27.634  [2024-11-19 16:50:20.395763] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 378:zone_block_write: *ERROR*: Trying to write to invalid zone (lba 0x5000)
00:06:27.634  [2024-11-19 16:50:20.395932] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission!
00:06:27.634  [2024-11-19 16:50:20.403962] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 401:zone_block_write: *ERROR*: Trying to write to zone with invalid address (lba 0x407, wp 0x405)
00:06:27.634  [2024-11-19 16:50:20.404141] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission!
00:06:27.634  [2024-11-19 16:50:20.404336] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 401:zone_block_write: *ERROR*: Trying to write to zone with invalid address (lba 0x400, wp 0x405)
00:06:27.634  [2024-11-19 16:50:20.404491] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission!
00:06:27.634  [2024-11-19 16:50:20.412602] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 410:zone_block_write: *ERROR*: Write exceeds zone capacity (lba 0x3f0, len 0x20, wp 0x3f0)
00:06:27.634  [2024-11-19 16:50:20.412738] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission!
00:06:27.634  passed
00:06:27.634    Test: test_zone_read ...[2024-11-19 16:50:20.413459] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 465:zone_block_read: *ERROR*: Read exceeds zone capacity (lba 0x4ff8, len 0x10)
00:06:27.634  [2024-11-19 16:50:20.413641] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission!
00:06:27.634  [2024-11-19 16:50:20.413868] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 460:zone_block_read: *ERROR*: Trying to read from invalid zone (lba 0x5000)
00:06:27.634  [2024-11-19 16:50:20.414014] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission!
00:06:27.634  [2024-11-19 16:50:20.414761] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 465:zone_block_read: *ERROR*: Read exceeds zone capacity (lba 0x3f8, len 0x10)
00:06:27.634  [2024-11-19 16:50:20.414938] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission!
00:06:27.634  passed
00:06:27.634    Test: test_close_zone ...[2024-11-19 16:50:20.415704] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission!
00:06:27.634  [2024-11-19 16:50:20.416014] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission!
00:06:27.634  [2024-11-19 16:50:20.416536] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission!
00:06:27.634  [2024-11-19 16:50:20.416705] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission!
00:06:27.634  passed
00:06:27.634    Test: test_finish_zone ...[2024-11-19 16:50:20.418023] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission!
00:06:27.634  [2024-11-19 16:50:20.418199] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission!
00:06:27.634  passed
00:06:27.634    Test: test_append_zone ...[2024-11-19 16:50:20.419044] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 391:zone_block_write: *ERROR*: Trying to write to zone in invalid state 2
00:06:27.634  [2024-11-19 16:50:20.419218] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission!
00:06:27.634  [2024-11-19 16:50:20.419427] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 378:zone_block_write: *ERROR*: Trying to write to invalid zone (lba 0x5000)
00:06:27.634  [2024-11-19 16:50:20.419556] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission!
00:06:27.634  [2024-11-19 16:50:20.436943] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 410:zone_block_write: *ERROR*: Write exceeds zone capacity (lba 0x3f0, len 0x20, wp 0x3f0)
00:06:27.634  [2024-11-19 16:50:20.437151] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission!
00:06:27.634  passed
00:06:27.634  
00:06:27.634  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:06:27.634                suites      1      1    n/a      0        0
00:06:27.634                 tests     11     11     11      0        0
00:06:27.634               asserts   3437   3437   3437      0      n/a
00:06:27.634  
00:06:27.634  Elapsed time =    0.047 seconds
00:06:27.892   16:50:20	-- unit/unittest.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/mt/bdev.c/bdev_ut
00:06:27.892  
00:06:27.892  
00:06:27.892       CUnit - A unit testing framework for C - Version 2.1-3
00:06:27.892       http://cunit.sourceforge.net/
00:06:27.892  
00:06:27.892  
00:06:27.892  Suite: bdev
00:06:27.892    Test: basic ...[2024-11-19 16:50:20.570145] thread.c:2361:spdk_get_io_channel: *ERROR*: could not create io_channel for io_device bdev_ut_bdev (0x558b10bf0401): Operation not permitted (rc=-1)
00:06:27.892  [2024-11-19 16:50:20.570643] thread.c:2361:spdk_get_io_channel: *ERROR*: could not create io_channel for io_device 0x6130000003c0 (0x558b10bf03c0): Operation not permitted (rc=-1)
00:06:27.892  [2024-11-19 16:50:20.570794] thread.c:2361:spdk_get_io_channel: *ERROR*: could not create io_channel for io_device bdev_ut_bdev (0x558b10bf0401): Operation not permitted (rc=-1)
00:06:27.892  passed
00:06:27.892    Test: unregister_and_close ...passed
00:06:27.892    Test: unregister_and_close_different_threads ...passed
00:06:28.150    Test: basic_qos ...passed
00:06:28.150    Test: put_channel_during_reset ...passed
00:06:28.150    Test: aborted_reset ...passed
00:06:28.150    Test: aborted_reset_no_outstanding_io ...passed
00:06:28.150    Test: io_during_reset ...passed
00:06:28.409    Test: reset_completions ...passed
00:06:28.409    Test: io_during_qos_queue ...passed
00:06:28.409    Test: io_during_qos_reset ...passed
00:06:28.409    Test: enomem ...passed
00:06:28.667    Test: enomem_multi_bdev ...passed
00:06:28.667    Test: enomem_multi_bdev_unregister ...passed
00:06:28.667    Test: enomem_multi_io_target ...passed
00:06:28.667    Test: qos_dynamic_enable ...passed
00:06:28.667    Test: bdev_histograms_mt ...passed
00:06:28.925    Test: bdev_set_io_timeout_mt ...[2024-11-19 16:50:21.578273] thread.c: 467:spdk_thread_lib_fini: *ERROR*: io_device 0x6130000003c0 not unregistered
00:06:28.925  passed
00:06:28.925    Test: lock_lba_range_then_submit_io ...[2024-11-19 16:50:21.604671] thread.c:2165:spdk_io_device_register: *ERROR*: io_device 0x558b10bf0380 already registered (old:0x6130000003c0 new:0x613000000c80)
00:06:28.925  passed
00:06:28.925    Test: unregister_during_reset ...passed
00:06:28.925    Test: event_notify_and_close ...passed
00:06:29.183    Test: unregister_and_qos_poller ...passed
00:06:29.183  Suite: bdev_wrong_thread
00:06:29.184    Test: spdk_bdev_register_wt ...[2024-11-19 16:50:21.805002] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8364:spdk_bdev_register: *ERROR*: Cannot examine bdev wt_bdev on thread 0x618000001480 (0x618000001480)
00:06:29.184  passed
00:06:29.184    Test: spdk_bdev_examine_wt ...[2024-11-19 16:50:21.805417] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c: 793:spdk_bdev_examine: *ERROR*: Cannot examine bdev ut_bdev_wt on thread 0x618000001480 (0x618000001480)
00:06:29.184  passed
00:06:29.184  
00:06:29.184  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:06:29.184                suites      2      2    n/a      0        0
00:06:29.184                 tests     24     24     24      0        0
00:06:29.184               asserts    621    621    621      0      n/a
00:06:29.184  
00:06:29.184  Elapsed time =    1.262 seconds
00:06:29.184  
00:06:29.184  real	0m5.695s
00:06:29.184  user	0m2.335s
00:06:29.184  sys	0m3.299s
00:06:29.184   16:50:21	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:06:29.184   16:50:21	-- common/autotest_common.sh@10 -- # set +x
00:06:29.184  ************************************
00:06:29.184  END TEST unittest_bdev
00:06:29.184  ************************************
00:06:29.184   16:50:21	-- unit/unittest.sh@189 -- # grep -q '#define SPDK_CONFIG_CRYPTO 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h
00:06:29.184   16:50:21	-- unit/unittest.sh@194 -- # grep -q '#define SPDK_CONFIG_VBDEV_COMPRESS 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h
00:06:29.184   16:50:21	-- unit/unittest.sh@199 -- # grep -q '#define SPDK_CONFIG_DPDK_COMPRESSDEV 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h
00:06:29.184   16:50:21	-- unit/unittest.sh@203 -- # grep -q '#define SPDK_CONFIG_RAID5F 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h
00:06:29.184   16:50:21	-- unit/unittest.sh@204 -- # run_test unittest_bdev_raid5f /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/raid5f.c/raid5f_ut
00:06:29.184   16:50:21	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:06:29.184   16:50:21	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:06:29.184   16:50:21	-- common/autotest_common.sh@10 -- # set +x
00:06:29.184  ************************************
00:06:29.184  START TEST unittest_bdev_raid5f
00:06:29.184  ************************************
00:06:29.184   16:50:21	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/raid5f.c/raid5f_ut
00:06:29.184  
00:06:29.184  
00:06:29.184       CUnit - A unit testing framework for C - Version 2.1-3
00:06:29.184       http://cunit.sourceforge.net/
00:06:29.184  
00:06:29.184  
00:06:29.184  Suite: raid5f
00:06:29.184    Test: test_raid5f_start ...passed
00:06:29.750    Test: test_raid5f_submit_read_request ...passed
00:06:30.007    Test: test_raid5f_stripe_request_map_iovecs ...passed
00:06:34.208    Test: test_raid5f_submit_full_stripe_write_request ...passed
00:06:52.289    Test: test_raid5f_chunk_write_error ...passed
00:07:00.404    Test: test_raid5f_chunk_write_error_with_enomem ...passed
00:07:02.360    Test: test_raid5f_submit_full_stripe_write_request_degraded ...passed
00:07:28.895    Test: test_raid5f_submit_read_request_degraded ...passed
00:07:28.895  
00:07:28.895  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:28.895                suites      1      1    n/a      0        0
00:07:28.895                 tests      8      8      8      0        0
00:07:28.895               asserts 351864 351864 351864      0      n/a
00:07:28.895  
00:07:28.895  Elapsed time =   57.192 seconds
00:07:28.895  
00:07:28.896  real	0m57.316s
00:07:28.896  user	0m53.846s
00:07:28.896  sys	0m3.466s
00:07:28.896   16:51:19	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:07:28.896   16:51:19	-- common/autotest_common.sh@10 -- # set +x
00:07:28.896  ************************************
00:07:28.896  END TEST unittest_bdev_raid5f
00:07:28.896  ************************************
00:07:28.896   16:51:19	-- unit/unittest.sh@207 -- # run_test unittest_blob_blobfs unittest_blob
00:07:28.896   16:51:19	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:07:28.896   16:51:19	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:07:28.896   16:51:19	-- common/autotest_common.sh@10 -- # set +x
00:07:28.896  ************************************
00:07:28.896  START TEST unittest_blob_blobfs
00:07:28.896  ************************************
00:07:28.896   16:51:19	-- common/autotest_common.sh@1114 -- # unittest_blob
00:07:28.896   16:51:19	-- unit/unittest.sh@38 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/unit/lib/blob/blob.c/blob_ut ]]
00:07:28.896   16:51:19	-- unit/unittest.sh@39 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blob/blob.c/blob_ut
00:07:28.896  
00:07:28.896  
00:07:28.896       CUnit - A unit testing framework for C - Version 2.1-3
00:07:28.896       http://cunit.sourceforge.net/
00:07:28.896  
00:07:28.896  
00:07:28.896  Suite: blob_nocopy_noextent
00:07:28.896    Test: blob_init ...[2024-11-19 16:51:19.321687] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5267:spdk_bs_init: *ERROR*: unsupported dev block length of 500
00:07:28.896  passed
00:07:28.896    Test: blob_thin_provision ...passed
00:07:28.896    Test: blob_read_only ...passed
00:07:28.896    Test: bs_load ...[2024-11-19 16:51:19.437289] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 896:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000)
00:07:28.896  passed
00:07:28.896    Test: bs_load_custom_cluster_size ...passed
00:07:28.896    Test: bs_load_after_failed_grow ...passed
00:07:28.896    Test: bs_cluster_sz ...[2024-11-19 16:51:19.467901] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3603:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0
00:07:28.896  [2024-11-19 16:51:19.468426] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5398:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size.
00:07:28.896  [2024-11-19 16:51:19.468716] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3662:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096
00:07:28.896  passed
00:07:28.896    Test: bs_resize_md ...passed
00:07:28.896    Test: bs_destroy ...passed
00:07:28.896    Test: bs_type ...passed
00:07:28.896    Test: bs_super_block ...passed
00:07:28.896    Test: bs_test_recover_cluster_count ...passed
00:07:28.896    Test: bs_grow_live ...passed
00:07:28.896    Test: bs_grow_live_no_space ...passed
00:07:28.896    Test: bs_test_grow ...passed
00:07:28.896    Test: blob_serialize_test ...passed
00:07:28.896    Test: super_block_crc ...passed
00:07:28.896    Test: blob_thin_prov_write_count_io ...passed
00:07:28.896    Test: bs_load_iter_test ...passed
00:07:28.896    Test: blob_relations ...[2024-11-19 16:51:19.634021] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone
00:07:28.896  [2024-11-19 16:51:19.634129] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:28.896  [2024-11-19 16:51:19.635415] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone
00:07:28.896  [2024-11-19 16:51:19.635497] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:28.896  passed
00:07:28.896    Test: blob_relations2 ...[2024-11-19 16:51:19.650690] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone
00:07:28.896  [2024-11-19 16:51:19.650785] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:28.896  [2024-11-19 16:51:19.650827] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone
00:07:28.896  [2024-11-19 16:51:19.650861] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:28.896  [2024-11-19 16:51:19.652583] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone
00:07:28.896  [2024-11-19 16:51:19.652650] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:28.896  [2024-11-19 16:51:19.653227] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone
00:07:28.896  [2024-11-19 16:51:19.653284] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:28.896  passed
00:07:28.896    Test: blob_relations3 ...passed
00:07:28.896    Test: blobstore_clean_power_failure ...passed
00:07:28.896    Test: blob_delete_snapshot_power_failure ...[2024-11-19 16:51:19.811334] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5
00:07:28.896  [2024-11-19 16:51:19.823953] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5
00:07:28.896  [2024-11-19 16:51:19.824046] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone
00:07:28.896  [2024-11-19 16:51:19.824088] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:28.896  [2024-11-19 16:51:19.836752] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5
00:07:28.896  [2024-11-19 16:51:19.836846] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1397:blob_load_snapshot_cpl: *ERROR*: Snapshot fail
00:07:28.896  [2024-11-19 16:51:19.836895] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone
00:07:28.896  [2024-11-19 16:51:19.836941] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:28.896  [2024-11-19 16:51:19.849814] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7351:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob
00:07:28.896  [2024-11-19 16:51:19.849958] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:28.896  [2024-11-19 16:51:19.862810] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7223:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone
00:07:28.896  [2024-11-19 16:51:19.862948] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:28.896  [2024-11-19 16:51:19.875918] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7167:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob
00:07:28.896  [2024-11-19 16:51:19.876033] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:28.896  passed
00:07:28.896    Test: blob_create_snapshot_power_failure ...[2024-11-19 16:51:19.913916] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5
00:07:28.896  [2024-11-19 16:51:19.938430] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5
00:07:28.896  [2024-11-19 16:51:19.951149] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6215:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5
00:07:28.896  passed
00:07:28.896    Test: blob_io_unit ...passed
00:07:28.896    Test: blob_io_unit_compatibility ...passed
00:07:28.896    Test: blob_ext_md_pages ...passed
00:07:28.896    Test: blob_esnap_io_4096_4096 ...passed
00:07:28.896    Test: blob_esnap_io_512_512 ...passed
00:07:28.896    Test: blob_esnap_io_4096_512 ...passed
00:07:28.896    Test: blob_esnap_io_512_4096 ...passed
00:07:28.896  Suite: blob_bs_nocopy_noextent
00:07:28.896    Test: blob_open ...passed
00:07:28.896    Test: blob_create ...[2024-11-19 16:51:20.197280] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters)
00:07:28.896  passed
00:07:28.896    Test: blob_create_loop ...passed
00:07:28.896    Test: blob_create_fail ...[2024-11-19 16:51:20.292038] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters)
00:07:28.896  passed
00:07:28.896    Test: blob_create_internal ...passed
00:07:28.896    Test: blob_create_zero_extent ...passed
00:07:28.896    Test: blob_snapshot ...passed
00:07:28.896    Test: blob_clone ...passed
00:07:28.896    Test: blob_inflate ...[2024-11-19 16:51:20.476017] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6873:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent.
00:07:28.896  passed
00:07:28.896    Test: blob_delete ...passed
00:07:28.896    Test: blob_resize_test ...[2024-11-19 16:51:20.546365] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6972:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28
00:07:28.896  passed
00:07:28.896    Test: channel_ops ...passed
00:07:28.896    Test: blob_super ...passed
00:07:28.896    Test: blob_rw_verify_iov ...passed
00:07:28.896    Test: blob_unmap ...passed
00:07:28.896    Test: blob_iter ...passed
00:07:28.896    Test: blob_parse_md ...passed
00:07:28.896    Test: bs_load_pending_removal ...passed
00:07:28.896    Test: bs_unload ...[2024-11-19 16:51:20.809028] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5655:spdk_bs_unload: *ERROR*: Blobstore still has open blobs
00:07:28.896  passed
00:07:28.896    Test: bs_usable_clusters ...passed
00:07:28.896    Test: blob_crc ...[2024-11-19 16:51:20.875458] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000
00:07:28.896  [2024-11-19 16:51:20.875595] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000
00:07:28.896  passed
00:07:28.896    Test: blob_flags ...passed
00:07:28.896    Test: bs_version ...passed
00:07:28.896    Test: blob_set_xattrs_test ...[2024-11-19 16:51:20.978080] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters)
00:07:28.896  [2024-11-19 16:51:20.978184] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters)
00:07:28.896  passed
00:07:28.896    Test: blob_thin_prov_alloc ...passed
00:07:28.896    Test: blob_insert_cluster_msg_test ...passed
00:07:28.896    Test: blob_thin_prov_rw ...passed
00:07:28.896    Test: blob_thin_prov_rle ...passed
00:07:28.896    Test: blob_thin_prov_rw_iov ...passed
00:07:28.896    Test: blob_snapshot_rw ...passed
00:07:28.896    Test: blob_snapshot_rw_iov ...passed
00:07:28.896    Test: blob_inflate_rw ...passed
00:07:28.896    Test: blob_snapshot_freeze_io ...passed
00:07:28.896    Test: blob_operation_split_rw ...passed
00:07:29.156    Test: blob_operation_split_rw_iov ...passed
00:07:29.156    Test: blob_simultaneous_operations ...[2024-11-19 16:51:21.898474] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open
00:07:29.156  [2024-11-19 16:51:21.898574] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:29.156  [2024-11-19 16:51:21.900032] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open
00:07:29.156  [2024-11-19 16:51:21.900093] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:29.156  [2024-11-19 16:51:21.912482] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open
00:07:29.156  [2024-11-19 16:51:21.912560] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:29.156  [2024-11-19 16:51:21.912678] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open
00:07:29.156  [2024-11-19 16:51:21.912709] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:29.156  passed
00:07:29.156    Test: blob_persist_test ...passed
00:07:29.415    Test: blob_decouple_snapshot ...passed
00:07:29.415    Test: blob_seek_io_unit ...passed
00:07:29.415    Test: blob_nested_freezes ...passed
00:07:29.415  Suite: blob_blob_nocopy_noextent
00:07:29.415    Test: blob_write ...passed
00:07:29.415    Test: blob_read ...passed
00:07:29.415    Test: blob_rw_verify ...passed
00:07:29.415    Test: blob_rw_verify_iov_nomem ...passed
00:07:29.415    Test: blob_rw_iov_read_only ...passed
00:07:29.674    Test: blob_xattr ...passed
00:07:29.674    Test: blob_dirty_shutdown ...passed
00:07:29.674    Test: blob_is_degraded ...passed
00:07:29.674  Suite: blob_esnap_bs_nocopy_noextent
00:07:29.674    Test: blob_esnap_create ...passed
00:07:29.674    Test: blob_esnap_thread_add_remove ...passed
00:07:29.674    Test: blob_esnap_clone_snapshot ...passed
00:07:29.674    Test: blob_esnap_clone_inflate ...passed
00:07:29.674    Test: blob_esnap_clone_decouple ...passed
00:07:29.933    Test: blob_esnap_clone_reload ...passed
00:07:29.933    Test: blob_esnap_hotplug ...passed
00:07:29.933  Suite: blob_nocopy_extent
00:07:29.933    Test: blob_init ...[2024-11-19 16:51:22.598367] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5267:spdk_bs_init: *ERROR*: unsupported dev block length of 500
00:07:29.933  passed
00:07:29.933    Test: blob_thin_provision ...passed
00:07:29.933    Test: blob_read_only ...passed
00:07:29.933    Test: bs_load ...[2024-11-19 16:51:22.646781] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 896:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000)
00:07:29.933  passed
00:07:29.933    Test: bs_load_custom_cluster_size ...passed
00:07:29.933    Test: bs_load_after_failed_grow ...passed
00:07:29.933    Test: bs_cluster_sz ...[2024-11-19 16:51:22.672143] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3603:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0
00:07:29.933  [2024-11-19 16:51:22.672493] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5398:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size.
00:07:29.933  [2024-11-19 16:51:22.672552] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3662:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096
00:07:29.933  passed
00:07:29.933    Test: bs_resize_md ...passed
00:07:29.933    Test: bs_destroy ...passed
00:07:29.933    Test: bs_type ...passed
00:07:29.933    Test: bs_super_block ...passed
00:07:29.933    Test: bs_test_recover_cluster_count ...passed
00:07:29.933    Test: bs_grow_live ...passed
00:07:29.933    Test: bs_grow_live_no_space ...passed
00:07:29.933    Test: bs_test_grow ...passed
00:07:29.933    Test: blob_serialize_test ...passed
00:07:29.933    Test: super_block_crc ...passed
00:07:30.193    Test: blob_thin_prov_write_count_io ...passed
00:07:30.193    Test: bs_load_iter_test ...passed
00:07:30.193    Test: blob_relations ...[2024-11-19 16:51:22.820462] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone
00:07:30.193  [2024-11-19 16:51:22.820567] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:30.193  [2024-11-19 16:51:22.821542] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone
00:07:30.193  [2024-11-19 16:51:22.821609] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:30.193  passed
00:07:30.193    Test: blob_relations2 ...[2024-11-19 16:51:22.835469] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone
00:07:30.193  [2024-11-19 16:51:22.835555] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:30.193  [2024-11-19 16:51:22.835581] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone
00:07:30.193  [2024-11-19 16:51:22.835610] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:30.193  [2024-11-19 16:51:22.837030] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone
00:07:30.193  [2024-11-19 16:51:22.837087] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:30.193  [2024-11-19 16:51:22.837662] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone
00:07:30.193  [2024-11-19 16:51:22.837711] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:30.193  passed
00:07:30.193    Test: blob_relations3 ...passed
00:07:30.193    Test: blobstore_clean_power_failure ...passed
00:07:30.193    Test: blob_delete_snapshot_power_failure ...[2024-11-19 16:51:22.988920] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5
00:07:30.193  [2024-11-19 16:51:23.000904] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1510:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5
00:07:30.193  [2024-11-19 16:51:23.013007] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5
00:07:30.193  [2024-11-19 16:51:23.013094] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone
00:07:30.193  [2024-11-19 16:51:23.013123] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:30.193  [2024-11-19 16:51:23.025668] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5
00:07:30.193  [2024-11-19 16:51:23.025753] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1397:blob_load_snapshot_cpl: *ERROR*: Snapshot fail
00:07:30.193  [2024-11-19 16:51:23.025787] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone
00:07:30.193  [2024-11-19 16:51:23.025815] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:30.193  [2024-11-19 16:51:23.038126] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1510:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5
00:07:30.193  [2024-11-19 16:51:23.038212] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1397:blob_load_snapshot_cpl: *ERROR*: Snapshot fail
00:07:30.193  [2024-11-19 16:51:23.038240] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone
00:07:30.193  [2024-11-19 16:51:23.038284] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:30.193  [2024-11-19 16:51:23.050555] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7351:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob
00:07:30.193  [2024-11-19 16:51:23.050669] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:30.452  [2024-11-19 16:51:23.063188] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7223:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone
00:07:30.452  [2024-11-19 16:51:23.063308] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:30.452  [2024-11-19 16:51:23.075895] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7167:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob
00:07:30.452  [2024-11-19 16:51:23.075995] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:30.452  passed
00:07:30.452    Test: blob_create_snapshot_power_failure ...[2024-11-19 16:51:23.112251] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5
00:07:30.452  [2024-11-19 16:51:23.124170] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1510:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5
00:07:30.452  [2024-11-19 16:51:23.147530] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5
00:07:30.452  [2024-11-19 16:51:23.159870] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6215:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5
00:07:30.452  passed
00:07:30.452    Test: blob_io_unit ...passed
00:07:30.452    Test: blob_io_unit_compatibility ...passed
00:07:30.452    Test: blob_ext_md_pages ...passed
00:07:30.452    Test: blob_esnap_io_4096_4096 ...passed
00:07:30.452    Test: blob_esnap_io_512_512 ...passed
00:07:30.710    Test: blob_esnap_io_4096_512 ...passed
00:07:30.710    Test: blob_esnap_io_512_4096 ...passed
00:07:30.710  Suite: blob_bs_nocopy_extent
00:07:30.710    Test: blob_open ...passed
00:07:30.710    Test: blob_create ...[2024-11-19 16:51:23.395970] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters)
00:07:30.710  passed
00:07:30.710    Test: blob_create_loop ...passed
00:07:30.710    Test: blob_create_fail ...[2024-11-19 16:51:23.491717] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters)
00:07:30.710  passed
00:07:30.710    Test: blob_create_internal ...passed
00:07:30.710    Test: blob_create_zero_extent ...passed
00:07:30.969    Test: blob_snapshot ...passed
00:07:30.969    Test: blob_clone ...passed
00:07:30.969    Test: blob_inflate ...[2024-11-19 16:51:23.673319] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6873:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent.
00:07:30.969  passed
00:07:30.969    Test: blob_delete ...passed
00:07:30.969    Test: blob_resize_test ...[2024-11-19 16:51:23.741449] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6972:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28
00:07:30.969  passed
00:07:30.969    Test: channel_ops ...passed
00:07:30.969    Test: blob_super ...passed
00:07:31.290    Test: blob_rw_verify_iov ...passed
00:07:31.290    Test: blob_unmap ...passed
00:07:31.290    Test: blob_iter ...passed
00:07:31.290    Test: blob_parse_md ...passed
00:07:31.290    Test: bs_load_pending_removal ...passed
00:07:31.290    Test: bs_unload ...[2024-11-19 16:51:23.998163] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5655:spdk_bs_unload: *ERROR*: Blobstore still has open blobs
00:07:31.290  passed
00:07:31.290    Test: bs_usable_clusters ...passed
00:07:31.290    Test: blob_crc ...[2024-11-19 16:51:24.065142] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000
00:07:31.290  [2024-11-19 16:51:24.065253] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000
00:07:31.290  passed
00:07:31.290    Test: blob_flags ...passed
00:07:31.290    Test: bs_version ...passed
00:07:31.548    Test: blob_set_xattrs_test ...[2024-11-19 16:51:24.165729] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters)
00:07:31.548  [2024-11-19 16:51:24.165824] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters)
00:07:31.548  passed
00:07:31.548    Test: blob_thin_prov_alloc ...passed
00:07:31.548    Test: blob_insert_cluster_msg_test ...passed
00:07:31.548    Test: blob_thin_prov_rw ...passed
00:07:31.548    Test: blob_thin_prov_rle ...passed
00:07:31.807    Test: blob_thin_prov_rw_iov ...passed
00:07:31.807    Test: blob_snapshot_rw ...passed
00:07:31.807    Test: blob_snapshot_rw_iov ...passed
00:07:32.066    Test: blob_inflate_rw ...passed
00:07:32.066    Test: blob_snapshot_freeze_io ...passed
00:07:32.066    Test: blob_operation_split_rw ...passed
00:07:32.325    Test: blob_operation_split_rw_iov ...passed
00:07:32.325    Test: blob_simultaneous_operations ...[2024-11-19 16:51:25.083615] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open
00:07:32.325  [2024-11-19 16:51:25.083724] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:32.325  [2024-11-19 16:51:25.085159] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open
00:07:32.325  [2024-11-19 16:51:25.085218] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:32.325  [2024-11-19 16:51:25.098257] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open
00:07:32.325  [2024-11-19 16:51:25.098368] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:32.325  [2024-11-19 16:51:25.098491] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open
00:07:32.325  [2024-11-19 16:51:25.098513] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:32.325  passed
00:07:32.325    Test: blob_persist_test ...passed
00:07:32.584    Test: blob_decouple_snapshot ...passed
00:07:32.584    Test: blob_seek_io_unit ...passed
00:07:32.584    Test: blob_nested_freezes ...passed
00:07:32.584  Suite: blob_blob_nocopy_extent
00:07:32.584    Test: blob_write ...passed
00:07:32.584    Test: blob_read ...passed
00:07:32.584    Test: blob_rw_verify ...passed
00:07:32.584    Test: blob_rw_verify_iov_nomem ...passed
00:07:32.843    Test: blob_rw_iov_read_only ...passed
00:07:32.843    Test: blob_xattr ...passed
00:07:32.843    Test: blob_dirty_shutdown ...passed
00:07:32.843    Test: blob_is_degraded ...passed
00:07:32.843  Suite: blob_esnap_bs_nocopy_extent
00:07:32.843    Test: blob_esnap_create ...passed
00:07:32.843    Test: blob_esnap_thread_add_remove ...passed
00:07:32.843    Test: blob_esnap_clone_snapshot ...passed
00:07:33.100    Test: blob_esnap_clone_inflate ...passed
00:07:33.100    Test: blob_esnap_clone_decouple ...passed
00:07:33.100    Test: blob_esnap_clone_reload ...passed
00:07:33.100    Test: blob_esnap_hotplug ...passed
00:07:33.100  Suite: blob_copy_noextent
00:07:33.101    Test: blob_init ...[2024-11-19 16:51:25.817585] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5267:spdk_bs_init: *ERROR*: unsupported dev block length of 500
00:07:33.101  passed
00:07:33.101    Test: blob_thin_provision ...passed
00:07:33.101    Test: blob_read_only ...passed
00:07:33.101    Test: bs_load ...[2024-11-19 16:51:25.867554] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 896:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000)
00:07:33.101  passed
00:07:33.101    Test: bs_load_custom_cluster_size ...passed
00:07:33.101    Test: bs_load_after_failed_grow ...passed
00:07:33.101    Test: bs_cluster_sz ...[2024-11-19 16:51:25.893928] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3603:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0
00:07:33.101  [2024-11-19 16:51:25.894141] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5398:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size.
00:07:33.101  [2024-11-19 16:51:25.894187] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3662:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096
00:07:33.101  passed
00:07:33.101    Test: bs_resize_md ...passed
00:07:33.101    Test: bs_destroy ...passed
00:07:33.101    Test: bs_type ...passed
00:07:33.359    Test: bs_super_block ...passed
00:07:33.359    Test: bs_test_recover_cluster_count ...passed
00:07:33.359    Test: bs_grow_live ...passed
00:07:33.359    Test: bs_grow_live_no_space ...passed
00:07:33.359    Test: bs_test_grow ...passed
00:07:33.359    Test: blob_serialize_test ...passed
00:07:33.359    Test: super_block_crc ...passed
00:07:33.359    Test: blob_thin_prov_write_count_io ...passed
00:07:33.359    Test: bs_load_iter_test ...passed
00:07:33.359    Test: blob_relations ...[2024-11-19 16:51:26.055934] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone
00:07:33.359  [2024-11-19 16:51:26.056038] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:33.359  [2024-11-19 16:51:26.056743] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone
00:07:33.359  [2024-11-19 16:51:26.056788] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:33.359  passed
00:07:33.359    Test: blob_relations2 ...[2024-11-19 16:51:26.071127] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone
00:07:33.359  [2024-11-19 16:51:26.071211] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:33.359  [2024-11-19 16:51:26.071238] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone
00:07:33.359  [2024-11-19 16:51:26.071252] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:33.359  [2024-11-19 16:51:26.072291] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone
00:07:33.359  [2024-11-19 16:51:26.072354] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:33.359  [2024-11-19 16:51:26.072894] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone
00:07:33.359  [2024-11-19 16:51:26.072946] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:33.359  passed
00:07:33.359    Test: blob_relations3 ...passed
00:07:33.359    Test: blobstore_clean_power_failure ...passed
00:07:33.617    Test: blob_delete_snapshot_power_failure ...[2024-11-19 16:51:26.230937] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5
00:07:33.617  [2024-11-19 16:51:26.243293] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5
00:07:33.617  [2024-11-19 16:51:26.243400] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone
00:07:33.617  [2024-11-19 16:51:26.243433] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:33.617  [2024-11-19 16:51:26.255913] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5
00:07:33.617  [2024-11-19 16:51:26.256016] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1397:blob_load_snapshot_cpl: *ERROR*: Snapshot fail
00:07:33.617  [2024-11-19 16:51:26.256051] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone
00:07:33.617  [2024-11-19 16:51:26.256074] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:33.617  [2024-11-19 16:51:26.268453] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7351:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob
00:07:33.617  [2024-11-19 16:51:26.268572] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:33.617  [2024-11-19 16:51:26.280968] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7223:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone
00:07:33.617  [2024-11-19 16:51:26.281108] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:33.617  [2024-11-19 16:51:26.293623] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7167:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob
00:07:33.617  [2024-11-19 16:51:26.293756] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:33.617  passed
00:07:33.617    Test: blob_create_snapshot_power_failure ...[2024-11-19 16:51:26.330362] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5
00:07:33.617  [2024-11-19 16:51:26.354112] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5
00:07:33.617  [2024-11-19 16:51:26.366448] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6215:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5
00:07:33.617  passed
00:07:33.617    Test: blob_io_unit ...passed
00:07:33.617    Test: blob_io_unit_compatibility ...passed
00:07:33.617    Test: blob_ext_md_pages ...passed
00:07:33.617    Test: blob_esnap_io_4096_4096 ...passed
00:07:33.876    Test: blob_esnap_io_512_512 ...passed
00:07:33.876    Test: blob_esnap_io_4096_512 ...passed
00:07:33.876    Test: blob_esnap_io_512_4096 ...passed
00:07:33.876  Suite: blob_bs_copy_noextent
00:07:33.876    Test: blob_open ...passed
00:07:33.876    Test: blob_create ...[2024-11-19 16:51:26.615582] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters)
00:07:33.876  passed
00:07:33.876    Test: blob_create_loop ...passed
00:07:33.876    Test: blob_create_fail ...[2024-11-19 16:51:26.712947] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters)
00:07:33.876  passed
00:07:34.134    Test: blob_create_internal ...passed
00:07:34.134    Test: blob_create_zero_extent ...passed
00:07:34.134    Test: blob_snapshot ...passed
00:07:34.134    Test: blob_clone ...passed
00:07:34.134    Test: blob_inflate ...[2024-11-19 16:51:26.898225] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6873:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent.
00:07:34.134  passed
00:07:34.134    Test: blob_delete ...passed
00:07:34.135    Test: blob_resize_test ...[2024-11-19 16:51:26.968283] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6972:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28
00:07:34.135  passed
00:07:34.393    Test: channel_ops ...passed
00:07:34.393    Test: blob_super ...passed
00:07:34.393    Test: blob_rw_verify_iov ...passed
00:07:34.393    Test: blob_unmap ...passed
00:07:34.393    Test: blob_iter ...passed
00:07:34.393    Test: blob_parse_md ...passed
00:07:34.393    Test: bs_load_pending_removal ...passed
00:07:34.393    Test: bs_unload ...[2024-11-19 16:51:27.239871] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5655:spdk_bs_unload: *ERROR*: Blobstore still has open blobs
00:07:34.393  passed
00:07:34.651    Test: bs_usable_clusters ...passed
00:07:34.651    Test: blob_crc ...[2024-11-19 16:51:27.308317] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000
00:07:34.651  [2024-11-19 16:51:27.308441] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000
00:07:34.651  passed
00:07:34.651    Test: blob_flags ...passed
00:07:34.651    Test: bs_version ...passed
00:07:34.651    Test: blob_set_xattrs_test ...[2024-11-19 16:51:27.412511] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters)
00:07:34.651  [2024-11-19 16:51:27.412621] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters)
00:07:34.651  passed
00:07:34.910    Test: blob_thin_prov_alloc ...passed
00:07:34.910    Test: blob_insert_cluster_msg_test ...passed
00:07:34.910    Test: blob_thin_prov_rw ...passed
00:07:34.910    Test: blob_thin_prov_rle ...passed
00:07:34.910    Test: blob_thin_prov_rw_iov ...passed
00:07:34.910    Test: blob_snapshot_rw ...passed
00:07:35.168    Test: blob_snapshot_rw_iov ...passed
00:07:35.426    Test: blob_inflate_rw ...passed
00:07:35.426    Test: blob_snapshot_freeze_io ...passed
00:07:35.426    Test: blob_operation_split_rw ...passed
00:07:35.684    Test: blob_operation_split_rw_iov ...passed
00:07:35.685    Test: blob_simultaneous_operations ...[2024-11-19 16:51:28.417041] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open
00:07:35.685  [2024-11-19 16:51:28.417143] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:35.685  [2024-11-19 16:51:28.417795] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open
00:07:35.685  [2024-11-19 16:51:28.417843] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:35.685  [2024-11-19 16:51:28.420781] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open
00:07:35.685  [2024-11-19 16:51:28.420838] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:35.685  [2024-11-19 16:51:28.421301] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open
00:07:35.685  [2024-11-19 16:51:28.421332] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:35.685  passed
00:07:35.685    Test: blob_persist_test ...passed
00:07:35.685    Test: blob_decouple_snapshot ...passed
00:07:35.943    Test: blob_seek_io_unit ...passed
00:07:35.943    Test: blob_nested_freezes ...passed
00:07:35.943  Suite: blob_blob_copy_noextent
00:07:35.943    Test: blob_write ...passed
00:07:35.943    Test: blob_read ...passed
00:07:35.943    Test: blob_rw_verify ...passed
00:07:35.943    Test: blob_rw_verify_iov_nomem ...passed
00:07:35.943    Test: blob_rw_iov_read_only ...passed
00:07:35.943    Test: blob_xattr ...passed
00:07:36.200    Test: blob_dirty_shutdown ...passed
00:07:36.201    Test: blob_is_degraded ...passed
00:07:36.201  Suite: blob_esnap_bs_copy_noextent
00:07:36.201    Test: blob_esnap_create ...passed
00:07:36.201    Test: blob_esnap_thread_add_remove ...passed
00:07:36.201    Test: blob_esnap_clone_snapshot ...passed
00:07:36.201    Test: blob_esnap_clone_inflate ...passed
00:07:36.459    Test: blob_esnap_clone_decouple ...passed
00:07:36.459    Test: blob_esnap_clone_reload ...passed
00:07:36.459    Test: blob_esnap_hotplug ...passed
00:07:36.459  Suite: blob_copy_extent
00:07:36.459    Test: blob_init ...[2024-11-19 16:51:29.234335] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5267:spdk_bs_init: *ERROR*: unsupported dev block length of 500
00:07:36.459  passed
00:07:36.459    Test: blob_thin_provision ...passed
00:07:36.459    Test: blob_read_only ...passed
00:07:36.459    Test: bs_load ...[2024-11-19 16:51:29.313843] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 896:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000)
00:07:36.459  passed
00:07:36.717    Test: bs_load_custom_cluster_size ...passed
00:07:36.717    Test: bs_load_after_failed_grow ...passed
00:07:36.717    Test: bs_cluster_sz ...[2024-11-19 16:51:29.355860] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3603:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0
00:07:36.717  [2024-11-19 16:51:29.356081] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5398:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size.
00:07:36.717  [2024-11-19 16:51:29.356124] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3662:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096
00:07:36.717  passed
00:07:36.717    Test: bs_resize_md ...passed
00:07:36.717    Test: bs_destroy ...passed
00:07:36.717    Test: bs_type ...passed
00:07:36.717    Test: bs_super_block ...passed
00:07:36.717    Test: bs_test_recover_cluster_count ...passed
00:07:36.717    Test: bs_grow_live ...passed
00:07:36.717    Test: bs_grow_live_no_space ...passed
00:07:36.717    Test: bs_test_grow ...passed
00:07:36.717    Test: blob_serialize_test ...passed
00:07:36.717    Test: super_block_crc ...passed
00:07:36.717    Test: blob_thin_prov_write_count_io ...passed
00:07:36.976    Test: bs_load_iter_test ...passed
00:07:36.976    Test: blob_relations ...[2024-11-19 16:51:29.608111] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone
00:07:36.976  [2024-11-19 16:51:29.608256] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:36.976  [2024-11-19 16:51:29.609493] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone
00:07:36.976  [2024-11-19 16:51:29.609564] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:36.976  passed
00:07:36.976    Test: blob_relations2 ...[2024-11-19 16:51:29.632986] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone
00:07:36.976  [2024-11-19 16:51:29.633125] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:36.976  [2024-11-19 16:51:29.633172] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone
00:07:36.976  [2024-11-19 16:51:29.633202] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:36.976  [2024-11-19 16:51:29.634715] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone
00:07:36.976  [2024-11-19 16:51:29.634780] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:36.976  [2024-11-19 16:51:29.635500] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone
00:07:36.976  [2024-11-19 16:51:29.635564] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:36.976  passed
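
The blob_relations errors repeat a blobstore invariant: a snapshot cannot be deleted while it is open or while more than one clone references it, so the clones have to go first. The sketch below shows that ordering using the public spdk_bs_delete_blob() call; it assumes an already-initialized blobstore and a caller-supplied clone list, and it elides the completion sequencing a real program needs.

    #include "spdk/blob.h"
    #include "spdk/log.h"

    /* Sketch only: assumes an already-initialized blobstore `bs` and that
     * `clone_ids` lists every clone of `snap_id`. It shows the required
     * ordering (clones before snapshot); a real program must wait for each
     * completion callback before issuing the next delete. */
    static void
    delete_cpl(void *cb_arg, int bserrno)
    {
        if (bserrno != 0) {
            SPDK_ERRLOG("delete failed: %d\n", bserrno);
        }
    }

    static void
    delete_snapshot_last(struct spdk_blob_store *bs, spdk_blob_id snap_id,
                         const spdk_blob_id *clone_ids, size_t n_clones)
    {
        for (size_t i = 0; i < n_clones; i++) {
            spdk_bs_delete_blob(bs, clone_ids[i], delete_cpl, NULL);
        }
        /* Only valid once every clone above has actually finished deleting
         * and the snapshot itself is not open anywhere. */
        spdk_bs_delete_blob(bs, snap_id, delete_cpl, NULL);
    }
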
00:07:36.976    Test: blob_relations3 ...passed
00:07:37.234    Test: blobstore_clean_power_failure ...passed
00:07:37.234    Test: blob_delete_snapshot_power_failure ...[2024-11-19 16:51:29.914425] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5
00:07:37.234  [2024-11-19 16:51:29.935462] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1510:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5
00:07:37.234  [2024-11-19 16:51:29.956647] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5
00:07:37.235  [2024-11-19 16:51:29.956798] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone
00:07:37.235  [2024-11-19 16:51:29.956835] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:37.235  [2024-11-19 16:51:29.981629] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5
00:07:37.235  [2024-11-19 16:51:29.981758] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1397:blob_load_snapshot_cpl: *ERROR*: Snapshot fail
00:07:37.235  [2024-11-19 16:51:29.981784] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone
00:07:37.235  [2024-11-19 16:51:29.981816] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:37.235  [2024-11-19 16:51:30.003772] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1510:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5
00:07:37.235  [2024-11-19 16:51:30.003905] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1397:blob_load_snapshot_cpl: *ERROR*: Snapshot fail
00:07:37.235  [2024-11-19 16:51:30.003932] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone
00:07:37.235  [2024-11-19 16:51:30.003962] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:37.235  [2024-11-19 16:51:30.025968] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7351:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob
00:07:37.235  [2024-11-19 16:51:30.026132] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:37.235  [2024-11-19 16:51:30.048157] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7223:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone
00:07:37.235  [2024-11-19 16:51:30.048322] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:37.235  [2024-11-19 16:51:30.069978] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7167:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob
00:07:37.235  [2024-11-19 16:51:30.070127] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:37.493  passed
00:07:37.493    Test: blob_create_snapshot_power_failure ...[2024-11-19 16:51:30.136661] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5
00:07:37.493  [2024-11-19 16:51:30.158672] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1510:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5
00:07:37.493  [2024-11-19 16:51:30.200914] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5
00:07:37.493  [2024-11-19 16:51:30.222146] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6215:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5
00:07:37.493  passed
00:07:37.493    Test: blob_io_unit ...passed
00:07:37.493    Test: blob_io_unit_compatibility ...passed
00:07:37.493    Test: blob_ext_md_pages ...passed
00:07:37.751    Test: blob_esnap_io_4096_4096 ...passed
00:07:37.751    Test: blob_esnap_io_512_512 ...passed
00:07:37.751    Test: blob_esnap_io_4096_512 ...passed
00:07:37.751    Test: blob_esnap_io_512_4096 ...passed
00:07:37.751  Suite: blob_bs_copy_extent
00:07:37.751    Test: blob_open ...passed
00:07:38.010    Test: blob_create ...[2024-11-19 16:51:30.627191] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters)
00:07:38.010  passed
00:07:38.010    Test: blob_create_loop ...passed
00:07:38.010    Test: blob_create_fail ...[2024-11-19 16:51:30.761493] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters)
00:07:38.010  passed
00:07:38.010    Test: blob_create_internal ...passed
00:07:38.010    Test: blob_create_zero_extent ...passed
00:07:38.268    Test: blob_snapshot ...passed
00:07:38.268    Test: blob_clone ...passed
00:07:38.268    Test: blob_inflate ...[2024-11-19 16:51:30.968180] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6873:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent.
00:07:38.268  passed
00:07:38.268    Test: blob_delete ...passed
00:07:38.268    Test: blob_resize_test ...[2024-11-19 16:51:31.083982] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6972:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28
00:07:38.268  passed
00:07:38.526    Test: channel_ops ...passed
00:07:38.526    Test: blob_super ...passed
00:07:38.526    Test: blob_rw_verify_iov ...passed
00:07:38.526    Test: blob_unmap ...passed
00:07:38.785    Test: blob_iter ...passed
00:07:38.785    Test: blob_parse_md ...passed
00:07:38.785    Test: bs_load_pending_removal ...passed
00:07:38.785    Test: bs_unload ...[2024-11-19 16:51:31.549159] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5655:spdk_bs_unload: *ERROR*: Blobstore still has open blobs
00:07:38.785  passed
00:07:38.785    Test: bs_usable_clusters ...passed
00:07:39.043    Test: blob_crc ...[2024-11-19 16:51:31.670529] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000
00:07:39.043  [2024-11-19 16:51:31.670700] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000
00:07:39.043  passed
00:07:39.043    Test: blob_flags ...passed
00:07:39.043    Test: bs_version ...passed
00:07:39.043    Test: blob_set_xattrs_test ...[2024-11-19 16:51:31.853243] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters)
00:07:39.043  [2024-11-19 16:51:31.853382] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters)
00:07:39.043  passed
00:07:39.302    Test: blob_thin_prov_alloc ...passed
00:07:39.302    Test: blob_insert_cluster_msg_test ...passed
00:07:39.302    Test: blob_thin_prov_rw ...passed
00:07:39.560    Test: blob_thin_prov_rle ...passed
00:07:39.560    Test: blob_thin_prov_rw_iov ...passed
00:07:39.560    Test: blob_snapshot_rw ...passed
00:07:39.560    Test: blob_snapshot_rw_iov ...passed
00:07:39.836    Test: blob_inflate_rw ...passed
00:07:40.118    Test: blob_snapshot_freeze_io ...passed
00:07:40.118    Test: blob_operation_split_rw ...passed
00:07:40.376    Test: blob_operation_split_rw_iov ...passed
00:07:40.376    Test: blob_simultaneous_operations ...[2024-11-19 16:51:33.054537] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open
00:07:40.376  [2024-11-19 16:51:33.054679] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:40.376  [2024-11-19 16:51:33.055445] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open
00:07:40.376  [2024-11-19 16:51:33.055492] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:40.376  [2024-11-19 16:51:33.058741] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open
00:07:40.376  [2024-11-19 16:51:33.058797] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:40.376  [2024-11-19 16:51:33.058925] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open
00:07:40.376  [2024-11-19 16:51:33.058953] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:40.376  passed
00:07:40.376    Test: blob_persist_test ...passed
00:07:40.376    Test: blob_decouple_snapshot ...passed
00:07:40.635    Test: blob_seek_io_unit ...passed
00:07:40.635    Test: blob_nested_freezes ...passed
00:07:40.635  Suite: blob_blob_copy_extent
00:07:40.635    Test: blob_write ...passed
00:07:40.635    Test: blob_read ...passed
00:07:40.894    Test: blob_rw_verify ...passed
00:07:40.894    Test: blob_rw_verify_iov_nomem ...passed
00:07:40.894    Test: blob_rw_iov_read_only ...passed
00:07:40.894    Test: blob_xattr ...passed
00:07:41.153    Test: blob_dirty_shutdown ...passed
00:07:41.153    Test: blob_is_degraded ...passed
00:07:41.153  Suite: blob_esnap_bs_copy_extent
00:07:41.153    Test: blob_esnap_create ...passed
00:07:41.153    Test: blob_esnap_thread_add_remove ...passed
00:07:41.153    Test: blob_esnap_clone_snapshot ...passed
00:07:41.412    Test: blob_esnap_clone_inflate ...passed
00:07:41.412    Test: blob_esnap_clone_decouple ...passed
00:07:41.412    Test: blob_esnap_clone_reload ...passed
00:07:41.412    Test: blob_esnap_hotplug ...passed
00:07:41.412  
00:07:41.412  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:41.412                suites     16     16    n/a      0        0
00:07:41.412                 tests    348    348    348      0        0
00:07:41.412               asserts  92605  92605  92605      0      n/a
00:07:41.412  
00:07:41.412  Elapsed time =   14.902 seconds
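
All sixteen suites in the run above follow the same CUnit registration pattern. For reference, a minimal self-contained skeleton in the shape of that output; the suite and test names are illustrative, not SPDK's actual registration code.

    #include <CUnit/Basic.h>

    /* Illustrative test body; real SPDK unit tests exercise library code. */
    static void blob_init_test(void)
    {
        CU_ASSERT(1 + 1 == 2);
    }

    int main(void)
    {
        if (CU_initialize_registry() != CUE_SUCCESS) {
            return CU_get_error();
        }
        CU_pSuite suite = CU_add_suite("blob", NULL, NULL);
        if (suite == NULL || CU_add_test(suite, "blob_init", blob_init_test) == NULL) {
            CU_cleanup_registry();
            return CU_get_error();
        }
        CU_basic_set_mode(CU_BRM_VERBOSE);
        CU_basic_run_tests();                    /* prints the per-test lines seen above */
        unsigned failures = CU_get_number_of_failures();
        CU_cleanup_registry();                   /* prints the run summary table */
        return failures == 0 ? 0 : 1;
    }
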
00:07:41.671   16:51:34	-- unit/unittest.sh@41 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blob/blob_bdev.c/blob_bdev_ut
00:07:41.671  
00:07:41.671  
00:07:41.671       CUnit - A unit testing framework for C - Version 2.1-3
00:07:41.671       http://cunit.sourceforge.net/
00:07:41.671  
00:07:41.671  
00:07:41.671  Suite: blob_bdev
00:07:41.671    Test: create_bs_dev ...passed
00:07:41.671    Test: create_bs_dev_ro ...[2024-11-19 16:51:34.365589] /home/vagrant/spdk_repo/spdk/module/blob/bdev/blob_bdev.c: 507:spdk_bdev_create_bs_dev: *ERROR*: bdev name 'nope': unsupported options
00:07:41.671  passed
00:07:41.671    Test: create_bs_dev_rw ...passed
00:07:41.671    Test: claim_bs_dev ...[2024-11-19 16:51:34.366797] /home/vagrant/spdk_repo/spdk/module/blob/bdev/blob_bdev.c: 340:spdk_bs_bdev_claim: *ERROR*: could not claim bs dev
00:07:41.671  passed
00:07:41.671    Test: claim_bs_dev_ro ...passed
00:07:41.671    Test: deferred_destroy_refs ...passed
00:07:41.671    Test: deferred_destroy_channels ...passed
00:07:41.671    Test: deferred_destroy_threads ...passed
00:07:41.671  
00:07:41.671  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:41.671                suites      1      1    n/a      0        0
00:07:41.671                 tests      8      8      8      0        0
00:07:41.671               asserts    119    119    119      0      n/a
00:07:41.671  
00:07:41.671  Elapsed time =    0.001 seconds
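
The create_bs_dev_ro and claim_bs_dev failures exercise the blob-bdev shim that turns a bdev into a blobstore device and then claims it for exclusive use. A hedged sketch of the happy path follows, assuming a running SPDK application and an existing bdev named "Malloc0"; the module declaration is illustrative.

    #include "spdk/bdev.h"
    #include "spdk/bdev_module.h"
    #include "spdk/blob_bdev.h"
    #include "spdk/log.h"

    /* Illustrative claiming module; real modules register with the bdev layer. */
    static struct spdk_bdev_module g_ut_claimer = { .name = "ut_claimer" };

    static void
    bdev_event_cb(enum spdk_bdev_event_type type, struct spdk_bdev *bdev, void *ctx)
    {
        (void)bdev; (void)ctx;
        SPDK_NOTICELOG("bdev event %d\n", type);
    }

    static struct spdk_bs_dev *
    open_and_claim(void)
    {
        struct spdk_bs_dev *bs_dev = NULL;

        if (spdk_bdev_create_bs_dev_ext("Malloc0", bdev_event_cb, NULL, &bs_dev) != 0) {
            return NULL;                      /* cf. "unsupported options" above */
        }
        if (spdk_bs_bdev_claim(bs_dev, &g_ut_claimer) != 0) {
            bs_dev->destroy(bs_dev);          /* cf. "could not claim bs dev" above */
            return NULL;
        }
        return bs_dev;
    }
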
00:07:41.671   16:51:34	-- unit/unittest.sh@42 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/tree.c/tree_ut
00:07:41.671  
00:07:41.671  
00:07:41.671       CUnit - A unit testing framework for C - Version 2.1-3
00:07:41.671       http://cunit.sourceforge.net/
00:07:41.671  
00:07:41.671  
00:07:41.671  Suite: tree
00:07:41.671    Test: blobfs_tree_op_test ...passed
00:07:41.671  
00:07:41.671  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:41.671                suites      1      1    n/a      0        0
00:07:41.671                 tests      1      1      1      0        0
00:07:41.671               asserts     27     27     27      0      n/a
00:07:41.672  
00:07:41.672  Elapsed time =    0.000 seconds
00:07:41.672   16:51:34	-- unit/unittest.sh@43 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/blobfs_async_ut/blobfs_async_ut
00:07:41.672  
00:07:41.672  
00:07:41.672       CUnit - A unit testing framework for C - Version 2.1-3
00:07:41.672       http://cunit.sourceforge.net/
00:07:41.672  
00:07:41.672  
00:07:41.672  Suite: blobfs_async_ut
00:07:41.931    Test: fs_init ...passed
00:07:41.931    Test: fs_open ...passed
00:07:41.931    Test: fs_create ...passed
00:07:41.931    Test: fs_truncate ...passed
00:07:41.931    Test: fs_rename ...[2024-11-19 16:51:34.647855] /home/vagrant/spdk_repo/spdk/lib/blobfs/blobfs.c:1476:spdk_fs_delete_file_async: *ERROR*: Cannot find the file=file1 to delete
00:07:41.931  passed
00:07:41.931    Test: fs_rw_async ...passed
00:07:41.931    Test: fs_writev_readv_async ...passed
00:07:41.931    Test: tree_find_buffer_ut ...passed
00:07:41.931    Test: channel_ops ...passed
00:07:41.931    Test: channel_ops_sync ...passed
00:07:41.931  
00:07:41.931  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:41.931                suites      1      1    n/a      0        0
00:07:41.931                 tests     10     10     10      0        0
00:07:41.931               asserts    292    292    292      0      n/a
00:07:41.931  
00:07:41.931  Elapsed time =    0.272 seconds
00:07:41.931   16:51:34	-- unit/unittest.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/blobfs_sync_ut/blobfs_sync_ut
00:07:42.191  
00:07:42.191  
00:07:42.191       CUnit - A unit testing framework for C - Version 2.1-3
00:07:42.191       http://cunit.sourceforge.net/
00:07:42.191  
00:07:42.191  
00:07:42.191  Suite: blobfs_sync_ut
00:07:42.191    Test: cache_read_after_write ...[2024-11-19 16:51:34.890885] /home/vagrant/spdk_repo/spdk/lib/blobfs/blobfs.c:1476:spdk_fs_delete_file_async: *ERROR*: Cannot find the file=testfile to delete
00:07:42.191  passed
00:07:42.191    Test: file_length ...passed
00:07:42.191    Test: append_write_to_extend_blob ...passed
00:07:42.191    Test: partial_buffer ...passed
00:07:42.191    Test: cache_write_null_buffer ...passed
00:07:42.191    Test: fs_create_sync ...passed
00:07:42.191    Test: fs_rename_sync ...passed
00:07:42.191    Test: cache_append_no_cache ...passed
00:07:42.191    Test: fs_delete_file_without_close ...passed
00:07:42.191  
00:07:42.191  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:42.191                suites      1      1    n/a      0        0
00:07:42.191                 tests      9      9      9      0        0
00:07:42.191               asserts    345    345    345      0      n/a
00:07:42.191  
00:07:42.191  Elapsed time =    0.384 seconds
00:07:42.450   16:51:35	-- unit/unittest.sh@46 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/blobfs_bdev.c/blobfs_bdev_ut
00:07:42.450  
00:07:42.450  
00:07:42.450       CUnit - A unit testing framework for C - Version 2.1-3
00:07:42.450       http://cunit.sourceforge.net/
00:07:42.450  
00:07:42.450  
00:07:42.450  Suite: blobfs_bdev_ut
00:07:42.450    Test: spdk_blobfs_bdev_detect_test ...passed
00:07:42.450    Test: spdk_blobfs_bdev_create_test ...[2024-11-19 16:51:35.073931] /home/vagrant/spdk_repo/spdk/module/blobfs/bdev/blobfs_bdev.c:  59:_blobfs_bdev_unload_cb: *ERROR*: Failed to unload blobfs on bdev ut_bdev: errno -1
00:07:42.450  [2024-11-19 16:51:35.074300] /home/vagrant/spdk_repo/spdk/module/blobfs/bdev/blobfs_bdev.c:  59:_blobfs_bdev_unload_cb: *ERROR*: Failed to unload blobfs on bdev ut_bdev: errno -1
00:07:42.450  passed
00:07:42.450    Test: spdk_blobfs_bdev_mount_test ...passed
00:07:42.450  
00:07:42.450  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:42.450                suites      1      1    n/a      0        0
00:07:42.450                 tests      3      3      3      0        0
00:07:42.450               asserts      9      9      9      0      n/a
00:07:42.450  
00:07:42.450  Elapsed time =    0.001 seconds
00:07:42.450  
00:07:42.450  real	0m15.801s
00:07:42.450  user	0m15.032s
00:07:42.450  sys	0m0.970s
00:07:42.450   16:51:35	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:07:42.450   16:51:35	-- common/autotest_common.sh@10 -- # set +x
00:07:42.450  ************************************
00:07:42.450  END TEST unittest_blob_blobfs
00:07:42.450  ************************************
00:07:42.450   16:51:35	-- unit/unittest.sh@208 -- # run_test unittest_event unittest_event
00:07:42.450   16:51:35	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:07:42.450   16:51:35	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:07:42.450   16:51:35	-- common/autotest_common.sh@10 -- # set +x
00:07:42.450  ************************************
00:07:42.451  START TEST unittest_event
00:07:42.451  ************************************
00:07:42.451   16:51:35	-- common/autotest_common.sh@1114 -- # unittest_event
00:07:42.451   16:51:35	-- unit/unittest.sh@50 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/event/app.c/app_ut
00:07:42.451  
00:07:42.451  
00:07:42.451       CUnit - A unit testing framework for C - Version 2.1-3
00:07:42.451       http://cunit.sourceforge.net/
00:07:42.451  
00:07:42.451  
00:07:42.451  Suite: app_suite
00:07:42.451    Test: test_spdk_app_parse_args ...app_ut [options]
00:07:42.451  options:
00:07:42.451   -c, --config <config>     JSON config file (default none)
00:07:42.451       --json <config>       JSON config file (default none)
00:07:42.451       --json-ignore-init-errors
00:07:42.451                             don't exit on invalid config entry
00:07:42.451  app_ut: invalid option -- 'z'
00:07:42.451  
00:07:42.451   -d, --limit-coredump      do not set max coredump size to RLIM_INFINITY
00:07:42.451   -g, --single-file-segments
00:07:42.451                             force creating just one hugetlbfs file
00:07:42.451   -h, --help                show this usage
00:07:42.451   -i, --shm-id <id>         shared memory ID (optional)
00:07:42.451   -m, --cpumask <mask or list>    core mask (like 0xF) or core list of '[]' embraced (like [0,1,10]) for DPDK
00:07:42.451       --lcores <list>       lcore to CPU mapping list. The list is in the format:
00:07:42.451                             <lcores[@CPUs]>[<,lcores[@CPUs]>...]
00:07:42.451                             lcores and cpus list are grouped by '(' and ')', e.g. '--lcores "(5-7)@(10-12)"'
00:07:42.451                             Within the group, '-' is used for range separator,
00:07:42.451                             ',' is used for single number separator.
00:07:42.451                             '( )' can be omitted for single element group,
00:07:42.451                             '@' can be omitted if cpus and lcores have the same value
00:07:42.451   -n, --mem-channels <num>  channel number of memory channels used for DPDK
00:07:42.451   -p, --main-core <id>      main (primary) core for DPDK
00:07:42.451   -r, --rpc-socket <path>   RPC listen address (default /var/tmp/spdk.sock)
00:07:42.451   -s, --mem-size <size>     memory size in MB for DPDK (default: 0MB)
00:07:42.451       --disable-cpumask-locks    Disable CPU core lock files.
00:07:42.451       --silence-noticelog   disable notice level logging to stderr
00:07:42.451       --msg-mempool-size <size>  global message memory pool size in count (default: 262143)
00:07:42.451   -u, --no-pci              disable PCI access
00:07:42.451       --wait-for-rpc        wait for RPCs to initialize subsystems
00:07:42.451       --max-delay <num>     maximum reactor delay (in microseconds)
00:07:42.451   -B, --pci-blocked <bdf>   pci addr to block (can be used more than once)
00:07:42.451   -A, --pci-allowed <bdf>   pci addr to allow (-B and -A cannot be used at the same time)
00:07:42.451   -R, --huge-unlink         unlink huge files after initialization
00:07:42.451   -v, --version             print SPDK version
00:07:42.451       --huge-dir <path>     use a specific hugetlbfs mount to reserve memory from
00:07:42.451       --iova-mode <pa/va>   set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA)
00:07:42.451       --base-virtaddr <addr>      the base virtual address for DPDK (default: 0x200000000000)
00:07:42.451       --num-trace-entries <num>   number of trace entries for each core, must be power of 2, setting 0 to disable trace (default 32768)
00:07:42.451                                   Tracepoints vary in size and can use more than one trace entry.
00:07:42.451       --rpcs-allowed	   comma-separated list of permitted RPCs
00:07:42.451       --env-context         Opaque context for use of the env implementation
00:07:42.451       --vfio-vf-token       VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver
00:07:42.451       --no-huge             run without using hugepages
00:07:42.451   -L, --logflag <flag>    enable log flag (all, json_util, log, rpc, thread, trace)
00:07:42.451   -e, --tpoint-group <group-name>[:<tpoint_mask>]
00:07:42.451                             group_name - tracepoint group name for spdk trace buffers (thread, all)
00:07:42.451                             tpoint_mask - tracepoint mask for enabling individual tpoints inside a tracepoint group. First tpoint inside a group can be enabled by setting tpoint_mask to 1 (e.g. bdev:0x1).
00:07:42.451                              Groups and masks can be combined (e.g. thread,bdev:0x1).
00:07:42.451                              All available tpoints can be found in /include/spdk_internal/trace_defs.h
00:07:42.451       --interrupt-mode      set app to interrupt mode (Warning: CPU usage will be reduced only if all pollers in the app support interrupt mode)
00:07:42.451  app_ut: unrecognized option '--test-long-opt'
00:07:42.451  app_ut [options] (full usage listing identical to the one printed above; elided)
00:07:42.451  [2024-11-19 16:51:35.172634] /home/vagrant/spdk_repo/spdk/lib/event/app.c:1030:spdk_app_parse_args: *ERROR*: Duplicated option 'c' between app-specific command line parameter and generic spdk opts.
00:07:42.451  [2024-11-19 16:51:35.172937] /home/vagrant/spdk_repo/spdk/lib/event/app.c:1211:spdk_app_parse_args: *ERROR*: -B and -W cannot be used at the same time
00:07:42.452  [2024-11-19 16:51:35.173177] /home/vagrant/spdk_repo/spdk/lib/event/app.c:1116:spdk_app_parse_args: *ERROR*: Invalid main core --single-file-segments
00:07:42.451  app_ut [options] (full usage listing identical to the one printed above; elided)
00:07:42.452  passed
00:07:42.452  
00:07:42.452  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:42.452                suites      1      1    n/a      0        0
00:07:42.452                 tests      1      1      1      0        0
00:07:42.452               asserts      8      8      8      0      n/a
00:07:42.452  
00:07:42.452  Elapsed time =    0.001 seconds
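
Among the parse failures above is "Duplicated option 'c' between app-specific command line parameter and generic spdk opts": an application may not reuse a short option that the generic SPDK getopt string already defines. A generic, standalone version of that collision check (not SPDK's actual code) looks like this:

    #include <stdbool.h>
    #include <stdio.h>
    #include <string.h>

    /* Reject any app-specific short option that already appears in the
     * generic option string; ':' is getopt syntax, not an option letter. */
    static bool
    getopt_strings_collide(const char *generic, const char *app_specific)
    {
        for (const char *p = app_specific; *p != '\0'; p++) {
            if (*p != ':' && strchr(generic, *p) != NULL) {
                fprintf(stderr, "duplicated option '%c'\n", *p);
                return true;
            }
        }
        return false;
    }

    int main(void)
    {
        /* 'c' already belongs to the generic options, as in the log. */
        return getopt_strings_collide("c:de:ghi:m:n:p:r:s:uvA:B:L:R", "c:x") ? 1 : 0;
    }
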
00:07:42.452   16:51:35	-- unit/unittest.sh@51 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/event/reactor.c/reactor_ut
00:07:42.452  
00:07:42.452  
00:07:42.452       CUnit - A unit testing framework for C - Version 2.1-3
00:07:42.452       http://cunit.sourceforge.net/
00:07:42.452  
00:07:42.452  
00:07:42.452  Suite: app_suite
00:07:42.452    Test: test_create_reactor ...passed
00:07:42.452    Test: test_init_reactors ...passed
00:07:42.452    Test: test_event_call ...passed
00:07:42.452    Test: test_schedule_thread ...passed
00:07:42.452    Test: test_reschedule_thread ...passed
00:07:42.452    Test: test_bind_thread ...passed
00:07:42.452    Test: test_for_each_reactor ...passed
00:07:42.452    Test: test_reactor_stats ...passed
00:07:42.452    Test: test_scheduler ...passed
00:07:42.452    Test: test_governor ...passed
00:07:42.452  
00:07:42.452  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:42.452                suites      1      1    n/a      0        0
00:07:42.452                 tests     10     10     10      0        0
00:07:42.452               asserts    344    344    344      0      n/a
00:07:42.452  
00:07:42.452  Elapsed time =    0.021 seconds
00:07:42.452  
00:07:42.452  real	0m0.101s
00:07:42.452  user	0m0.060s
00:07:42.452  sys	0m0.042s
00:07:42.452   16:51:35	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:07:42.452   16:51:35	-- common/autotest_common.sh@10 -- # set +x
00:07:42.452  ************************************
00:07:42.452  END TEST unittest_event
00:07:42.452  ************************************
00:07:42.452    16:51:35	-- unit/unittest.sh@209 -- # uname -s
00:07:42.711   16:51:35	-- unit/unittest.sh@209 -- # '[' Linux = Linux ']'
00:07:42.711   16:51:35	-- unit/unittest.sh@210 -- # run_test unittest_ftl unittest_ftl
00:07:42.711   16:51:35	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:07:42.711   16:51:35	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:07:42.711   16:51:35	-- common/autotest_common.sh@10 -- # set +x
00:07:42.711  ************************************
00:07:42.711  START TEST unittest_ftl
00:07:42.711  ************************************
00:07:42.711   16:51:35	-- common/autotest_common.sh@1114 -- # unittest_ftl
00:07:42.711   16:51:35	-- unit/unittest.sh@55 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_band.c/ftl_band_ut
00:07:42.711  
00:07:42.711  
00:07:42.711       CUnit - A unit testing framework for C - Version 2.1-3
00:07:42.711       http://cunit.sourceforge.net/
00:07:42.711  
00:07:42.711  
00:07:42.711  Suite: ftl_band_suite
00:07:42.711    Test: test_band_block_offset_from_addr_base ...passed
00:07:42.711    Test: test_band_block_offset_from_addr_offset ...passed
00:07:42.711    Test: test_band_addr_from_block_offset ...passed
00:07:42.711    Test: test_band_set_addr ...passed
00:07:42.971    Test: test_invalidate_addr ...passed
00:07:42.971    Test: test_next_xfer_addr ...passed
00:07:42.971  
00:07:42.971  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:42.971                suites      1      1    n/a      0        0
00:07:42.971                 tests      6      6      6      0        0
00:07:42.971               asserts  30356  30356  30356      0      n/a
00:07:42.971  
00:07:42.971  Elapsed time =    0.250 seconds
00:07:42.971   16:51:35	-- unit/unittest.sh@56 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_bitmap.c/ftl_bitmap_ut
00:07:42.971  
00:07:42.971  
00:07:42.971       CUnit - A unit testing framework for C - Version 2.1-3
00:07:42.971       http://cunit.sourceforge.net/
00:07:42.971  
00:07:42.971  
00:07:42.971  Suite: ftl_bitmap
00:07:42.971    Test: test_ftl_bitmap_create ...[2024-11-19 16:51:35.716718] /home/vagrant/spdk_repo/spdk/lib/ftl/utils/ftl_bitmap.c:  52:ftl_bitmap_create: *ERROR*: Buffer for bitmap must be aligned to 8 bytes
00:07:42.971  passed
00:07:42.971    Test: test_ftl_bitmap_get ...passed
00:07:42.971    Test: test_ftl_bitmap_set ...passed
00:07:42.971    Test: test_ftl_bitmap_clear ...passed
00:07:42.971    Test: test_ftl_bitmap_find_first_set ...passed
00:07:42.971    Test: test_ftl_bitmap_find_first_clear ...passed
00:07:42.971    Test: test_ftl_bitmap_count_set ...[2024-11-19 16:51:35.716992] /home/vagrant/spdk_repo/spdk/lib/ftl/utils/ftl_bitmap.c:  58:ftl_bitmap_create: *ERROR*: Size of buffer for bitmap must be divisible by 8 bytes
00:07:42.971  passed
00:07:42.971  
00:07:42.971  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:42.971                suites      1      1    n/a      0        0
00:07:42.971                 tests      7      7      7      0        0
00:07:42.971               asserts    137    137    137      0      n/a
00:07:42.971  
00:07:42.971  Elapsed time =    0.001 seconds
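
The two ftl_bitmap_create errors state their constraints directly: the backing buffer must be 8-byte aligned and its size divisible by 8. A standalone sketch of those checks follows (illustrative names, not SPDK's implementation):

    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* Mirrors the two constraints reported above. */
    static int
    bitmap_buf_check(const void *buf, size_t size)
    {
        if (((uintptr_t)buf % 8) != 0) {
            fprintf(stderr, "buffer for bitmap must be aligned to 8 bytes\n");
            return -1;
        }
        if (size % 8 != 0) {
            fprintf(stderr, "size of buffer for bitmap must be divisible by 8 bytes\n");
            return -1;
        }
        return 0;
    }

    int main(void)
    {
        uint64_t backing[4] = {0};
        char *misaligned = (char *)backing + 1;

        bitmap_buf_check(misaligned, 24);      /* fails the alignment check */
        return bitmap_buf_check(backing, 32);  /* passes both checks */
    }
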
00:07:42.971   16:51:35	-- unit/unittest.sh@57 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_io.c/ftl_io_ut
00:07:42.971  
00:07:42.971  
00:07:42.971       CUnit - A unit testing framework for C - Version 2.1-3
00:07:42.971       http://cunit.sourceforge.net/
00:07:42.971  
00:07:42.971  
00:07:42.971  Suite: ftl_io_suite
00:07:42.971    Test: test_completion ...passed
00:07:42.971    Test: test_multiple_ios ...passed
00:07:42.971  
00:07:42.971  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:42.971                suites      1      1    n/a      0        0
00:07:42.971                 tests      2      2      2      0        0
00:07:42.971               asserts     47     47     47      0      n/a
00:07:42.971  
00:07:42.971  Elapsed time =    0.004 seconds
00:07:42.971   16:51:35	-- unit/unittest.sh@58 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_mngt/ftl_mngt_ut
00:07:42.971  
00:07:42.971  
00:07:42.971       CUnit - A unit testing framework for C - Version 2.1-3
00:07:42.971       http://cunit.sourceforge.net/
00:07:42.971  
00:07:42.971  
00:07:42.971  Suite: ftl_mngt
00:07:42.971    Test: test_next_step ...passed
00:07:42.971    Test: test_continue_step ...passed
00:07:42.971    Test: test_get_func_and_step_cntx_alloc ...passed
00:07:42.971    Test: test_fail_step ...passed
00:07:42.971    Test: test_mngt_call_and_call_rollback ...passed
00:07:42.971    Test: test_nested_process_failure ...passed
00:07:42.971  
00:07:42.971  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:42.971                suites      1      1    n/a      0        0
00:07:42.971                 tests      6      6      6      0        0
00:07:42.971               asserts    176    176    176      0      n/a
00:07:42.971  
00:07:42.971  Elapsed time =    0.003 seconds
00:07:42.971   16:51:35	-- unit/unittest.sh@59 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_mempool.c/ftl_mempool_ut
00:07:43.231  
00:07:43.231  
00:07:43.231       CUnit - A unit testing framework for C - Version 2.1-3
00:07:43.231       http://cunit.sourceforge.net/
00:07:43.231  
00:07:43.231  
00:07:43.231  Suite: ftl_mempool
00:07:43.231    Test: test_ftl_mempool_create ...passed
00:07:43.231    Test: test_ftl_mempool_get_put ...passed
00:07:43.231  
00:07:43.231  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:43.231                suites      1      1    n/a      0        0
00:07:43.231                 tests      2      2      2      0        0
00:07:43.231               asserts     36     36     36      0      n/a
00:07:43.231  
00:07:43.231  Elapsed time =    0.000 seconds
00:07:43.231   16:51:35	-- unit/unittest.sh@60 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_l2p/ftl_l2p_ut
00:07:43.231  
00:07:43.231  
00:07:43.231       CUnit - A unit testing framework for C - Version 2.1-3
00:07:43.231       http://cunit.sourceforge.net/
00:07:43.231  
00:07:43.231  
00:07:43.231  Suite: ftl_addr64_suite
00:07:43.231    Test: test_addr_cached ...passed
00:07:43.231  
00:07:43.231  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:43.231                suites      1      1    n/a      0        0
00:07:43.231                 tests      1      1      1      0        0
00:07:43.231               asserts   1536   1536   1536      0      n/a
00:07:43.231  
00:07:43.231  Elapsed time =    0.000 seconds
00:07:43.231   16:51:35	-- unit/unittest.sh@61 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_sb/ftl_sb_ut
00:07:43.231  
00:07:43.231  
00:07:43.231       CUnit - A unit testing framework for C - Version 2.1-3
00:07:43.231       http://cunit.sourceforge.net/
00:07:43.231  
00:07:43.231  
00:07:43.231  Suite: ftl_sb
00:07:43.231    Test: test_sb_crc_v2 ...passed
00:07:43.231    Test: test_sb_crc_v3 ...passed
00:07:43.231    Test: test_sb_v3_md_layout ...[2024-11-19 16:51:35.900750] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 143:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Missing regions
00:07:43.231  [2024-11-19 16:51:35.901510] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 131:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Buffer overflow
00:07:43.231  [2024-11-19 16:51:35.901651] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 115:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Buffer overflow
00:07:43.231  [2024-11-19 16:51:35.901771] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 115:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Buffer overflow
00:07:43.231  [2024-11-19 16:51:35.901874] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 125:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Looping regions found
00:07:43.231  [2024-11-19 16:51:35.902041] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c:  93:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Unsupported MD region type found
00:07:43.231  [2024-11-19 16:51:35.902141] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c:  88:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Invalid MD region type found
00:07:43.231  [2024-11-19 16:51:35.902253] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c:  88:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Invalid MD region type found
00:07:43.231  [2024-11-19 16:51:35.902397] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 125:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Looping regions found
00:07:43.231  [2024-11-19 16:51:35.902522] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 105:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Multiple/looping regions found
00:07:43.231  passed
00:07:43.231    Test: test_sb_v5_md_layout ...[2024-11-19 16:51:35.902648] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 105:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Multiple/looping regions found
00:07:43.231  passed
00:07:43.231  
00:07:43.231  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:43.231                suites      1      1    n/a      0        0
00:07:43.231                 tests      4      4      4      0        0
00:07:43.231               asserts    148    148    148      0      n/a
00:07:43.231  
00:07:43.231  Elapsed time =    0.002 seconds
00:07:43.231   16:51:35	-- unit/unittest.sh@62 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_layout_upgrade/ftl_layout_upgrade_ut
00:07:43.231  
00:07:43.231  
00:07:43.231       CUnit - A unit testing framework for C - Version 2.1-3
00:07:43.231       http://cunit.sourceforge.net/
00:07:43.231  
00:07:43.231  
00:07:43.231  Suite: ftl_layout_upgrade
00:07:43.231    Test: test_l2p_upgrade ...passed
00:07:43.231  
00:07:43.231  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:43.231                suites      1      1    n/a      0        0
00:07:43.231                 tests      1      1      1      0        0
00:07:43.231               asserts    140    140    140      0      n/a
00:07:43.231  
00:07:43.231  Elapsed time =    0.001 seconds
00:07:43.231  
00:07:43.231  real	0m0.632s
00:07:43.231  user	0m0.265s
00:07:43.231  sys	0m0.370s
00:07:43.231   16:51:35	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:07:43.231   16:51:35	-- common/autotest_common.sh@10 -- # set +x
00:07:43.231  ************************************
00:07:43.231  END TEST unittest_ftl
00:07:43.231  ************************************
00:07:43.231   16:51:36	-- unit/unittest.sh@213 -- # run_test unittest_accel /home/vagrant/spdk_repo/spdk/test/unit/lib/accel/accel.c/accel_ut
00:07:43.231   16:51:36	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:07:43.231   16:51:36	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:07:43.231   16:51:36	-- common/autotest_common.sh@10 -- # set +x
00:07:43.231  ************************************
00:07:43.231  START TEST unittest_accel
00:07:43.231  ************************************
00:07:43.231   16:51:36	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/accel/accel.c/accel_ut
00:07:43.231  
00:07:43.231  
00:07:43.231       CUnit - A unit testing framework for C - Version 2.1-3
00:07:43.231       http://cunit.sourceforge.net/
00:07:43.231  
00:07:43.231  
00:07:43.231  Suite: accel_sequence
00:07:43.231    Test: test_sequence_fill_copy ...passed
00:07:43.231    Test: test_sequence_abort ...passed
00:07:43.231    Test: test_sequence_append_error ...passed
00:07:43.231    Test: test_sequence_completion_error ...[2024-11-19 16:51:36.065885] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1926:accel_sequence_task_cb: *ERROR*: Failed to execute fill operation, sequence: 0x7fdc628287c0
00:07:43.231  [2024-11-19 16:51:36.066414] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1926:accel_sequence_task_cb: *ERROR*: Failed to execute decompress operation, sequence: 0x7fdc628287c0
00:07:43.231  [2024-11-19 16:51:36.066484] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1836:accel_process_sequence: *ERROR*: Failed to submit fill operation, sequence: 0x7fdc628287c0
00:07:43.231  passed
00:07:43.231    Test: test_sequence_decompress ...[2024-11-19 16:51:36.066559] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1836:accel_process_sequence: *ERROR*: Failed to submit decompress operation, sequence: 0x7fdc628287c0
00:07:43.231  passed
00:07:43.231    Test: test_sequence_reverse ...passed
00:07:43.231    Test: test_sequence_copy_elision ...passed
00:07:43.231    Test: test_sequence_accel_buffers ...passed
00:07:43.231    Test: test_sequence_memory_domain ...[2024-11-19 16:51:36.081400] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1728:accel_task_pull_data: *ERROR*: Failed to pull data from memory domain: UT_DMA, rc: -7
00:07:43.231  passed
00:07:43.231    Test: test_sequence_module_memory_domain ...[2024-11-19 16:51:36.081634] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1767:accel_task_push_data: *ERROR*: Failed to push data to memory domain: UT_DMA, rc: -98
00:07:43.231  passed
00:07:43.231    Test: test_sequence_crypto ...passed
00:07:43.231    Test: test_sequence_driver ...[2024-11-19 16:51:36.090053] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1875:accel_process_sequence: *ERROR*: Failed to execute sequence: 0x7fdc61c007c0 using driver: ut
00:07:43.231  [2024-11-19 16:51:36.090203] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1939:accel_sequence_task_cb: *ERROR*: Failed to execute fill operation, sequence: 0x7fdc61c007c0 through driver: ut
00:07:43.231  passed
00:07:43.491    Test: test_sequence_same_iovs ...passed
00:07:43.491    Test: test_sequence_crc32 ...passed
00:07:43.491  Suite: accel
00:07:43.491    Test: test_spdk_accel_task_complete ...passed
00:07:43.491    Test: test_get_task ...passed
00:07:43.491    Test: test_spdk_accel_submit_copy ...passed
00:07:43.491    Test: test_spdk_accel_submit_dualcast ...[2024-11-19 16:51:36.096531] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c: 432:spdk_accel_submit_dualcast: *ERROR*: Dualcast requires 4K alignment on dst addresses
00:07:43.491  [2024-11-19 16:51:36.096616] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c: 432:spdk_accel_submit_dualcast: *ERROR*: Dualcast requires 4K alignment on dst addresses
00:07:43.491  passed
00:07:43.491    Test: test_spdk_accel_submit_compare ...passed
00:07:43.491    Test: test_spdk_accel_submit_fill ...passed
00:07:43.491    Test: test_spdk_accel_submit_crc32c ...passed
00:07:43.491    Test: test_spdk_accel_submit_crc32cv ...passed
00:07:43.491    Test: test_spdk_accel_submit_copy_crc32c ...passed
00:07:43.491    Test: test_spdk_accel_submit_xor ...passed
00:07:43.491    Test: test_spdk_accel_module_find_by_name ...passed
00:07:43.491    Test: test_spdk_accel_module_register ...passed
00:07:43.491  
00:07:43.491  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:43.491                suites      2      2    n/a      0        0
00:07:43.491                 tests     26     26     26      0        0
00:07:43.491               asserts    831    831    831      0      n/a
00:07:43.491  
00:07:43.491  Elapsed time =    0.045 seconds
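
The dualcast failures above come from destination buffers that are not 4 KiB aligned, as spdk_accel_submit_dualcast requires. One portable way to satisfy that is posix_memalign; plain memcpy stands in for the accel submission in this standalone sketch.

    #define _POSIX_C_SOURCE 200112L
    #include <stdlib.h>
    #include <string.h>

    int main(void)
    {
        void *dst1 = NULL, *dst2 = NULL;
        char src[4096];

        memset(src, 0xab, sizeof(src));
        /* 4096 matches the alignment the error message demands. */
        if (posix_memalign(&dst1, 4096, sizeof(src)) != 0 ||
            posix_memalign(&dst2, 4096, sizeof(src)) != 0) {
            free(dst1);
            return 1;
        }
        /* A dualcast copies one source into two destinations; memcpy stands
         * in here for the actual accel submission. */
        memcpy(dst1, src, sizeof(src));
        memcpy(dst2, src, sizeof(src));
        free(dst1);
        free(dst2);
        return 0;
    }
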
00:07:43.491  
00:07:43.491  real	0m0.100s
00:07:43.491  user	0m0.025s
00:07:43.491  sys	0m0.075s
00:07:43.491   16:51:36	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:07:43.491   16:51:36	-- common/autotest_common.sh@10 -- # set +x
00:07:43.491  ************************************
00:07:43.491  END TEST unittest_accel
00:07:43.491  ************************************
00:07:43.491   16:51:36	-- unit/unittest.sh@214 -- # run_test unittest_ioat /home/vagrant/spdk_repo/spdk/test/unit/lib/ioat/ioat.c/ioat_ut
00:07:43.491   16:51:36	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:07:43.491   16:51:36	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:07:43.491   16:51:36	-- common/autotest_common.sh@10 -- # set +x
00:07:43.491  ************************************
00:07:43.491  START TEST unittest_ioat
00:07:43.491  ************************************
00:07:43.491   16:51:36	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ioat/ioat.c/ioat_ut
00:07:43.491  
00:07:43.491  
00:07:43.491       CUnit - A unit testing framework for C - Version 2.1-3
00:07:43.491       http://cunit.sourceforge.net/
00:07:43.491  
00:07:43.491  
00:07:43.491  Suite: ioat
00:07:43.491    Test: ioat_state_check ...passed
00:07:43.491  
00:07:43.491  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:43.491                suites      1      1    n/a      0        0
00:07:43.491                 tests      1      1      1      0        0
00:07:43.491               asserts     32     32     32      0      n/a
00:07:43.491  
00:07:43.491  Elapsed time =    0.000 seconds
00:07:43.491  
00:07:43.491  real	0m0.035s
00:07:43.491  user	0m0.021s
00:07:43.491  sys	0m0.014s
00:07:43.491   16:51:36	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:07:43.491   16:51:36	-- common/autotest_common.sh@10 -- # set +x
00:07:43.491  ************************************
00:07:43.491  END TEST unittest_ioat
00:07:43.491  ************************************
00:07:43.491   16:51:36	-- unit/unittest.sh@215 -- # grep -q '#define SPDK_CONFIG_IDXD 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h
00:07:43.491   16:51:36	-- unit/unittest.sh@216 -- # run_test unittest_idxd_user /home/vagrant/spdk_repo/spdk/test/unit/lib/idxd/idxd_user.c/idxd_user_ut
00:07:43.491   16:51:36	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:07:43.491   16:51:36	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:07:43.491   16:51:36	-- common/autotest_common.sh@10 -- # set +x
00:07:43.491  ************************************
00:07:43.491  START TEST unittest_idxd_user
00:07:43.491  ************************************
00:07:43.491   16:51:36	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/idxd/idxd_user.c/idxd_user_ut
00:07:43.491  
00:07:43.491  
00:07:43.491       CUnit - A unit testing framework for C - Version 2.1-3
00:07:43.491       http://cunit.sourceforge.net/
00:07:43.491  
00:07:43.491  
00:07:43.491  Suite: idxd_user
00:07:43.491    Test: test_idxd_wait_cmd ...[2024-11-19 16:51:36.331820] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c:  52:idxd_wait_cmd: *ERROR*: Command status reg reports error 0x1
00:07:43.491  [2024-11-19 16:51:36.332758] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c:  46:idxd_wait_cmd: *ERROR*: Command timeout, waited 1
00:07:43.491  passed
00:07:43.491    Test: test_idxd_reset_dev ...[2024-11-19 16:51:36.333078] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c:  52:idxd_wait_cmd: *ERROR*: Command status reg reports error 0x1
00:07:43.491  [2024-11-19 16:51:36.333232] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 132:idxd_reset_dev: *ERROR*: Error resetting device 4294967274
00:07:43.491  passed
00:07:43.491    Test: test_idxd_group_config ...passed
00:07:43.491    Test: test_idxd_wq_config ...passed
00:07:43.491  
00:07:43.492  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:43.492                suites      1      1    n/a      0        0
00:07:43.492                 tests      4      4      4      0        0
00:07:43.492               asserts     20     20     20      0      n/a
00:07:43.492  
00:07:43.492  Elapsed time =    0.001 seconds
00:07:43.752  
00:07:43.752  real	0m0.043s
00:07:43.752  user	0m0.027s
00:07:43.752  sys	0m0.016s
00:07:43.752   16:51:36	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:07:43.752   16:51:36	-- common/autotest_common.sh@10 -- # set +x
00:07:43.752  ************************************
00:07:43.752  END TEST unittest_idxd_user
00:07:43.752  ************************************
00:07:43.752   16:51:36	-- unit/unittest.sh@218 -- # run_test unittest_iscsi unittest_iscsi
00:07:43.752   16:51:36	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:07:43.752   16:51:36	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:07:43.752   16:51:36	-- common/autotest_common.sh@10 -- # set +x
00:07:43.752  ************************************
00:07:43.752  START TEST unittest_iscsi
00:07:43.752  ************************************
00:07:43.752   16:51:36	-- common/autotest_common.sh@1114 -- # unittest_iscsi
00:07:43.752   16:51:36	-- unit/unittest.sh@66 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/conn.c/conn_ut
00:07:43.752  
00:07:43.752  
00:07:43.752       CUnit - A unit testing framework for C - Version 2.1-3
00:07:43.752       http://cunit.sourceforge.net/
00:07:43.752  
00:07:43.752  
00:07:43.752  Suite: conn_suite
00:07:43.752    Test: read_task_split_in_order_case ...passed
00:07:43.752    Test: read_task_split_reverse_order_case ...passed
00:07:43.752    Test: propagate_scsi_error_status_for_split_read_tasks ...passed
00:07:43.752    Test: process_non_read_task_completion_test ...passed
00:07:43.752    Test: free_tasks_on_connection ...passed
00:07:43.752    Test: free_tasks_with_queued_datain ...passed
00:07:43.752    Test: abort_queued_datain_task_test ...passed
00:07:43.752    Test: abort_queued_datain_tasks_test ...passed
00:07:43.752  
00:07:43.752  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:43.752                suites      1      1    n/a      0        0
00:07:43.752                 tests      8      8      8      0        0
00:07:43.752               asserts    230    230    230      0      n/a
00:07:43.752  
00:07:43.752  Elapsed time =    0.000 seconds
00:07:43.752   16:51:36	-- unit/unittest.sh@67 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/param.c/param_ut
00:07:43.752  
00:07:43.752  
00:07:43.752       CUnit - A unit testing framework for C - Version 2.1-3
00:07:43.752       http://cunit.sourceforge.net/
00:07:43.752  
00:07:43.752  
00:07:43.752  Suite: iscsi_suite
00:07:43.752    Test: param_negotiation_test ...passed
00:07:43.752    Test: list_negotiation_test ...passed
00:07:43.752    Test: parse_valid_test ...passed
00:07:43.752    Test: parse_invalid_test ...[2024-11-19 16:51:36.497613] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 202:iscsi_parse_param: *ERROR*: '=' not found
00:07:43.752  [2024-11-19 16:51:36.497939] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 202:iscsi_parse_param: *ERROR*: '=' not found
00:07:43.752  [2024-11-19 16:51:36.498003] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 208:iscsi_parse_param: *ERROR*: Empty key
00:07:43.752  [2024-11-19 16:51:36.498093] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 248:iscsi_parse_param: *ERROR*: Overflow Val 8193
00:07:43.752  [2024-11-19 16:51:36.498257] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 248:iscsi_parse_param: *ERROR*: Overflow Val 256
00:07:43.752  [2024-11-19 16:51:36.498345] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 215:iscsi_parse_param: *ERROR*: Key name length is bigger than 63
00:07:43.752  [2024-11-19 16:51:36.498483] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 229:iscsi_parse_param: *ERROR*: Duplicated Key B
00:07:43.752  passed
00:07:43.752  
00:07:43.752  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:43.752                suites      1      1    n/a      0        0
00:07:43.752                 tests      4      4      4      0        0
00:07:43.752               asserts    161    161    161      0      n/a
00:07:43.752  
00:07:43.752  Elapsed time =    0.006 seconds
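
The parse_invalid_test errors above enumerate the validation rules for iSCSI key=value negotiation text: a pair must contain '=', the key must be non-empty and at most 63 bytes, numeric values must stay within their negotiated bound, and keys may not repeat. A rough sketch of those checks under hypothetical names (this is not lib/iscsi/param.c; the duplicate-key check, which needs a list of already-seen keys, is omitted):

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Validate one "Key=Value" pair against the rules the log exercises. */
    static int parse_param(const char *pair, long max_val)
    {
        const char *eq = strchr(pair, '=');

        if (eq == NULL)
            return -1;                        /* "'=' not found" */
        if (eq == pair)
            return -1;                        /* "Empty key" */
        if ((size_t)(eq - pair) > 63)
            return -1;                        /* "Key name length is bigger than 63" */
        if (strtol(eq + 1, NULL, 10) > max_val)
            return -1;                        /* "Overflow Val 8193" etc. */
        return 0;
    }

    int main(void)
    {
        printf("%d\n", parse_param("NoSeparatorHere", 8192));               /* -1 */
        printf("%d\n", parse_param("=Value", 8192));                        /* -1 */
        printf("%d\n", parse_param("MaxRecvDataSegmentLength=8193", 8192)); /* -1 */
        printf("%d\n", parse_param("FirstBurstLength=4096", 8192));         /*  0 */
        return 0;
    }
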
00:07:43.752   16:51:36	-- unit/unittest.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/tgt_node.c/tgt_node_ut
00:07:43.752  
00:07:43.752  
00:07:43.752       CUnit - A unit testing framework for C - Version 2.1-3
00:07:43.752       http://cunit.sourceforge.net/
00:07:43.752  
00:07:43.752  
00:07:43.752  Suite: iscsi_target_node_suite
00:07:43.752    Test: add_lun_test_cases ...[2024-11-19 16:51:36.541140] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1248:iscsi_tgt_node_add_lun: *ERROR*: Target has active connections (count=1)
00:07:43.752  [2024-11-19 16:51:36.541487] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1254:iscsi_tgt_node_add_lun: *ERROR*: Specified LUN ID (-2) is negative
00:07:43.752  [2024-11-19 16:51:36.541593] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1260:iscsi_tgt_node_add_lun: *ERROR*: SCSI device is not found
00:07:43.752  [2024-11-19 16:51:36.541643] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1260:iscsi_tgt_node_add_lun: *ERROR*: SCSI device is not found
00:07:43.752  [2024-11-19 16:51:36.541684] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1266:iscsi_tgt_node_add_lun: *ERROR*: spdk_scsi_dev_add_lun failed
00:07:43.752  passed
00:07:43.752    Test: allow_any_allowed ...passed
00:07:43.752    Test: allow_ipv6_allowed ...passed
00:07:43.752    Test: allow_ipv6_denied ...passed
00:07:43.752    Test: allow_ipv6_invalid ...passed
00:07:43.752    Test: allow_ipv4_allowed ...passed
00:07:43.752    Test: allow_ipv4_denied ...passed
00:07:43.752    Test: allow_ipv4_invalid ...passed
00:07:43.752    Test: node_access_allowed ...passed
00:07:43.752    Test: node_access_denied_by_empty_netmask ...passed
00:07:43.752    Test: node_access_multi_initiator_groups_cases ...passed
00:07:43.752    Test: allow_iscsi_name_multi_maps_case ...passed
00:07:43.752    Test: chap_param_test_cases ...[2024-11-19 16:51:36.542205] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1035:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=1,r=1,m=0)
00:07:43.752  [2024-11-19 16:51:36.542258] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1035:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=0,r=0,m=1)
00:07:43.752  [2024-11-19 16:51:36.542326] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1035:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=1,r=0,m=1)
00:07:43.752  [2024-11-19 16:51:36.542381] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1035:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=1,r=1,m=1)
00:07:43.752  [2024-11-19 16:51:36.542431] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1026:iscsi_check_chap_params: *ERROR*: Invalid auth group ID (-1)
00:07:43.752  passed
00:07:43.752  
00:07:43.752  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:43.752                suites      1      1    n/a      0        0
00:07:43.752                 tests     13     13     13      0        0
00:07:43.752               asserts     50     50     50      0      n/a
00:07:43.752  
00:07:43.752  Elapsed time =    0.001 seconds
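
The chap_param_test_cases errors above pin down which CHAP flag combinations tgt_node rejects: CHAP cannot be simultaneously disabled and required (or mutual), mutual CHAP only makes sense when CHAP is required, and the auth group ID must be non-negative. A small sketch of that rule set with hypothetical names (not the actual iscsi_check_chap_params):

    #include <stdbool.h>
    #include <stdio.h>

    /* Reject the (d, r, m) combinations the log shows as invalid. */
    static bool chap_params_valid(bool disable, bool require, bool mutual, int group)
    {
        if (group < 0)
            return false;              /* "Invalid auth group ID (-1)" */
        if (disable && (require || mutual))
            return false;              /* e.g. (d=1,r=1,m=0), (d=1,r=0,m=1) */
        if (mutual && !require)
            return false;              /* (d=0,r=0,m=1) */
        return true;
    }

    int main(void)
    {
        printf("%d\n", chap_params_valid(true, true, false, 0));  /* 0 */
        printf("%d\n", chap_params_valid(false, false, true, 0)); /* 0 */
        printf("%d\n", chap_params_valid(false, true, true, 0));  /* 1 */
        return 0;
    }
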
00:07:43.752   16:51:36	-- unit/unittest.sh@69 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/iscsi.c/iscsi_ut
00:07:43.752  
00:07:43.752  
00:07:43.752       CUnit - A unit testing framework for C - Version 2.1-3
00:07:43.752       http://cunit.sourceforge.net/
00:07:43.752  
00:07:43.752  
00:07:43.752  Suite: iscsi_suite
00:07:43.752    Test: op_login_check_target_test ...passed
00:07:43.752    Test: op_login_session_normal_test ...[2024-11-19 16:51:36.584158] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1434:iscsi_op_login_check_target: *ERROR*: access denied
00:07:43.752  [2024-11-19 16:51:36.584462] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1626:iscsi_op_login_session_normal: *ERROR*: TargetName is empty
00:07:43.752  [2024-11-19 16:51:36.584505] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1626:iscsi_op_login_session_normal: *ERROR*: TargetName is empty
00:07:43.752  [2024-11-19 16:51:36.584869] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1626:iscsi_op_login_session_normal: *ERROR*: TargetName is empty
00:07:43.752  [2024-11-19 16:51:36.584929] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c: 695:append_iscsi_sess: *ERROR*: spdk_get_iscsi_sess_by_tsih failed
00:07:43.752  [2024-11-19 16:51:36.585023] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1467:iscsi_op_login_check_session: *ERROR*: isid=0, tsih=256, cid=0:spdk_append_iscsi_sess() failed
00:07:43.752  [2024-11-19 16:51:36.585487] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c: 702:append_iscsi_sess: *ERROR*: no MCS session for init port name=iqn.2017-11.spdk.io:i0001, tsih=256, cid=0
00:07:43.753  [2024-11-19 16:51:36.585548] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1467:iscsi_op_login_check_session: *ERROR*: isid=0, tsih=256, cid=0:spdk_append_iscsi_sess() failed
00:07:43.753  passed
00:07:43.753    Test: maxburstlength_test ...[2024-11-19 16:51:36.586186] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4211:iscsi_pdu_hdr_op_data: *ERROR*: the dataout pdu data length is larger than the value sent by R2T PDU
00:07:43.753  [2024-11-19 16:51:36.586250] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4548:iscsi_pdu_hdr_handle: *ERROR*: processing PDU header (opcode=5) failed on NULL(NULL)
00:07:43.753  passed
00:07:43.753    Test: underflow_for_read_transfer_test ...passed
00:07:43.753    Test: underflow_for_zero_read_transfer_test ...passed
00:07:43.753    Test: underflow_for_request_sense_test ...passed
00:07:43.753    Test: underflow_for_check_condition_test ...passed
00:07:43.753    Test: add_transfer_task_test ...passed
00:07:43.753    Test: get_transfer_task_test ...passed
00:07:43.753    Test: del_transfer_task_test ...passed
00:07:43.753    Test: clear_all_transfer_tasks_test ...passed
00:07:43.753    Test: build_iovs_test ...passed
00:07:43.753    Test: build_iovs_with_md_test ...passed
00:07:43.753    Test: pdu_hdr_op_login_test ...[2024-11-19 16:51:36.588941] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1251:iscsi_op_login_rsp_init: *ERROR*: transit error
00:07:43.753  [2024-11-19 16:51:36.589296] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1258:iscsi_op_login_rsp_init: *ERROR*: unsupported version min 1/max 0, expecting 0
00:07:43.753  [2024-11-19 16:51:36.589379] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1272:iscsi_op_login_rsp_init: *ERROR*: Received reserved NSG code: 2
00:07:43.753  passed
00:07:43.753    Test: pdu_hdr_op_text_test ...[2024-11-19 16:51:36.589720] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2240:iscsi_pdu_hdr_op_text: *ERROR*: data segment len(=69) > immediate data len(=68)
00:07:43.753  [2024-11-19 16:51:36.589796] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2272:iscsi_pdu_hdr_op_text: *ERROR*: final and continue
00:07:43.753  [2024-11-19 16:51:36.590092] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2285:iscsi_pdu_hdr_op_text: *ERROR*: The correct itt is 5679, and the current itt is 5678...
00:07:43.753  passed
00:07:43.753    Test: pdu_hdr_op_logout_test ...[2024-11-19 16:51:36.590176] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2515:iscsi_pdu_hdr_op_logout: *ERROR*: Target can accept logout only with reason "close the session" on discovery session. 1 is not acceptable reason.
00:07:43.753  passed
00:07:43.753    Test: pdu_hdr_op_scsi_test ...[2024-11-19 16:51:36.590738] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3336:iscsi_pdu_hdr_op_scsi: *ERROR*: ISCSI_OP_SCSI not allowed in discovery and invalid session
00:07:43.753  [2024-11-19 16:51:36.590788] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3336:iscsi_pdu_hdr_op_scsi: *ERROR*: ISCSI_OP_SCSI not allowed in discovery and invalid session
00:07:43.753  [2024-11-19 16:51:36.591157] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3364:iscsi_pdu_hdr_op_scsi: *ERROR*: Bidirectional CDB is not supported
00:07:43.753  [2024-11-19 16:51:36.591248] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3397:iscsi_pdu_hdr_op_scsi: *ERROR*: data segment len(=69) > immediate data len(=68)
00:07:43.753  [2024-11-19 16:51:36.591566] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3404:iscsi_pdu_hdr_op_scsi: *ERROR*: data segment len(=68) > task transfer len(=67)
00:07:43.753  [2024-11-19 16:51:36.591847] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3428:iscsi_pdu_hdr_op_scsi: *ERROR*: Reject scsi cmd with EDTL > 0 but (R | W) == 0
00:07:43.753  passed
00:07:43.753    Test: pdu_hdr_op_task_mgmt_test ...[2024-11-19 16:51:36.591949] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3605:iscsi_pdu_hdr_op_task: *ERROR*: ISCSI_OP_TASK not allowed in discovery and invalid session
00:07:43.753  [2024-11-19 16:51:36.592231] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3694:iscsi_pdu_hdr_op_task: *ERROR*: unsupported function 0
00:07:43.753  passed
00:07:43.753    Test: pdu_hdr_op_nopout_test ...[2024-11-19 16:51:36.592527] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3713:iscsi_pdu_hdr_op_nopout: *ERROR*: ISCSI_OP_NOPOUT not allowed in discovery session
00:07:43.753  [2024-11-19 16:51:36.592611] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3735:iscsi_pdu_hdr_op_nopout: *ERROR*: invalid transfer tag 0x4d3
00:07:43.753  [2024-11-19 16:51:36.592888] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3735:iscsi_pdu_hdr_op_nopout: *ERROR*: invalid transfer tag 0x4d3
00:07:43.753  [2024-11-19 16:51:36.592935] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3743:iscsi_pdu_hdr_op_nopout: *ERROR*: got NOPOUT ITT=0xffffffff, I=0
00:07:43.753  passed
00:07:43.753    Test: pdu_hdr_op_data_test ...[2024-11-19 16:51:36.592971] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4186:iscsi_pdu_hdr_op_data: *ERROR*: ISCSI_OP_SCSI_DATAOUT not allowed in discovery session
00:07:43.753  [2024-11-19 16:51:36.593293] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4203:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=0
00:07:43.753  [2024-11-19 16:51:36.593352] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4211:iscsi_pdu_hdr_op_data: *ERROR*: the dataout pdu data length is larger than the value sent by R2T PDU
00:07:43.753  [2024-11-19 16:51:36.593405] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4216:iscsi_pdu_hdr_op_data: *ERROR*: The r2t task tag is 0, and the dataout task tag is 1
00:07:43.753  [2024-11-19 16:51:36.593677] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4222:iscsi_pdu_hdr_op_data: *ERROR*: DataSN(1) exp=0 error
00:07:43.753  [2024-11-19 16:51:36.594001] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4233:iscsi_pdu_hdr_op_data: *ERROR*: offset(4096) error
00:07:43.753  [2024-11-19 16:51:36.594047] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4243:iscsi_pdu_hdr_op_data: *ERROR*: R2T burst(65536) > MaxBurstLength(65535)
00:07:43.753  passed
00:07:43.753    Test: empty_text_with_cbit_test ...passed
00:07:43.753    Test: pdu_payload_read_test ...[2024-11-19 16:51:36.596295] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4631:iscsi_pdu_payload_read: *ERROR*: Data(65537) > MaxSegment(65536)
00:07:43.753  passed
00:07:43.753    Test: data_out_pdu_sequence_test ...passed
00:07:43.753    Test: immediate_data_and_data_out_pdu_sequence_test ...passed
00:07:43.753  
00:07:43.753  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:43.753                suites      1      1    n/a      0        0
00:07:43.753                 tests     24     24     24      0        0
00:07:43.753               asserts 150253 150253 150253      0      n/a
00:07:43.753  
00:07:43.753  Elapsed time =    0.020 seconds
00:07:44.013   16:51:36	-- unit/unittest.sh@70 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/init_grp.c/init_grp_ut
00:07:44.013  
00:07:44.013  
00:07:44.013       CUnit - A unit testing framework for C - Version 2.1-3
00:07:44.013       http://cunit.sourceforge.net/
00:07:44.013  
00:07:44.013  
00:07:44.013  Suite: init_grp_suite
00:07:44.013    Test: create_initiator_group_success_case ...passed
00:07:44.013    Test: find_initiator_group_success_case ...passed
00:07:44.013    Test: register_initiator_group_twice_case ...passed
00:07:44.013    Test: add_initiator_name_success_case ...passed
00:07:44.013    Test: add_initiator_name_fail_case ...[2024-11-19 16:51:36.643258] /home/vagrant/spdk_repo/spdk/lib/iscsi/init_grp.c:  54:iscsi_init_grp_add_initiator: *ERROR*: > MAX_INITIATOR(=256) is not allowed
00:07:44.013  passed
00:07:44.013    Test: delete_all_initiator_names_success_case ...passed
00:07:44.013    Test: add_netmask_success_case ...passed
00:07:44.013    Test: add_netmask_fail_case ...[2024-11-19 16:51:36.643749] /home/vagrant/spdk_repo/spdk/lib/iscsi/init_grp.c: 188:iscsi_init_grp_add_netmask: *ERROR*: > MAX_NETMASK(=256) is not allowed
00:07:44.013  passed
00:07:44.013    Test: delete_all_netmasks_success_case ...passed
00:07:44.013    Test: initiator_name_overwrite_all_to_any_case ...passed
00:07:44.013    Test: netmask_overwrite_all_to_any_case ...passed
00:07:44.013    Test: add_delete_initiator_names_case ...passed
00:07:44.013    Test: add_duplicated_initiator_names_case ...passed
00:07:44.013    Test: delete_nonexisting_initiator_names_case ...passed
00:07:44.013    Test: add_delete_netmasks_case ...passed
00:07:44.013    Test: add_duplicated_netmasks_case ...passed
00:07:44.013    Test: delete_nonexisting_netmasks_case ...passed
00:07:44.013  
00:07:44.013  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:44.013                suites      1      1    n/a      0        0
00:07:44.013                 tests     17     17     17      0        0
00:07:44.013               asserts    108    108    108      0      n/a
00:07:44.013  
00:07:44.013  Elapsed time =    0.001 seconds
00:07:44.013   16:51:36	-- unit/unittest.sh@71 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/portal_grp.c/portal_grp_ut
00:07:44.013  
00:07:44.013  
00:07:44.013       CUnit - A unit testing framework for C - Version 2.1-3
00:07:44.013       http://cunit.sourceforge.net/
00:07:44.013  
00:07:44.013  
00:07:44.013  Suite: portal_grp_suite
00:07:44.013    Test: portal_create_ipv4_normal_case ...passed
00:07:44.013    Test: portal_create_ipv6_normal_case ...passed
00:07:44.013    Test: portal_create_ipv4_wildcard_case ...passed
00:07:44.013    Test: portal_create_ipv6_wildcard_case ...passed
00:07:44.013    Test: portal_create_twice_case ...[2024-11-19 16:51:36.682400] /home/vagrant/spdk_repo/spdk/lib/iscsi/portal_grp.c: 113:iscsi_portal_create: *ERROR*: portal (192.168.2.0, 3260) already exists
00:07:44.013  passed
00:07:44.013    Test: portal_grp_register_unregister_case ...passed
00:07:44.013    Test: portal_grp_register_twice_case ...passed
00:07:44.013    Test: portal_grp_add_delete_case ...passed
00:07:44.013    Test: portal_grp_add_delete_twice_case ...passed
00:07:44.013  
00:07:44.013  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:44.013                suites      1      1    n/a      0        0
00:07:44.013                 tests      9      9      9      0        0
00:07:44.013               asserts     44     44     44      0      n/a
00:07:44.013  
00:07:44.013  Elapsed time =    0.004 seconds
00:07:44.013  
00:07:44.013  real	0m0.279s
00:07:44.013  user	0m0.139s
00:07:44.013  sys	0m0.142s
00:07:44.013   16:51:36	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:07:44.013   16:51:36	-- common/autotest_common.sh@10 -- # set +x
00:07:44.013  ************************************
00:07:44.013  END TEST unittest_iscsi
00:07:44.013  ************************************
00:07:44.013   16:51:36	-- unit/unittest.sh@219 -- # run_test unittest_json unittest_json
00:07:44.013   16:51:36	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:07:44.013   16:51:36	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:07:44.013   16:51:36	-- common/autotest_common.sh@10 -- # set +x
00:07:44.013  ************************************
00:07:44.013  START TEST unittest_json
00:07:44.013  ************************************
00:07:44.013   16:51:36	-- common/autotest_common.sh@1114 -- # unittest_json
00:07:44.013   16:51:36	-- unit/unittest.sh@75 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/json/json_parse.c/json_parse_ut
00:07:44.013  
00:07:44.013  
00:07:44.013       CUnit - A unit testing framework for C - Version 2.1-3
00:07:44.013       http://cunit.sourceforge.net/
00:07:44.013  
00:07:44.013  
00:07:44.013  Suite: json
00:07:44.013    Test: test_parse_literal ...passed
00:07:44.013    Test: test_parse_string_simple ...passed
00:07:44.013    Test: test_parse_string_control_chars ...passed
00:07:44.013    Test: test_parse_string_utf8 ...passed
00:07:44.014    Test: test_parse_string_escapes_twochar ...passed
00:07:44.014    Test: test_parse_string_escapes_unicode ...passed
00:07:44.014    Test: test_parse_number ...passed
00:07:44.014    Test: test_parse_array ...passed
00:07:44.014    Test: test_parse_object ...passed
00:07:44.014    Test: test_parse_nesting ...passed
00:07:44.014    Test: test_parse_comment ...passed
00:07:44.014  
00:07:44.014  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:44.014                suites      1      1    n/a      0        0
00:07:44.014                 tests     11     11     11      0        0
00:07:44.014               asserts   1516   1516   1516      0      n/a
00:07:44.014  
00:07:44.014  Elapsed time =    0.002 seconds
00:07:44.014   16:51:36	-- unit/unittest.sh@76 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/json/json_util.c/json_util_ut
00:07:44.014  
00:07:44.014  
00:07:44.014       CUnit - A unit testing framework for C - Version 2.1-3
00:07:44.014       http://cunit.sourceforge.net/
00:07:44.014  
00:07:44.014  
00:07:44.014  Suite: json
00:07:44.014    Test: test_strequal ...passed
00:07:44.014    Test: test_num_to_uint16 ...passed
00:07:44.014    Test: test_num_to_int32 ...passed
00:07:44.014    Test: test_num_to_uint64 ...passed
00:07:44.014    Test: test_decode_object ...passed
00:07:44.014    Test: test_decode_array ...passed
00:07:44.014    Test: test_decode_bool ...passed
00:07:44.014    Test: test_decode_uint16 ...passed
00:07:44.014    Test: test_decode_int32 ...passed
00:07:44.014    Test: test_decode_uint32 ...passed
00:07:44.014    Test: test_decode_uint64 ...passed
00:07:44.014    Test: test_decode_string ...passed
00:07:44.014    Test: test_decode_uuid ...passed
00:07:44.014    Test: test_find ...passed
00:07:44.014    Test: test_find_array ...passed
00:07:44.014    Test: test_iterating ...passed
00:07:44.014    Test: test_free_object ...passed
00:07:44.014  
00:07:44.014  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:44.014                suites      1      1    n/a      0        0
00:07:44.014                 tests     17     17     17      0        0
00:07:44.014               asserts    236    236    236      0      n/a
00:07:44.014  
00:07:44.014  Elapsed time =    0.001 seconds
00:07:44.014   16:51:36	-- unit/unittest.sh@77 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/json/json_write.c/json_write_ut
00:07:44.273  
00:07:44.273  
00:07:44.273       CUnit - A unit testing framework for C - Version 2.1-3
00:07:44.273       http://cunit.sourceforge.net/
00:07:44.273  
00:07:44.273  
00:07:44.273  Suite: json
00:07:44.273    Test: test_write_literal ...passed
00:07:44.273    Test: test_write_string_simple ...passed
00:07:44.273    Test: test_write_string_escapes ...passed
00:07:44.273    Test: test_write_string_utf16le ...passed
00:07:44.273    Test: test_write_number_int32 ...passed
00:07:44.273    Test: test_write_number_uint32 ...passed
00:07:44.273    Test: test_write_number_uint128 ...passed
00:07:44.273    Test: test_write_string_number_uint128 ...passed
00:07:44.273    Test: test_write_number_int64 ...passed
00:07:44.273    Test: test_write_number_uint64 ...passed
00:07:44.273    Test: test_write_number_double ...passed
00:07:44.273    Test: test_write_uuid ...passed
00:07:44.273    Test: test_write_array ...passed
00:07:44.273    Test: test_write_object ...passed
00:07:44.273    Test: test_write_nesting ...passed
00:07:44.273    Test: test_write_val ...passed
00:07:44.273  
00:07:44.273  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:44.273                suites      1      1    n/a      0        0
00:07:44.273                 tests     16     16     16      0        0
00:07:44.273               asserts    918    918    918      0      n/a
00:07:44.273  
00:07:44.273  Elapsed time =    0.005 seconds
00:07:44.273   16:51:36	-- unit/unittest.sh@78 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/jsonrpc/jsonrpc_server.c/jsonrpc_server_ut
00:07:44.273  
00:07:44.273  
00:07:44.273       CUnit - A unit testing framework for C - Version 2.1-3
00:07:44.273       http://cunit.sourceforge.net/
00:07:44.273  
00:07:44.273  
00:07:44.273  Suite: jsonrpc
00:07:44.274    Test: test_parse_request ...passed
00:07:44.274    Test: test_parse_request_streaming ...passed
00:07:44.274  
00:07:44.274  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:44.274                suites      1      1    n/a      0        0
00:07:44.274                 tests      2      2      2      0        0
00:07:44.274               asserts    289    289    289      0      n/a
00:07:44.274  
00:07:44.274  Elapsed time =    0.004 seconds
00:07:44.274  
00:07:44.274  real	0m0.164s
00:07:44.274  user	0m0.088s
00:07:44.274  sys	0m0.078s
00:07:44.274   16:51:36	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:07:44.274   16:51:36	-- common/autotest_common.sh@10 -- # set +x
00:07:44.274  ************************************
00:07:44.274  END TEST unittest_json
00:07:44.274  ************************************
00:07:44.274   16:51:36	-- unit/unittest.sh@220 -- # run_test unittest_rpc unittest_rpc
00:07:44.274   16:51:36	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:07:44.274   16:51:36	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:07:44.274   16:51:36	-- common/autotest_common.sh@10 -- # set +x
00:07:44.274  ************************************
00:07:44.274  START TEST unittest_rpc
00:07:44.274  ************************************
00:07:44.274   16:51:37	-- common/autotest_common.sh@1114 -- # unittest_rpc
00:07:44.274   16:51:37	-- unit/unittest.sh@82 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/rpc/rpc.c/rpc_ut
00:07:44.274  
00:07:44.274  
00:07:44.274       CUnit - A unit testing framework for C - Version 2.1-3
00:07:44.274       http://cunit.sourceforge.net/
00:07:44.274  
00:07:44.274  
00:07:44.274  Suite: rpc
00:07:44.274    Test: test_jsonrpc_handler ...passed
00:07:44.274    Test: test_spdk_rpc_is_method_allowed ...passed
00:07:44.274    Test: test_rpc_get_methods ...[2024-11-19 16:51:37.023160] /home/vagrant/spdk_repo/spdk/lib/rpc/rpc.c: 378:rpc_get_methods: *ERROR*: spdk_json_decode_object failed
00:07:44.274  passed
00:07:44.274    Test: test_rpc_spdk_get_version ...passed
00:07:44.274    Test: test_spdk_rpc_listen_close ...passed
00:07:44.274  
00:07:44.274  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:44.274                suites      1      1    n/a      0        0
00:07:44.274                 tests      5      5      5      0        0
00:07:44.274               asserts     20     20     20      0      n/a
00:07:44.274  
00:07:44.274  Elapsed time =    0.000 seconds
00:07:44.274  
00:07:44.274  real	0m0.032s
00:07:44.274  user	0m0.013s
00:07:44.274  sys	0m0.020s
00:07:44.274   16:51:37	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:07:44.274   16:51:37	-- common/autotest_common.sh@10 -- # set +x
00:07:44.274  ************************************
00:07:44.274  END TEST unittest_rpc
00:07:44.274  ************************************
00:07:44.274   16:51:37	-- unit/unittest.sh@221 -- # run_test unittest_notify /home/vagrant/spdk_repo/spdk/test/unit/lib/notify/notify.c/notify_ut
00:07:44.274   16:51:37	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:07:44.274   16:51:37	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:07:44.274   16:51:37	-- common/autotest_common.sh@10 -- # set +x
00:07:44.274  ************************************
00:07:44.274  START TEST unittest_notify
00:07:44.274  ************************************
00:07:44.274   16:51:37	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/notify/notify.c/notify_ut
00:07:44.274  
00:07:44.274  
00:07:44.274       CUnit - A unit testing framework for C - Version 2.1-3
00:07:44.274       http://cunit.sourceforge.net/
00:07:44.274  
00:07:44.274  
00:07:44.274  Suite: app_suite
00:07:44.274    Test: notify ...passed
00:07:44.274  
00:07:44.274  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:44.274                suites      1      1    n/a      0        0
00:07:44.274                 tests      1      1      1      0        0
00:07:44.274               asserts     13     13     13      0      n/a
00:07:44.274  
00:07:44.274  Elapsed time =    0.000 seconds
00:07:44.533  
00:07:44.533  real	0m0.038s
00:07:44.533  user	0m0.029s
00:07:44.533  sys	0m0.009s
00:07:44.533   16:51:37	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:07:44.533   16:51:37	-- common/autotest_common.sh@10 -- # set +x
00:07:44.533  ************************************
00:07:44.533  END TEST unittest_notify
00:07:44.533  ************************************
00:07:44.533   16:51:37	-- unit/unittest.sh@222 -- # run_test unittest_nvme unittest_nvme
00:07:44.533   16:51:37	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:07:44.533   16:51:37	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:07:44.533   16:51:37	-- common/autotest_common.sh@10 -- # set +x
00:07:44.533  ************************************
00:07:44.533  START TEST unittest_nvme
00:07:44.533  ************************************
00:07:44.533   16:51:37	-- common/autotest_common.sh@1114 -- # unittest_nvme
00:07:44.533   16:51:37	-- unit/unittest.sh@86 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme.c/nvme_ut
00:07:44.533  
00:07:44.533  
00:07:44.533       CUnit - A unit testing framework for C - Version 2.1-3
00:07:44.533       http://cunit.sourceforge.net/
00:07:44.533  
00:07:44.533  
00:07:44.533  Suite: nvme
00:07:44.533    Test: test_opc_data_transfer ...passed
00:07:44.533    Test: test_spdk_nvme_transport_id_parse_trtype ...passed
00:07:44.533    Test: test_spdk_nvme_transport_id_parse_adrfam ...passed
00:07:44.533    Test: test_trid_parse_and_compare ...[2024-11-19 16:51:37.238399] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1167:parse_next_key: *ERROR*: Key without ':' or '=' separator
00:07:44.533  [2024-11-19 16:51:37.239395] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1224:spdk_nvme_transport_id_parse: *ERROR*: Failed to parse transport ID
00:07:44.533  [2024-11-19 16:51:37.239732] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1179:parse_next_key: *ERROR*: Key length 32 greater than maximum allowed 31
00:07:44.533  [2024-11-19 16:51:37.239972] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1224:spdk_nvme_transport_id_parse: *ERROR*: Failed to parse transport ID
00:07:44.533  [2024-11-19 16:51:37.240079] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1190:parse_next_key: *ERROR*: Key without value
00:07:44.533  [2024-11-19 16:51:37.240316] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1224:spdk_nvme_transport_id_parse: *ERROR*: Failed to parse transport ID
00:07:44.533  passed
00:07:44.533    Test: test_trid_trtype_str ...passed
00:07:44.533    Test: test_trid_adrfam_str ...passed
00:07:44.533    Test: test_nvme_ctrlr_probe ...[2024-11-19 16:51:37.240794] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 
00:07:44.533  passed
00:07:44.533    Test: test_spdk_nvme_probe ...[2024-11-19 16:51:37.241069] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 601:nvme_driver_init: *ERROR*: primary process is not started yet
00:07:44.533  [2024-11-19 16:51:37.241157] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed
00:07:44.533  [2024-11-19 16:51:37.241378] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 812:nvme_probe_internal: *ERROR*: NVMe trtype 256 (PCIE) not available
00:07:44.533  passed
00:07:44.533    Test: test_spdk_nvme_connect ...[2024-11-19 16:51:37.241505] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed
00:07:44.533  [2024-11-19 16:51:37.241770] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 989:spdk_nvme_connect: *ERROR*: No transport ID specified
00:07:44.533  [2024-11-19 16:51:37.242535] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 601:nvme_driver_init: *ERROR*: primary process is not started yet
00:07:44.533  passed
00:07:44.533    Test: test_nvme_ctrlr_probe_internal ...[2024-11-19 16:51:37.242698] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1000:spdk_nvme_connect: *ERROR*: Create probe context failed
00:07:44.533  [2024-11-19 16:51:37.243025] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 
00:07:44.533  passed
00:07:44.533    Test: test_nvme_init_controllers ...[2024-11-19 16:51:37.243143] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed
00:07:44.533  [2024-11-19 16:51:37.243339] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 
00:07:44.533  passed
00:07:44.533    Test: test_nvme_driver_init ...[2024-11-19 16:51:37.243586] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 578:nvme_driver_init: *ERROR*: primary process failed to reserve memory
00:07:44.533  [2024-11-19 16:51:37.243687] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 601:nvme_driver_init: *ERROR*: primary process is not started yet
00:07:44.533  [2024-11-19 16:51:37.353159] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 596:nvme_driver_init: *ERROR*: timeout waiting for primary process to init
00:07:44.533  passed
00:07:44.533    Test: test_spdk_nvme_detach ...passed
00:07:44.534    Test: test_nvme_completion_poll_cb ...passed
00:07:44.534    Test: test_nvme_user_copy_cmd_complete ...passed
00:07:44.534    Test: test_nvme_allocate_request_null ...passed
00:07:44.534    Test: test_nvme_allocate_request ...passed
00:07:44.534    Test: test_nvme_free_request ...passed
00:07:44.534    Test: test_nvme_allocate_request_user_copy ...[2024-11-19 16:51:37.353351] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 618:nvme_driver_init: *ERROR*: failed to initialize mutex
00:07:44.534  passed
00:07:44.534    Test: test_nvme_robust_mutex_init_shared ...passed
00:07:44.534    Test: test_nvme_request_check_timeout ...passed
00:07:44.534    Test: test_nvme_wait_for_completion ...passed
00:07:44.534    Test: test_spdk_nvme_parse_func ...passed
00:07:44.534    Test: test_spdk_nvme_detach_async ...passed
00:07:44.534    Test: test_nvme_parse_addr ...[2024-11-19 16:51:37.354462] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1577:nvme_parse_addr: *ERROR*: addr and service must both be non-NULL
00:07:44.534  passed
00:07:44.534  
00:07:44.534  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:44.534                suites      1      1    n/a      0        0
00:07:44.534                 tests     25     25     25      0        0
00:07:44.534               asserts    326    326    326      0      n/a
00:07:44.534  
00:07:44.534  Elapsed time =    0.009 seconds
00:07:44.534   16:51:37	-- unit/unittest.sh@87 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ctrlr.c/nvme_ctrlr_ut
00:07:44.793  
00:07:44.793  
00:07:44.793       CUnit - A unit testing framework for C - Version 2.1-3
00:07:44.793       http://cunit.sourceforge.net/
00:07:44.793  
00:07:44.793  
00:07:44.793  Suite: nvme_ctrlr
00:07:44.793    Test: test_nvme_ctrlr_init_en_1_rdy_0 ...[2024-11-19 16:51:37.405951] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value
00:07:44.793  passed
00:07:44.793    Test: test_nvme_ctrlr_init_en_1_rdy_1 ...[2024-11-19 16:51:37.408009] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value
00:07:44.793  passed
00:07:44.793    Test: test_nvme_ctrlr_init_en_0_rdy_0 ...[2024-11-19 16:51:37.409282] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value
00:07:44.793  passed
00:07:44.793    Test: test_nvme_ctrlr_init_en_0_rdy_1 ...[2024-11-19 16:51:37.410529] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value
00:07:44.793  passed
00:07:44.793    Test: test_nvme_ctrlr_init_en_0_rdy_0_ams_rr ...[2024-11-19 16:51:37.411801] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value
00:07:44.793  [2024-11-19 16:51:37.412960] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3934:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22
00:07:44.793  [2024-11-19 16:51:37.414181] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3934:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22
00:07:44.793  [2024-11-19 16:51:37.415334] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3934:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22
00:07:44.793  passed
00:07:44.793    Test: test_nvme_ctrlr_init_en_0_rdy_0_ams_wrr ...[2024-11-19 16:51:37.417685] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value
00:07:44.793  [2024-11-19 16:51:37.419925] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3934:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22
00:07:44.793  [2024-11-19 16:51:37.421123] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3934:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22
00:07:44.793  passed
00:07:44.793    Test: test_nvme_ctrlr_init_en_0_rdy_0_ams_vs ...[2024-11-19 16:51:37.423584] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value
00:07:44.793  [2024-11-19 16:51:37.424784] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3934:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22
00:07:44.793  [2024-11-19 16:51:37.427102] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3934:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22
00:07:44.793  passed
00:07:44.793    Test: test_nvme_ctrlr_init_delay ...[2024-11-19 16:51:37.429528] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value
00:07:44.793  passed
00:07:44.793    Test: test_alloc_io_qpair_rr_1 ...[2024-11-19 16:51:37.430818] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value
00:07:44.793  [2024-11-19 16:51:37.431061] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:5318:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [] No free I/O queue IDs
00:07:44.793  [2024-11-19 16:51:37.431356] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 385:nvme_ctrlr_create_io_qpair: *ERROR*: [] invalid queue priority for default round robin arbitration method
00:07:44.793  passed
00:07:44.793    Test: test_ctrlr_get_default_ctrlr_opts ...passed
00:07:44.793    Test: test_ctrlr_get_default_io_qpair_opts ...passed
00:07:44.793    Test: test_alloc_io_qpair_wrr_1 ...[2024-11-19 16:51:37.431474] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 385:nvme_ctrlr_create_io_qpair: *ERROR*: [] invalid queue priority for default round robin arbitration method
00:07:44.793  [2024-11-19 16:51:37.431562] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 385:nvme_ctrlr_create_io_qpair: *ERROR*: [] invalid queue priority for default round robin arbitration method
00:07:44.793  [2024-11-19 16:51:37.431863] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value
00:07:44.793  passed
00:07:44.793    Test: test_alloc_io_qpair_wrr_2 ...[2024-11-19 16:51:37.432178] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value
00:07:44.793  [2024-11-19 16:51:37.432391] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:5318:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [] No free I/O queue IDs
00:07:44.793  passed
00:07:44.793    Test: test_spdk_nvme_ctrlr_update_firmware ...[2024-11-19 16:51:37.432812] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4846:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] spdk_nvme_ctrlr_update_firmware invalid size!
00:07:44.793  [2024-11-19 16:51:37.433063] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4883:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] spdk_nvme_ctrlr_fw_image_download failed!
00:07:44.793  [2024-11-19 16:51:37.433228] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4923:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] nvme_ctrlr_cmd_fw_commit failed!
00:07:44.793  [2024-11-19 16:51:37.433343] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4883:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] spdk_nvme_ctrlr_fw_image_download failed!
00:07:44.793  [2024-11-19 16:51:37.433465] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [] in failed state.
00:07:44.793  passed
00:07:44.793    Test: test_nvme_ctrlr_fail ...passed
00:07:44.793    Test: test_nvme_ctrlr_construct_intel_support_log_page_list ...passed
00:07:44.793    Test: test_nvme_ctrlr_set_supported_features ...passed
00:07:44.793    Test: test_spdk_nvme_ctrlr_doorbell_buffer_config ...passed
00:07:44.793    Test: test_nvme_ctrlr_test_active_ns ...[2024-11-19 16:51:37.433955] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value
00:07:45.053  passed
00:07:45.053    Test: test_nvme_ctrlr_test_active_ns_error_case ...passed
00:07:45.053    Test: test_spdk_nvme_ctrlr_reconnect_io_qpair ...passed
00:07:45.053    Test: test_spdk_nvme_ctrlr_set_trid ...passed
00:07:45.053    Test: test_nvme_ctrlr_init_set_nvmf_ioccsz ...[2024-11-19 16:51:37.760929] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value
00:07:45.053  passed
00:07:45.053    Test: test_nvme_ctrlr_init_set_num_queues ...[2024-11-19 16:51:37.768069] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value
00:07:45.053  passed
00:07:45.053    Test: test_nvme_ctrlr_init_set_keep_alive_timeout ...[2024-11-19 16:51:37.769533] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value
00:07:45.053  [2024-11-19 16:51:37.769752] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:2870:nvme_ctrlr_set_keep_alive_timeout_done: *ERROR*: [] Keep alive timeout Get Feature failed: SC 6 SCT 0
00:07:45.053  passed
00:07:45.053    Test: test_alloc_io_qpair_fail ...[2024-11-19 16:51:37.771029] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value
00:07:45.053  [2024-11-19 16:51:37.771293] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 497:spdk_nvme_ctrlr_alloc_io_qpair: *ERROR*: [] nvme_transport_ctrlr_connect_io_qpair() failed
00:07:45.053  passed
00:07:45.053    Test: test_nvme_ctrlr_add_remove_process ...passed
00:07:45.053    Test: test_nvme_ctrlr_set_arbitration_feature ...passed
00:07:45.054    Test: test_nvme_ctrlr_set_state ...[2024-11-19 16:51:37.771575] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:1465:_nvme_ctrlr_set_state: *ERROR*: [] Specified timeout would cause integer overflow. Defaulting to no timeout.
00:07:45.054  passed
00:07:45.054    Test: test_nvme_ctrlr_active_ns_list_v0 ...[2024-11-19 16:51:37.771727] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value
00:07:45.054  passed
00:07:45.054    Test: test_nvme_ctrlr_active_ns_list_v2 ...[2024-11-19 16:51:37.792781] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value
00:07:45.054  passed
00:07:45.054    Test: test_nvme_ctrlr_ns_mgmt ...[2024-11-19 16:51:37.832559] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value
00:07:45.054  passed
00:07:45.054    Test: test_nvme_ctrlr_reset ...[2024-11-19 16:51:37.834231] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value
00:07:45.054  passed
00:07:45.054    Test: test_nvme_ctrlr_aer_callback ...[2024-11-19 16:51:37.834692] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value
00:07:45.054  passed
00:07:45.054    Test: test_nvme_ctrlr_ns_attr_changed ...[2024-11-19 16:51:37.836204] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value
00:07:45.054  passed
00:07:45.054    Test: test_nvme_ctrlr_identify_namespaces_iocs_specific_next ...passed
00:07:45.054    Test: test_nvme_ctrlr_set_supported_log_pages ...passed
00:07:45.054    Test: test_nvme_ctrlr_set_intel_supported_log_pages ...[2024-11-19 16:51:37.837913] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value
00:07:45.054  passed
00:07:45.054    Test: test_nvme_ctrlr_parse_ana_log_page ...passed
00:07:45.054    Test: test_nvme_ctrlr_ana_resize ...[2024-11-19 16:51:37.839319] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value
00:07:45.054  passed
00:07:45.054    Test: test_nvme_ctrlr_get_memory_domains ...passed
00:07:45.054    Test: test_nvme_transport_ctrlr_ready ...[2024-11-19 16:51:37.840873] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4016:nvme_ctrlr_process_init: *ERROR*: [] Transport controller ready step failed: rc -1
00:07:45.054  passed
00:07:45.054    Test: test_nvme_ctrlr_disable ...[2024-11-19 16:51:37.841001] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4067:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr operation failed with error: -1, ctrlr state: 51 (error)
00:07:45.054  [2024-11-19 16:51:37.841126] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value
00:07:45.054  passed
00:07:45.054  
00:07:45.054  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:45.054                suites      1      1    n/a      0        0
00:07:45.054                 tests     43     43     43      0        0
00:07:45.054               asserts  10418  10418  10418      0      n/a
00:07:45.054  
00:07:45.054  Elapsed time =    0.394 seconds
00:07:45.054   16:51:37	-- unit/unittest.sh@88 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ctrlr_cmd.c/nvme_ctrlr_cmd_ut
00:07:45.054  
00:07:45.054  
00:07:45.054       CUnit - A unit testing framework for C - Version 2.1-3
00:07:45.054       http://cunit.sourceforge.net/
00:07:45.054  
00:07:45.054  
00:07:45.054  Suite: nvme_ctrlr_cmd
00:07:45.054    Test: test_get_log_pages ...passed
00:07:45.054    Test: test_set_feature_cmd ...passed
00:07:45.054    Test: test_set_feature_ns_cmd ...passed
00:07:45.054    Test: test_get_feature_cmd ...passed
00:07:45.054    Test: test_get_feature_ns_cmd ...passed
00:07:45.054    Test: test_abort_cmd ...passed
00:07:45.054    Test: test_set_host_id_cmds ...[2024-11-19 16:51:37.895642] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr_cmd.c: 508:nvme_ctrlr_cmd_set_host_id: *ERROR*: Invalid host ID size 1024
00:07:45.054  passed
00:07:45.054    Test: test_io_cmd_raw_no_payload_build ...passed
00:07:45.054    Test: test_io_raw_cmd ...passed
00:07:45.054    Test: test_io_raw_cmd_with_md ...passed
00:07:45.054    Test: test_namespace_attach ...passed
00:07:45.054    Test: test_namespace_detach ...passed
00:07:45.054    Test: test_namespace_create ...passed
00:07:45.054    Test: test_namespace_delete ...passed
00:07:45.054    Test: test_doorbell_buffer_config ...passed
00:07:45.054    Test: test_format_nvme ...passed
00:07:45.054    Test: test_fw_commit ...passed
00:07:45.054    Test: test_fw_image_download ...passed
00:07:45.054    Test: test_sanitize ...passed
00:07:45.054    Test: test_directive ...passed
00:07:45.054    Test: test_nvme_request_add_abort ...passed
00:07:45.054    Test: test_spdk_nvme_ctrlr_cmd_abort ...passed
00:07:45.054    Test: test_nvme_ctrlr_cmd_identify ...passed
00:07:45.054    Test: test_spdk_nvme_ctrlr_cmd_security_receive_send ...passed
00:07:45.054  
00:07:45.054  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:45.054                suites      1      1    n/a      0        0
00:07:45.054                 tests     24     24     24      0        0
00:07:45.054               asserts    198    198    198      0      n/a
00:07:45.054  
00:07:45.054  Elapsed time =    0.001 seconds
00:07:45.313   16:51:37	-- unit/unittest.sh@89 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ctrlr_ocssd_cmd.c/nvme_ctrlr_ocssd_cmd_ut
00:07:45.313  
00:07:45.313  
00:07:45.313       CUnit - A unit testing framework for C - Version 2.1-3
00:07:45.313       http://cunit.sourceforge.net/
00:07:45.313  
00:07:45.313  
00:07:45.313  Suite: nvme_ctrlr_cmd
00:07:45.313    Test: test_geometry_cmd ...passed
00:07:45.313    Test: test_spdk_nvme_ctrlr_is_ocssd_supported ...passed
00:07:45.313  
00:07:45.313  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:45.313                suites      1      1    n/a      0        0
00:07:45.313                 tests      2      2      2      0        0
00:07:45.313               asserts      7      7      7      0      n/a
00:07:45.313  
00:07:45.313  Elapsed time =    0.000 seconds
00:07:45.313   16:51:37	-- unit/unittest.sh@90 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ns.c/nvme_ns_ut
00:07:45.313  
00:07:45.313  
00:07:45.313       CUnit - A unit testing framework for C - Version 2.1-3
00:07:45.313       http://cunit.sourceforge.net/
00:07:45.313  
00:07:45.313  
00:07:45.313  Suite: nvme
00:07:45.313    Test: test_nvme_ns_construct ...passed
00:07:45.313    Test: test_nvme_ns_uuid ...passed
00:07:45.313    Test: test_nvme_ns_csi ...passed
00:07:45.313    Test: test_nvme_ns_data ...passed
00:07:45.313    Test: test_nvme_ns_set_identify_data ...passed
00:07:45.313    Test: test_spdk_nvme_ns_get_values ...passed
00:07:45.313    Test: test_spdk_nvme_ns_is_active ...passed
00:07:45.313    Test: spdk_nvme_ns_supports ...passed
00:07:45.313    Test: test_nvme_ns_has_supported_iocs_specific_data ...passed
00:07:45.313    Test: test_nvme_ctrlr_identify_ns_iocs_specific ...passed
00:07:45.313    Test: test_nvme_ctrlr_identify_id_desc ...passed
00:07:45.313    Test: test_nvme_ns_find_id_desc ...passed
00:07:45.313  
00:07:45.313  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:45.313                suites      1      1    n/a      0        0
00:07:45.313                 tests     12     12     12      0        0
00:07:45.313               asserts     83     83     83      0      n/a
00:07:45.313  
00:07:45.313  Elapsed time =    0.001 seconds
00:07:45.313   16:51:37	-- unit/unittest.sh@91 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ns_cmd.c/nvme_ns_cmd_ut
00:07:45.313  
00:07:45.313  
00:07:45.313       CUnit - A unit testing framework for C - Version 2.1-3
00:07:45.313       http://cunit.sourceforge.net/
00:07:45.313  
00:07:45.313  
00:07:45.313  Suite: nvme_ns_cmd
00:07:45.313    Test: split_test ...passed
00:07:45.313    Test: split_test2 ...passed
00:07:45.313    Test: split_test3 ...passed
00:07:45.313    Test: split_test4 ...passed
00:07:45.313    Test: test_nvme_ns_cmd_flush ...passed
00:07:45.313    Test: test_nvme_ns_cmd_dataset_management ...passed
00:07:45.313    Test: test_nvme_ns_cmd_copy ...passed
00:07:45.313    Test: test_io_flags ...[2024-11-19 16:51:38.008128] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 144:_is_io_flags_valid: *ERROR*: Invalid io_flags 0xfffc
00:07:45.313  passed
00:07:45.313    Test: test_nvme_ns_cmd_write_zeroes ...passed
00:07:45.313    Test: test_nvme_ns_cmd_write_uncorrectable ...passed
00:07:45.313    Test: test_nvme_ns_cmd_reservation_register ...passed
00:07:45.313    Test: test_nvme_ns_cmd_reservation_release ...passed
00:07:45.313    Test: test_nvme_ns_cmd_reservation_acquire ...passed
00:07:45.313    Test: test_nvme_ns_cmd_reservation_report ...passed
00:07:45.313    Test: test_cmd_child_request ...passed
00:07:45.313    Test: test_nvme_ns_cmd_readv ...passed
00:07:45.313    Test: test_nvme_ns_cmd_read_with_md ...passed
00:07:45.313    Test: test_nvme_ns_cmd_writev ...[2024-11-19 16:51:38.009430] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 287:_nvme_ns_cmd_split_request_prp: *ERROR*: child_length 200 not even multiple of lba_size 512
00:07:45.313  passed
00:07:45.313    Test: test_nvme_ns_cmd_write_with_md ...passed
00:07:45.313    Test: test_nvme_ns_cmd_zone_append_with_md ...passed
00:07:45.313    Test: test_nvme_ns_cmd_zone_appendv_with_md ...passed
00:07:45.313    Test: test_nvme_ns_cmd_comparev ...passed
00:07:45.313    Test: test_nvme_ns_cmd_compare_and_write ...passed
00:07:45.313    Test: test_nvme_ns_cmd_compare_with_md ...passed
00:07:45.313    Test: test_nvme_ns_cmd_comparev_with_md ...passed
00:07:45.313    Test: test_nvme_ns_cmd_setup_request ...passed
00:07:45.313    Test: test_spdk_nvme_ns_cmd_readv_with_md ...passed
00:07:45.313    Test: test_spdk_nvme_ns_cmd_writev_ext ...[2024-11-19 16:51:38.011459] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 144:_is_io_flags_valid: *ERROR*: Invalid io_flags 0xffff000f
00:07:45.313  passed
00:07:45.313    Test: test_spdk_nvme_ns_cmd_readv_ext ...passed
00:07:45.313    Test: test_nvme_ns_cmd_verify ...passed
00:07:45.313    Test: test_nvme_ns_cmd_io_mgmt_send ...[2024-11-19 16:51:38.011582] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 144:_is_io_flags_valid: *ERROR*: Invalid io_flags 0xffff000f
00:07:45.313  passed
00:07:45.313    Test: test_nvme_ns_cmd_io_mgmt_recv ...passed
00:07:45.314  
00:07:45.314  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:45.314                suites      1      1    n/a      0        0
00:07:45.314                 tests     32     32     32      0        0
00:07:45.314               asserts    550    550    550      0      n/a
00:07:45.314  
00:07:45.314  Elapsed time =    0.005 seconds
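
The two _is_io_flags_valid rejections above (0xfffc and 0xffff000f) are the usual reserved-bits check: any io_flags bit outside a supported mask fails the command before submission. A one-line sketch of that check, with a hypothetical mask chosen only so that the log's rejected values fail it (not SPDK's actual flag definitions):

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    #define SUPPORTED_IO_FLAGS 0xffff0000u /* hypothetical upper-half mask */

    static bool is_io_flags_valid(uint32_t io_flags)
    {
        /* Reject any bit outside the supported set. */
        return (io_flags & ~SUPPORTED_IO_FLAGS) == 0;
    }

    int main(void)
    {
        printf("%d\n", is_io_flags_valid(0xfffc));     /* 0: reserved low bits set */
        printf("%d\n", is_io_flags_valid(0xffff000f)); /* 0: reserved low bits set */
        printf("%d\n", is_io_flags_valid(0x00010000)); /* 1: within the mask */
        return 0;
    }
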
00:07:45.314   16:51:38	-- unit/unittest.sh@92 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ns_ocssd_cmd.c/nvme_ns_ocssd_cmd_ut
00:07:45.314  
00:07:45.314  
00:07:45.314       CUnit - A unit testing framework for C - Version 2.1-3
00:07:45.314       http://cunit.sourceforge.net/
00:07:45.314  
00:07:45.314  
00:07:45.314  Suite: nvme_ns_cmd
00:07:45.314    Test: test_nvme_ocssd_ns_cmd_vector_reset ...passed
00:07:45.314    Test: test_nvme_ocssd_ns_cmd_vector_reset_single_entry ...passed
00:07:45.314    Test: test_nvme_ocssd_ns_cmd_vector_read_with_md ...passed
00:07:45.314    Test: test_nvme_ocssd_ns_cmd_vector_read_with_md_single_entry ...passed
00:07:45.314    Test: test_nvme_ocssd_ns_cmd_vector_read ...passed
00:07:45.314    Test: test_nvme_ocssd_ns_cmd_vector_read_single_entry ...passed
00:07:45.314    Test: test_nvme_ocssd_ns_cmd_vector_write_with_md ...passed
00:07:45.314    Test: test_nvme_ocssd_ns_cmd_vector_write_with_md_single_entry ...passed
00:07:45.314    Test: test_nvme_ocssd_ns_cmd_vector_write ...passed
00:07:45.314    Test: test_nvme_ocssd_ns_cmd_vector_write_single_entry ...passed
00:07:45.314    Test: test_nvme_ocssd_ns_cmd_vector_copy ...passed
00:07:45.314    Test: test_nvme_ocssd_ns_cmd_vector_copy_single_entry ...passed
00:07:45.314  
00:07:45.314  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:45.314                suites      1      1    n/a      0        0
00:07:45.314                 tests     12     12     12      0        0
00:07:45.314               asserts    123    123    123      0      n/a
00:07:45.314  
00:07:45.314  Elapsed time =    0.001 seconds
00:07:45.314   16:51:38	-- unit/unittest.sh@93 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_qpair.c/nvme_qpair_ut
00:07:45.314  
00:07:45.314  
00:07:45.314       CUnit - A unit testing framework for C - Version 2.1-3
00:07:45.314       http://cunit.sourceforge.net/
00:07:45.314  
00:07:45.314  
00:07:45.314  Suite: nvme_qpair
00:07:45.314    Test: test3 ...passed
00:07:45.314    Test: test_ctrlr_failed ...passed
00:07:45.314    Test: struct_packing ...passed
00:07:45.314    Test: test_nvme_qpair_process_completions ...[2024-11-19 16:51:38.101539] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:07:45.314  [2024-11-19 16:51:38.101910] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:07:45.314  [2024-11-19 16:51:38.101983] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:07:45.314  [2024-11-19 16:51:38.102088] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:07:45.314  passed
00:07:45.314    Test: test_nvme_completion_is_retry ...passed
00:07:45.314    Test: test_get_status_string ...passed
00:07:45.314    Test: test_nvme_qpair_add_cmd_error_injection ...passed
00:07:45.314    Test: test_nvme_qpair_submit_request ...passed
00:07:45.314    Test: test_nvme_qpair_resubmit_request_with_transport_failed ...passed
00:07:45.314    Test: test_nvme_qpair_manual_complete_request ...passed
00:07:45.314    Test: test_nvme_qpair_init_deinit ...passed
00:07:45.314    Test: test_nvme_get_sgl_print_info ...passed
00:07:45.314  
00:07:45.314  [2024-11-19 16:51:38.102599] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:07:45.314  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:45.314                suites      1      1    n/a      0        0
00:07:45.314                 tests     12     12     12      0        0
00:07:45.314               asserts    154    154    154      0      n/a
00:07:45.314  
00:07:45.314  Elapsed time =    0.001 seconds
00:07:45.314   16:51:38	-- unit/unittest.sh@94 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_pcie.c/nvme_pcie_ut
00:07:45.314  
00:07:45.314  
00:07:45.314       CUnit - A unit testing framework for C - Version 2.1-3
00:07:45.314       http://cunit.sourceforge.net/
00:07:45.314  
00:07:45.314  
00:07:45.314  Suite: nvme_pcie
00:07:45.314    Test: test_prp_list_append ...[2024-11-19 16:51:38.137938] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *ERROR*: virt_addr 0x100001 not dword aligned
00:07:45.314  [2024-11-19 16:51:38.138310] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1231:nvme_pcie_prp_list_append: *ERROR*: PRP 2 not page aligned (0x900800)
00:07:45.314  [2024-11-19 16:51:38.138376] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1221:nvme_pcie_prp_list_append: *ERROR*: vtophys(0x100000) failed
00:07:45.314  [2024-11-19 16:51:38.138659] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1215:nvme_pcie_prp_list_append: *ERROR*: out of PRP entries
00:07:45.314  passed
00:07:45.314    Test: test_nvme_pcie_hotplug_monitor ...passed
00:07:45.314    Test: test_shadow_doorbell_update ...passed
00:07:45.314    Test: test_build_contig_hw_sgl_request ...passed
00:07:45.314    Test: test_nvme_pcie_qpair_build_metadata ...passed
00:07:45.314    Test: test_nvme_pcie_qpair_build_prps_sgl_request ...passed
00:07:45.314    Test: test_nvme_pcie_qpair_build_hw_sgl_request ...passed
00:07:45.314    Test: test_nvme_pcie_qpair_build_contig_request ...passed
00:07:45.314    Test: test_nvme_pcie_ctrlr_regs_get_set ...passed
00:07:45.314    Test: test_nvme_pcie_ctrlr_map_unmap_cmb ...passed
00:07:45.314    Test: test_nvme_pcie_ctrlr_map_io_cmb ...[2024-11-19 16:51:38.138770] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1215:nvme_pcie_prp_list_append: *ERROR*: out of PRP entries
00:07:45.314  [2024-11-19 16:51:38.138969] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *ERROR*: virt_addr 0x100001 not dword aligned
00:07:45.314  [2024-11-19 16:51:38.139060] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 442:nvme_pcie_ctrlr_map_io_cmb: *ERROR*: CMB is already in use for submission queues.
00:07:45.314  passed
00:07:45.314    Test: test_nvme_pcie_ctrlr_map_unmap_pmr ...passed
00:07:45.314    Test: test_nvme_pcie_ctrlr_config_pmr ...passed
00:07:45.314    Test: test_nvme_pcie_ctrlr_map_io_pmr ...[2024-11-19 16:51:38.139148] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 521:nvme_pcie_ctrlr_map_pmr: *ERROR*: invalid base indicator register value
00:07:45.314  [2024-11-19 16:51:38.139204] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 647:nvme_pcie_ctrlr_config_pmr: *ERROR*: PMR is already disabled
00:07:45.314  [2024-11-19 16:51:38.139261] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 699:nvme_pcie_ctrlr_map_io_pmr: *ERROR*: PMR is not supported by the controller
00:07:45.314  passed
00:07:45.314  
00:07:45.314  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:45.314                suites      1      1    n/a      0        0
00:07:45.314                 tests     14     14     14      0        0
00:07:45.314               asserts    235    235    235      0      n/a
00:07:45.314  
00:07:45.314  Elapsed time =    0.001 seconds
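The test_prp_list_append errors encode the NVMe PRP alignment rules: every PRP address must be dword aligned, and every entry after the first must also be page aligned. A standalone sketch of those two checks, with PAGE_SIZE fixed at 4096 for illustration; SPDK additionally translates virtual to physical addresses and bounds the PRP list, which the vtophys and out-of-PRP-entries lines above cover:

#include <inttypes.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define PAGE_SIZE 4096u  /* illustrative; the real value comes from the platform */

static bool prp_entry_ok(uint64_t addr, int index)
{
        if (addr & 0x3) {
                fprintf(stderr, "virt_addr 0x%" PRIx64 " not dword aligned\n", addr);
                return false;
        }
        if (index > 0 && addr % PAGE_SIZE != 0) {
                fprintf(stderr, "PRP %d not page aligned (0x%" PRIx64 ")\n",
                        index + 1, addr);
                return false;
        }
        return true;
}

int main(void)
{
        prp_entry_ok(0x100001, 0); /* rejected: not dword aligned */
        prp_entry_ok(0x900800, 1); /* rejected: second PRP is mid-page */
        return 0;
}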
00:07:45.314   16:51:38	-- unit/unittest.sh@95 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_poll_group.c/nvme_poll_group_ut
00:07:45.574  
00:07:45.574  
00:07:45.574       CUnit - A unit testing framework for C - Version 2.1-3
00:07:45.574       http://cunit.sourceforge.net/
00:07:45.574  
00:07:45.574  
00:07:45.574  Suite: nvme_poll_group
00:07:45.574    Test: nvme_poll_group_create_test ...passed
00:07:45.574    Test: nvme_poll_group_add_remove_test ...passed
00:07:45.574    Test: nvme_poll_group_process_completions ...passed
00:07:45.574    Test: nvme_poll_group_destroy_test ...passed
00:07:45.575    Test: nvme_poll_group_get_free_stats ...passed
00:07:45.575  
00:07:45.575  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:45.575                suites      1      1    n/a      0        0
00:07:45.575                 tests      5      5      5      0        0
00:07:45.575               asserts     75     75     75      0      n/a
00:07:45.575  
00:07:45.575  Elapsed time =    0.001 seconds
00:07:45.575   16:51:38	-- unit/unittest.sh@96 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_quirks.c/nvme_quirks_ut
00:07:45.575  
00:07:45.575  
00:07:45.575       CUnit - A unit testing framework for C - Version 2.1-3
00:07:45.575       http://cunit.sourceforge.net/
00:07:45.575  
00:07:45.575  
00:07:45.575  Suite: nvme_quirks
00:07:45.575    Test: test_nvme_quirks_striping ...passed
00:07:45.575  
00:07:45.575  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:45.575                suites      1      1    n/a      0        0
00:07:45.575                 tests      1      1      1      0        0
00:07:45.575               asserts      5      5      5      0      n/a
00:07:45.575  
00:07:45.575  Elapsed time =    0.000 seconds
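Every Suite/Test/Run Summary block in this log is CUnit's verbose basic-mode output. A minimal, self-contained harness showing where those lines come from (build with -lcunit, assuming libcunit is installed; the test body here is a placeholder):

#include <CUnit/Basic.h>

/* Placeholder test body; the SPDK binaries register their real test
 * functions the same way. */
static void test_example(void)
{
        CU_ASSERT(1 + 1 == 2);
}

int main(void)
{
        if (CU_initialize_registry() != CUE_SUCCESS) {
                return CU_get_error();
        }

        CU_pSuite suite = CU_add_suite("example", NULL, NULL);
        if (suite == NULL ||
            CU_add_test(suite, "test_example", test_example) == NULL) {
                CU_cleanup_registry();
                return CU_get_error();
        }

        /* CU_BRM_VERBOSE produces the "Suite:", "Test: ... passed" and
         * "Run Summary" lines seen throughout this log. */
        CU_basic_set_mode(CU_BRM_VERBOSE);
        CU_basic_run_tests();

        unsigned failures = CU_get_number_of_failures();
        CU_cleanup_registry();
        return failures == 0 ? 0 : 1;
}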
00:07:45.575   16:51:38	-- unit/unittest.sh@97 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_tcp.c/nvme_tcp_ut
00:07:45.575  
00:07:45.575  
00:07:45.575       CUnit - A unit testing framework for C - Version 2.1-3
00:07:45.575       http://cunit.sourceforge.net/
00:07:45.575  
00:07:45.575  
00:07:45.575  Suite: nvme_tcp
00:07:45.575    Test: test_nvme_tcp_pdu_set_data_buf ...passed
00:07:45.575    Test: test_nvme_tcp_build_iovs ...passed
00:07:45.575    Test: test_nvme_tcp_build_sgl_request ...[2024-11-19 16:51:38.260701] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 783:nvme_tcp_build_sgl_request: *ERROR*: Failed to construct tcp_req=0x7fff3431ec40, and the iovcnt=16, remaining_size=28672
00:07:45.575  passed
00:07:45.575    Test: test_nvme_tcp_pdu_set_data_buf_with_md ...passed
00:07:45.575    Test: test_nvme_tcp_build_iovs_with_md ...passed
00:07:45.575    Test: test_nvme_tcp_req_complete_safe ...passed
00:07:45.575    Test: test_nvme_tcp_req_get ...passed
00:07:45.575    Test: test_nvme_tcp_req_init ...passed
00:07:45.575    Test: test_nvme_tcp_qpair_capsule_cmd_send ...passed
00:07:45.575    Test: test_nvme_tcp_qpair_write_pdu ...passed
00:07:45.575    Test: test_nvme_tcp_qpair_set_recv_state ...passed
00:07:45.575    Test: test_nvme_tcp_alloc_reqs ...[2024-11-19 16:51:38.261432] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff34320960 is same with the state(6) to be set
00:07:45.575  passed
00:07:45.575    Test: test_nvme_tcp_qpair_send_h2c_term_req ...passed
00:07:45.575    Test: test_nvme_tcp_pdu_ch_handle ...[2024-11-19 16:51:38.261784] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff3431faf0 is same with the state(5) to be set
00:07:45.575  [2024-11-19 16:51:38.261851] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1108:nvme_tcp_pdu_ch_handle: *ERROR*: Already received IC_RESP PDU, and we should reject this pdu=0x7fff34320620
00:07:45.575  [2024-11-19 16:51:38.261912] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1167:nvme_tcp_pdu_ch_handle: *ERROR*: Expected PDU header length 128, got 0
00:07:45.575  [2024-11-19 16:51:38.262010] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff3431ffb0 is same with the state(5) to be set
00:07:45.575  [2024-11-19 16:51:38.262082] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1118:nvme_tcp_pdu_ch_handle: *ERROR*: The TCP/IP tqpair connection is not negotiated
00:07:45.575  [2024-11-19 16:51:38.262179] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff3431ffb0 is same with the state(5) to be set
00:07:45.575  [2024-11-19 16:51:38.262254] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:07:45.575  [2024-11-19 16:51:38.262296] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff3431ffb0 is same with the state(5) to be set
00:07:45.575  [2024-11-19 16:51:38.262367] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff3431ffb0 is same with the state(5) to be set
00:07:45.575  [2024-11-19 16:51:38.262427] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff3431ffb0 is same with the state(5) to be set
00:07:45.575  [2024-11-19 16:51:38.262495] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff3431ffb0 is same with the state(5) to be set
00:07:45.575  passed
00:07:45.575    Test: test_nvme_tcp_qpair_connect_sock ...[2024-11-19 16:51:38.262542] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff3431ffb0 is same with the state(5) to be set
00:07:45.575  [2024-11-19 16:51:38.262599] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff3431ffb0 is same with the state(5) to be set
00:07:45.575  [2024-11-19 16:51:38.262767] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2239:nvme_tcp_qpair_connect_sock: *ERROR*: Unhandled ADRFAM 3
00:07:45.575  [2024-11-19 16:51:38.262834] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2251:nvme_tcp_qpair_connect_sock: *ERROR*: dst_addr nvme_parse_addr() failed
00:07:45.575  [2024-11-19 16:51:38.263308] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2251:nvme_tcp_qpair_connect_sock: *ERROR*: dst_addr nvme_parse_addr() failed
00:07:45.575  passed
00:07:45.575    Test: test_nvme_tcp_qpair_icreq_send ...passed
00:07:45.575    Test: test_nvme_tcp_c2h_payload_handle ...[2024-11-19 16:51:38.263459] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1282:nvme_tcp_c2h_term_req_dump: *ERROR*: Error info of pdu(0x7fff34320160): PDU Sequence Error
00:07:45.575  passed
00:07:45.575    Test: test_nvme_tcp_icresp_handle ...[2024-11-19 16:51:38.263608] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1508:nvme_tcp_icresp_handle: *ERROR*: Expected ICResp PFV 0, got 1
00:07:45.575  [2024-11-19 16:51:38.263668] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1515:nvme_tcp_icresp_handle: *ERROR*: Expected ICResp maxh2cdata >=4096, got 2048
00:07:45.575  [2024-11-19 16:51:38.263721] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff3431fb00 is same with the state(5) to be set
00:07:45.575  [2024-11-19 16:51:38.263768] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1524:nvme_tcp_icresp_handle: *ERROR*: Expected ICResp cpda <=31, got 64
00:07:45.575  [2024-11-19 16:51:38.263822] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff3431fb00 is same with the state(5) to be set
00:07:45.575  passed
00:07:45.575    Test: test_nvme_tcp_pdu_payload_handle ...[2024-11-19 16:51:38.263890] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff3431fb00 is same with the state(0) to be set
00:07:45.575  [2024-11-19 16:51:38.263978] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1282:nvme_tcp_c2h_term_req_dump: *ERROR*: Error info of pdu(0x7fff34320620): PDU Sequence Error
00:07:45.575  passed
00:07:45.575    Test: test_nvme_tcp_capsule_resp_hdr_handle ...[2024-11-19 16:51:38.264092] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1585:nvme_tcp_capsule_resp_hdr_handle: *ERROR*: no tcp_req is found with cid=1 for tqpair=0x7fff3431ede0
00:07:45.575  passed
00:07:45.575    Test: test_nvme_tcp_ctrlr_connect_qpair ...passed
00:07:45.575    Test: test_nvme_tcp_ctrlr_disconnect_qpair ...[2024-11-19 16:51:38.264274] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 353:nvme_tcp_ctrlr_disconnect_qpair: *ERROR*: tqpair=0x7fff3431e460, errno=0, rc=0
00:07:45.575  [2024-11-19 16:51:38.264340] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff3431e460 is same with the state(5) to be set
00:07:45.575  [2024-11-19 16:51:38.264427] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff3431e460 is same with the state(5) to be set
00:07:45.575  [2024-11-19 16:51:38.264504] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7fff3431e460 (0): Success
00:07:45.575  passed
00:07:45.575    Test: test_nvme_tcp_ctrlr_create_io_qpair ...[2024-11-19 16:51:38.264565] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7fff3431e460 (0): Success
00:07:45.575  [2024-11-19 16:51:38.404426] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2422:nvme_tcp_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 0. Minimum queue size is 2.
00:07:45.575  [2024-11-19 16:51:38.404565] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2422:nvme_tcp_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 1. Minimum queue size is 2.
00:07:45.575  passed
00:07:45.575    Test: test_nvme_tcp_ctrlr_delete_io_qpair ...passed
00:07:45.575    Test: test_nvme_tcp_poll_group_get_stats ...passed
00:07:45.575    Test: test_nvme_tcp_ctrlr_construct ...[2024-11-19 16:51:38.404793] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2849:nvme_tcp_poll_group_get_stats: *ERROR*: Invalid stats or group pointer
00:07:45.575  [2024-11-19 16:51:38.404850] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2849:nvme_tcp_poll_group_get_stats: *ERROR*: Invalid stats or group pointer
00:07:45.575  [2024-11-19 16:51:38.405095] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2422:nvme_tcp_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 1. Minimum queue size is 2.
00:07:45.575  [2024-11-19 16:51:38.405155] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair
00:07:45.575  [2024-11-19 16:51:38.405292] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2239:nvme_tcp_qpair_connect_sock: *ERROR*: Unhandled ADRFAM 254
00:07:45.575  [2024-11-19 16:51:38.405386] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair
00:07:45.575  [2024-11-19 16:51:38.405506] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x613000001540 with addr=192.168.1.78, port=23
00:07:45.575  [2024-11-19 16:51:38.405596] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair
00:07:45.575  passed
00:07:45.575    Test: test_nvme_tcp_qpair_submit_request ...[2024-11-19 16:51:38.405757] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 783:nvme_tcp_build_sgl_request: *ERROR*: Failed to construct tcp_req=0x613000001a80, and the iovcnt=1, remaining_size=1024
00:07:45.575  [2024-11-19 16:51:38.405831] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 961:nvme_tcp_qpair_submit_request: *ERROR*: nvme_tcp_req_init() failed
00:07:45.575  passed
00:07:45.575  
00:07:45.575  
00:07:45.575  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:45.575                suites      1      1    n/a      0        0
00:07:45.575                 tests     27     27     27      0        0
00:07:45.575               asserts    624    624    624      0      n/a
00:07:45.575  
00:07:45.575  Elapsed time =    0.145 seconds
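The test_nvme_tcp_icresp_handle errors restate the NVMe/TCP ICResp acceptance rules the host enforces: PFV must be 0, maxh2cdata at least 4096, cpda at most 31. A self-contained sketch of those three checks; the struct below is a simplified stand-in, not SPDK's wire-format definition:

#include <inttypes.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

struct icresp {
        uint16_t pfv;        /* PDU format version; only 0 is defined */
        uint32_t maxh2cdata; /* max host-to-controller data per PDU */
        uint8_t  cpda;       /* controller PDU data alignment, 0..31 */
};

static bool icresp_valid(const struct icresp *r)
{
        if (r->pfv != 0) {
                fprintf(stderr, "Expected ICResp PFV 0, got %u\n", (unsigned)r->pfv);
                return false;
        }
        if (r->maxh2cdata < 4096) {
                fprintf(stderr, "Expected ICResp maxh2cdata >=4096, got %" PRIu32 "\n",
                        r->maxh2cdata);
                return false;
        }
        if (r->cpda > 31) {
                fprintf(stderr, "Expected ICResp cpda <=31, got %u\n", (unsigned)r->cpda);
                return false;
        }
        return true;
}

int main(void)
{
        /* The values the unit test feeds in: PFV 1, maxh2cdata 2048, cpda 64. */
        struct icresp bad = { .pfv = 1, .maxh2cdata = 2048, .cpda = 64 };
        return icresp_valid(&bad) ? 1 : 0;
}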
00:07:45.834   16:51:38	-- unit/unittest.sh@98 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_transport.c/nvme_transport_ut
00:07:45.834  
00:07:45.834  
00:07:45.834       CUnit - A unit testing framework for C - Version 2.1-3
00:07:45.834       http://cunit.sourceforge.net/
00:07:45.834  
00:07:45.834  
00:07:45.834  Suite: nvme_transport
00:07:45.834    Test: test_nvme_get_transport ...passed
00:07:45.834    Test: test_nvme_transport_poll_group_connect_qpair ...passed
00:07:45.835    Test: test_nvme_transport_poll_group_disconnect_qpair ...passed
00:07:45.835    Test: test_nvme_transport_poll_group_add_remove ...passed
00:07:45.835    Test: test_ctrlr_get_memory_domains ...passed
00:07:45.835  
00:07:45.835  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:45.835                suites      1      1    n/a      0        0
00:07:45.835                 tests      5      5      5      0        0
00:07:45.835               asserts     28     28     28      0      n/a
00:07:45.835  
00:07:45.835  Elapsed time =    0.000 seconds
00:07:45.835   16:51:38	-- unit/unittest.sh@99 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_io_msg.c/nvme_io_msg_ut
00:07:45.835  
00:07:45.835  
00:07:45.835       CUnit - A unit testing framework for C - Version 2.1-3
00:07:45.835       http://cunit.sourceforge.net/
00:07:45.835  
00:07:45.835  
00:07:45.835  Suite: nvme_io_msg
00:07:45.835    Test: test_nvme_io_msg_send ...passed
00:07:45.835    Test: test_nvme_io_msg_process ...passed
00:07:45.835    Test: test_nvme_io_msg_ctrlr_register_unregister ...passed
00:07:45.835  
00:07:45.835  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:45.835                suites      1      1    n/a      0        0
00:07:45.835                 tests      3      3      3      0        0
00:07:45.835               asserts     56     56     56      0      n/a
00:07:45.835  
00:07:45.835  Elapsed time =    0.000 seconds
00:07:45.835   16:51:38	-- unit/unittest.sh@100 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_pcie_common.c/nvme_pcie_common_ut
00:07:45.835  
00:07:45.835  
00:07:45.835       CUnit - A unit testing framework for C - Version 2.1-3
00:07:45.835       http://cunit.sourceforge.net/
00:07:45.835  
00:07:45.835  
00:07:45.835  Suite: nvme_pcie_common
00:07:45.835    Test: test_nvme_pcie_ctrlr_alloc_cmb ...passed
00:07:45.835    Test: test_nvme_pcie_qpair_construct_destroy ...[2024-11-19 16:51:38.532870] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:  87:nvme_pcie_ctrlr_alloc_cmb: *ERROR*: Tried to allocate past valid CMB range!
00:07:45.835  passed
00:07:45.835    Test: test_nvme_pcie_ctrlr_cmd_create_delete_io_queue ...passed
00:07:45.835    Test: test_nvme_pcie_ctrlr_connect_qpair ...[2024-11-19 16:51:38.533736] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 503:nvme_completion_create_cq_cb: *ERROR*: nvme_create_io_cq failed!
00:07:45.835  [2024-11-19 16:51:38.533873] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 456:nvme_completion_create_sq_cb: *ERROR*: nvme_create_io_sq failed, deleting cq!
00:07:45.835  [2024-11-19 16:51:38.533920] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 550:_nvme_pcie_ctrlr_create_io_qpair: *ERROR*: Failed to send request to create_io_cq
00:07:45.835  passed
00:07:45.835    Test: test_nvme_pcie_ctrlr_construct_admin_qpair ...passed
00:07:45.835    Test: test_nvme_pcie_poll_group_get_stats ...[2024-11-19 16:51:38.534418] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1791:nvme_pcie_poll_group_get_stats: *ERROR*: Invalid stats or group pointer
00:07:45.835  [2024-11-19 16:51:38.534483] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1791:nvme_pcie_poll_group_get_stats: *ERROR*: Invalid stats or group pointer
00:07:45.835  passed
00:07:45.835  
00:07:45.835  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:45.835                suites      1      1    n/a      0        0
00:07:45.835                 tests      6      6      6      0        0
00:07:45.835               asserts    148    148    148      0      n/a
00:07:45.835  
00:07:45.835  Elapsed time =    0.002 seconds
00:07:45.835   16:51:38	-- unit/unittest.sh@101 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_fabric.c/nvme_fabric_ut
00:07:45.835  
00:07:45.835  
00:07:45.835       CUnit - A unit testing framework for C - Version 2.1-3
00:07:45.835       http://cunit.sourceforge.net/
00:07:45.835  
00:07:45.835  
00:07:45.835  Suite: nvme_fabric
00:07:45.835    Test: test_nvme_fabric_prop_set_cmd ...passed
00:07:45.835    Test: test_nvme_fabric_prop_get_cmd ...passed
00:07:45.835    Test: test_nvme_fabric_get_discovery_log_page ...passed
00:07:45.835    Test: test_nvme_fabric_discover_probe ...passed
00:07:45.835    Test: test_nvme_fabric_qpair_connect ...[2024-11-19 16:51:38.571693] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -125, trtype:(null) adrfam:(null) traddr: trsvcid: subnqn:nqn.2016-06.io.spdk:subsystem1
00:07:45.835  passed
00:07:45.835  
00:07:45.835  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:45.835                suites      1      1    n/a      0        0
00:07:45.835                 tests      5      5      5      0        0
00:07:45.835               asserts     60     60     60      0      n/a
00:07:45.835  
00:07:45.835  Elapsed time =    0.001 seconds
00:07:45.835   16:51:38	-- unit/unittest.sh@102 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_opal.c/nvme_opal_ut
00:07:45.835  
00:07:45.835  
00:07:45.835       CUnit - A unit testing framework for C - Version 2.1-3
00:07:45.835       http://cunit.sourceforge.net/
00:07:45.835  
00:07:45.835  
00:07:45.835  Suite: nvme_opal
00:07:45.835    Test: test_opal_nvme_security_recv_send_done ...passed
00:07:45.835    Test: test_opal_add_short_atom_header ...[2024-11-19 16:51:38.611899] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_opal.c: 171:opal_add_token_bytestring: *ERROR*: Error adding bytestring: end of buffer.
00:07:45.835  passed
00:07:45.835  
00:07:45.835  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:45.835                suites      1      1    n/a      0        0
00:07:45.835                 tests      2      2      2      0        0
00:07:45.835               asserts     22     22     22      0      n/a
00:07:45.835  
00:07:45.835  Elapsed time =    0.001 seconds
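test_opal_add_short_atom_header drives a bytestring append past the end of its command buffer, producing the "end of buffer" error above. A generic sketch of that bounds check with hypothetical names (cmd_buf, pos, a 32-byte payload); SPDK's opal structures differ:

#include <stdbool.h>
#include <stdio.h>
#include <string.h>

struct cmd_buf {
        unsigned char data[32];
        size_t pos; /* append cursor */
};

static bool add_bytestring(struct cmd_buf *c, const void *bytes, size_t len)
{
        if (len > sizeof(c->data) - c->pos) {
                fprintf(stderr, "Error adding bytestring: end of buffer.\n");
                return false;
        }
        memcpy(c->data + c->pos, bytes, len);
        c->pos += len;
        return true;
}

int main(void)
{
        struct cmd_buf c = { .pos = 0 };
        unsigned char big[64] = { 0 };
        /* 64 bytes into a 32-byte buffer: rejected, as in the test above. */
        return add_bytestring(&c, big, sizeof(big)) ? 1 : 0;
}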
00:07:45.835  
00:07:45.835  real	0m1.415s
00:07:45.835  user	0m0.687s
00:07:45.835  sys	0m0.587s
00:07:45.835   16:51:38	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:07:45.835   16:51:38	-- common/autotest_common.sh@10 -- # set +x
00:07:45.835  ************************************
00:07:45.835  END TEST unittest_nvme
00:07:45.835  ************************************
00:07:45.835   16:51:38	-- unit/unittest.sh@223 -- # run_test unittest_log /home/vagrant/spdk_repo/spdk/test/unit/lib/log/log.c/log_ut
00:07:45.835   16:51:38	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:07:45.835   16:51:38	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:07:45.835   16:51:38	-- common/autotest_common.sh@10 -- # set +x
00:07:46.094  ************************************
00:07:46.094  START TEST unittest_log
00:07:46.094  ************************************
00:07:46.094   16:51:38	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/log/log.c/log_ut
00:07:46.094  
00:07:46.094  
00:07:46.094       CUnit - A unit testing framework for C - Version 2.1-3
00:07:46.094       http://cunit.sourceforge.net/
00:07:46.094  
00:07:46.094  
00:07:46.094  Suite: log
00:07:46.094    Test: log_test ...[2024-11-19 16:51:38.715105] log_ut.c:  54:log_test: *WARNING*: log warning unit test
00:07:46.094  [2024-11-19 16:51:38.715742] log_ut.c:  55:log_test: *DEBUG*: log test
00:07:46.094  log dump test:
00:07:46.094  00000000  6c 6f 67 20 64 75 6d 70                            log dump
00:07:46.094  spdk dump test:
00:07:46.094  00000000  73 70 64 6b 20 64 75 6d  70                        spdk dump
00:07:46.094  spdk dump test:
00:07:46.094  00000000  73 70 64 6b 20 64 75 6d  70 20 31 36 20 6d 6f 72  spdk dump 16 mor
00:07:46.094  00000010  65 20 63 68 61 72 73                              e chars
00:07:46.095  passed
00:07:47.029    Test: deprecation ...passed
00:07:47.029  
00:07:47.029  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:47.029                suites      1      1    n/a      0        0
00:07:47.029                 tests      2      2      2      0        0
00:07:47.029               asserts     73     73     73      0      n/a
00:07:47.029  
00:07:47.029  Elapsed time =    0.001 seconds
00:07:47.029  
00:07:47.029  real	0m1.043s
00:07:47.029  user	0m0.025s
00:07:47.029  sys	0m0.017s
00:07:47.029   16:51:39	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:07:47.029   16:51:39	-- common/autotest_common.sh@10 -- # set +x
00:07:47.029  ************************************
00:07:47.029  END TEST unittest_log
00:07:47.029  ************************************
00:07:47.029   16:51:39	-- unit/unittest.sh@224 -- # run_test unittest_lvol /home/vagrant/spdk_repo/spdk/test/unit/lib/lvol/lvol.c/lvol_ut
00:07:47.029   16:51:39	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:07:47.029   16:51:39	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:07:47.029   16:51:39	-- common/autotest_common.sh@10 -- # set +x
00:07:47.029  ************************************
00:07:47.029  START TEST unittest_lvol
00:07:47.029  ************************************
00:07:47.029   16:51:39	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/lvol/lvol.c/lvol_ut
00:07:47.029  
00:07:47.029  
00:07:47.029       CUnit - A unit testing framework for C - Version 2.1-3
00:07:47.029       http://cunit.sourceforge.net/
00:07:47.029  
00:07:47.029  
00:07:47.029  Suite: lvol
00:07:47.029    Test: lvs_init_unload_success ...[2024-11-19 16:51:39.833135] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 892:spdk_lvs_unload: *ERROR*: Lvols still open on lvol store
00:07:47.029  passed
00:07:47.029    Test: lvs_init_destroy_success ...[2024-11-19 16:51:39.833715] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 962:spdk_lvs_destroy: *ERROR*: Lvols still open on lvol store
00:07:47.029  passed
00:07:47.029    Test: lvs_init_opts_success ...passed
00:07:47.029    Test: lvs_unload_lvs_is_null_fail ...[2024-11-19 16:51:39.833998] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 882:spdk_lvs_unload: *ERROR*: Lvol store is NULL
00:07:47.029  passed
00:07:47.029    Test: lvs_names ...[2024-11-19 16:51:39.834065] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 726:spdk_lvs_init: *ERROR*: No name specified.
00:07:47.029  [2024-11-19 16:51:39.834128] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 720:spdk_lvs_init: *ERROR*: Name has no null terminator.
00:07:47.029  [2024-11-19 16:51:39.834329] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 736:spdk_lvs_init: *ERROR*: lvolstore with name x already exists
00:07:47.029  passed
00:07:47.029    Test: lvol_create_destroy_success ...passed
00:07:47.029    Test: lvol_create_fail ...[2024-11-19 16:51:39.835178] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 689:spdk_lvs_init: *ERROR*: Blobstore device does not exist
00:07:47.029  [2024-11-19 16:51:39.835325] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1190:spdk_lvol_create: *ERROR*: lvol store does not exist
00:07:47.029  passed
00:07:47.029    Test: lvol_destroy_fail ...[2024-11-19 16:51:39.835690] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1026:lvol_delete_blob_cb: *ERROR*: Could not remove blob on lvol gracefully - forced removal
00:07:47.029  passed
00:07:47.029    Test: lvol_close ...[2024-11-19 16:51:39.835930] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1614:spdk_lvol_close: *ERROR*: lvol does not exist
00:07:47.029  [2024-11-19 16:51:39.835991] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 995:lvol_close_blob_cb: *ERROR*: Could not close blob on lvol
00:07:47.029  passed
00:07:47.029    Test: lvol_resize ...passed
00:07:47.029    Test: lvol_set_read_only ...passed
00:07:47.029    Test: test_lvs_load ...[2024-11-19 16:51:39.836896] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 631:lvs_opts_copy: *ERROR*: opts_size should not be zero
00:07:47.029  passed
00:07:47.029    Test: lvols_load ...[2024-11-19 16:51:39.836951] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 441:lvs_load: *ERROR*: Invalid options
00:07:47.029  [2024-11-19 16:51:39.837209] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 227:load_next_lvol: *ERROR*: Failed to fetch blobs list
00:07:47.029  [2024-11-19 16:51:39.837349] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 227:load_next_lvol: *ERROR*: Failed to fetch blobs list
00:07:47.029  passed
00:07:47.029    Test: lvol_open ...passed
00:07:47.029    Test: lvol_snapshot ...passed
00:07:47.029    Test: lvol_snapshot_fail ...[2024-11-19 16:51:39.838149] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name snap already exists
00:07:47.029  passed
00:07:47.029    Test: lvol_clone ...passed
00:07:47.029    Test: lvol_clone_fail ...[2024-11-19 16:51:39.838797] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name clone already exists
00:07:47.029  passed
00:07:47.029    Test: lvol_iter_clones ...passed
00:07:47.029    Test: lvol_refcnt ...[2024-11-19 16:51:39.839417] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1572:spdk_lvol_destroy: *ERROR*: Cannot destroy lvol 1aec3a28-574d-4b4d-a638-d69bc511e03c because it is still open
00:07:47.029  passed
00:07:47.029    Test: lvol_names ...[2024-11-19 16:51:39.839654] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1156:lvs_verify_lvol_name: *ERROR*: Name has no null terminator.
00:07:47.030  [2024-11-19 16:51:39.839783] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol already exists
00:07:47.030  [2024-11-19 16:51:39.840037] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1169:lvs_verify_lvol_name: *ERROR*: lvol with name tmp_name is already being created
00:07:47.030  passed
00:07:47.030    Test: lvol_create_thin_provisioned ...passed
00:07:47.030    Test: lvol_rename ...[2024-11-19 16:51:39.840581] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol already exists
00:07:47.030  [2024-11-19 16:51:39.840696] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1524:spdk_lvol_rename: *ERROR*: Lvol lvol_new already exists in lvol store lvs
00:07:47.030  passed
00:07:47.030    Test: lvs_rename ...[2024-11-19 16:51:39.840913] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 769:lvs_rename_cb: *ERROR*: Lvol store rename operation failed
00:07:47.030  passed
00:07:47.030    Test: lvol_inflate ...[2024-11-19 16:51:39.841133] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1658:lvol_inflate_cb: *ERROR*: Could not inflate lvol
00:07:47.030  passed
00:07:47.030    Test: lvol_decouple_parent ...[2024-11-19 16:51:39.841430] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1658:lvol_inflate_cb: *ERROR*: Could not inflate lvol
00:07:47.030  passed
00:07:47.030    Test: lvol_get_xattr ...passed
00:07:47.030    Test: lvol_esnap_reload ...passed
00:07:47.030    Test: lvol_esnap_create_bad_args ...[2024-11-19 16:51:39.841946] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1245:spdk_lvol_create_esnap_clone: *ERROR*: lvol store does not exist
00:07:47.030  [2024-11-19 16:51:39.842006] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1156:lvs_verify_lvol_name: *ERROR*: Name has no null terminator.
00:07:47.030  [2024-11-19 16:51:39.842072] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1258:spdk_lvol_create_esnap_clone: *ERROR*: Cannot create 'lvs/clone1': size 4198400 is not an integer multiple of cluster size 1048576
00:07:47.030  [2024-11-19 16:51:39.842214] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol already exists
00:07:47.030  [2024-11-19 16:51:39.842380] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name clone1 already exists
00:07:47.030  passed
00:07:47.030    Test: lvol_esnap_create_delete ...passed
00:07:47.030    Test: lvol_esnap_load_esnaps ...passed
00:07:47.030    Test: lvol_esnap_missing ...[2024-11-19 16:51:39.842728] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1832:lvs_esnap_bs_dev_create: *ERROR*: Blob 0x2a: no lvs context nor lvol context
00:07:47.030  [2024-11-19 16:51:39.842905] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol1 already exists
00:07:47.030  [2024-11-19 16:51:39.842964] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol1 already exists
00:07:47.030  passed
00:07:47.030    Test: lvol_esnap_hotplug ...
00:07:47.030  	lvol_esnap_hotplug scenario 0: PASS - one missing, happy path
00:07:47.030  	lvol_esnap_hotplug scenario 1: PASS - one missing, cb registers degraded_set
00:07:47.030  	lvol_esnap_hotplug scenario 2: PASS - one missing, cb returns -ENOMEM
00:07:47.030  [2024-11-19 16:51:39.843657] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2062:lvs_esnap_degraded_hotplug: *ERROR*: lvol 44131ee6-f908-4aa4-aa3d-3a013baf07e5: failed to create esnap bs_dev: error -12
00:07:47.030  	lvol_esnap_hotplug scenario 3: PASS - two missing with same esnap, happy path
00:07:47.030  	lvol_esnap_hotplug scenario 4: PASS - two missing with same esnap, first -ENOMEM
00:07:47.030  	lvol_esnap_hotplug scenario 5: PASS - two missing with same esnap, second -ENOMEM
00:07:47.030  [2024-11-19 16:51:39.843885] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2062:lvs_esnap_degraded_hotplug: *ERROR*: lvol 9433a69f-fe70-4bf1-8342-06a00cfe6037: failed to create esnap bs_dev: error -12
00:07:47.030  [2024-11-19 16:51:39.844007] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2062:lvs_esnap_degraded_hotplug: *ERROR*: lvol 510edff4-aa65-464d-8b93-572ac14fdcc0: failed to create esnap bs_dev: error -12
00:07:47.030  	lvol_esnap_hotplug scenario 6: PASS - two missing with different esnaps, happy path
00:07:47.030  	lvol_esnap_hotplug scenario 7: PASS - two missing with different esnaps, first still missing
00:07:47.030  	lvol_esnap_hotplug scenario 8: PASS - three missing with same esnap, happy path
00:07:47.030  	lvol_esnap_hotplug scenario 9: PASS - three missing with same esnap, first still missing
00:07:47.030  	lvol_esnap_hotplug scenario 10: PASS - three missing with same esnap, first two still missing
00:07:47.030  	lvol_esnap_hotplug scenario 11: PASS - three missing with same esnap, middle still missing
00:07:47.030  	lvol_esnap_hotplug scenario 12: PASS - three missing with same esnap, last still missing
00:07:47.030  passed
00:07:47.030    Test: lvol_get_by ...passed
00:07:47.030  
00:07:47.030  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:47.030                suites      1      1    n/a      0        0
00:07:47.030                 tests     34     34     34      0        0
00:07:47.030               asserts   1439   1439   1439      0      n/a
00:07:47.030  
00:07:47.030  Elapsed time =    0.012 seconds
00:07:47.030  
00:07:47.030  real	0m0.064s
00:07:47.030  user	0m0.029s
00:07:47.030  sys	0m0.036s
00:07:47.030   16:51:39	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:07:47.030   16:51:39	-- common/autotest_common.sh@10 -- # set +x
00:07:47.030  ************************************
00:07:47.030  END TEST unittest_lvol
00:07:47.030  ************************************
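Several lvol failures above ("No name specified.", "Name has no null terminator.") come from validating a name against a fixed-size buffer before use. A generic sketch of that check; NAME_MAX_LEN is an illustrative bound, not SPDK's actual limit:

#include <stdbool.h>
#include <stdio.h>
#include <string.h>

#define NAME_MAX_LEN 64 /* illustrative only */

static bool name_valid(const char *name, size_t buf_len)
{
        if (buf_len == 0 || name[0] == '\0') {
                fprintf(stderr, "No name specified.\n");
                return false;
        }
        if (memchr(name, '\0', buf_len) == NULL) {
                fprintf(stderr, "Name has no null terminator.\n");
                return false;
        }
        return true;
}

int main(void)
{
        char unterminated[4] = { 'l', 'v', 'o', 'l' }; /* no NUL in buffer */
        name_valid("", NAME_MAX_LEN);                   /* rejected: empty */
        name_valid(unterminated, sizeof(unterminated)); /* rejected: no NUL */
        return 0;
}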
00:07:47.290   16:51:39	-- unit/unittest.sh@225 -- # grep -q '#define SPDK_CONFIG_RDMA 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h
00:07:47.290   16:51:39	-- unit/unittest.sh@226 -- # run_test unittest_nvme_rdma /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut
00:07:47.290   16:51:39	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:07:47.290   16:51:39	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:07:47.290   16:51:39	-- common/autotest_common.sh@10 -- # set +x
00:07:47.290  ************************************
00:07:47.290  START TEST unittest_nvme_rdma
00:07:47.290  ************************************
00:07:47.290   16:51:39	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut
00:07:47.290  
00:07:47.290  
00:07:47.290       CUnit - A unit testing framework for C - Version 2.1-3
00:07:47.290       http://cunit.sourceforge.net/
00:07:47.290  
00:07:47.290  
00:07:47.290  Suite: nvme_rdma
00:07:47.290    Test: test_nvme_rdma_build_sgl_request ...[2024-11-19 16:51:39.965650] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1455:nvme_rdma_get_memory_translation: *ERROR*: RDMA memory translation failed, rc -34
00:07:47.290  [2024-11-19 16:51:39.966092] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1628:nvme_rdma_build_sgl_request: *ERROR*: SGL length 16777216 exceeds max keyed SGL block size 16777215
00:07:47.290  [2024-11-19 16:51:39.966219] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1684:nvme_rdma_build_sgl_request: *ERROR*: Size of SGL descriptors (64) exceeds ICD (60)
00:07:47.290  passed
00:07:47.290    Test: test_nvme_rdma_build_sgl_inline_request ...passed
00:07:47.290    Test: test_nvme_rdma_build_contig_request ...passed
00:07:47.290    Test: test_nvme_rdma_build_contig_inline_request ...passed
00:07:47.291    Test: test_nvme_rdma_create_reqs ...[2024-11-19 16:51:39.966328] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1565:nvme_rdma_build_contig_request: *ERROR*: SGL length 16777216 exceeds max keyed SGL block size 16777215
00:07:47.291  [2024-11-19 16:51:39.966501] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1007:nvme_rdma_create_reqs: *ERROR*: Failed to allocate rdma_reqs
00:07:47.291  passed
00:07:47.291    Test: test_nvme_rdma_create_rsps ...[2024-11-19 16:51:39.966961] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 925:nvme_rdma_create_rsps: *ERROR*: Failed to allocate rsp_sgls
00:07:47.291  passed
00:07:47.291    Test: test_nvme_rdma_ctrlr_create_qpair ...[2024-11-19 16:51:39.967232] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1822:nvme_rdma_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 0. Minimum queue size is 2.
00:07:47.291  passed
00:07:47.291    Test: test_nvme_rdma_poller_create ...[2024-11-19 16:51:39.967315] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1822:nvme_rdma_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 1. Minimum queue size is 2.
00:07:47.291  passed
00:07:47.291    Test: test_nvme_rdma_qpair_process_cm_event ...passed
00:07:47.291    Test: test_nvme_rdma_ctrlr_construct ...[2024-11-19 16:51:39.967544] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 526:nvme_rdma_qpair_process_cm_event: *ERROR*: Unexpected Acceptor Event [255]
00:07:47.291  passed
00:07:47.291    Test: test_nvme_rdma_req_put_and_get ...passed
00:07:47.291    Test: test_nvme_rdma_req_init ...passed
00:07:47.291    Test: test_nvme_rdma_validate_cm_event ...[2024-11-19 16:51:39.967894] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ADDR_RESOLVED but received RDMA_CM_EVENT_CONNECT_RESPONSE (5) from CM event channel (status = 0)
00:07:47.291  passed
00:07:47.291    Test: test_nvme_rdma_qpair_init ...[2024-11-19 16:51:39.967957] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 10)
00:07:47.291  passed
00:07:47.291    Test: test_nvme_rdma_qpair_submit_request ...passed
00:07:47.291    Test: test_nvme_rdma_memory_domain ...passed
00:07:47.291    Test: test_rdma_ctrlr_get_memory_domains ...passed
00:07:47.291    Test: test_rdma_get_memory_translation ...[2024-11-19 16:51:39.968198] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 352:nvme_rdma_get_memory_domain: *ERROR*: Failed to create memory domain
00:07:47.291  [2024-11-19 16:51:39.968319] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1444:nvme_rdma_get_memory_translation: *ERROR*: DMA memory translation failed, rc -1, iov count 0
00:07:47.291  passed
00:07:47.291    Test: test_get_rdma_qpair_from_wc ...[2024-11-19 16:51:39.968392] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1455:nvme_rdma_get_memory_translation: *ERROR*: RDMA memory translation failed, rc -1
00:07:47.291  passed
00:07:47.291    Test: test_nvme_rdma_ctrlr_get_max_sges ...passed
00:07:47.291    Test: test_nvme_rdma_poll_group_get_stats ...[2024-11-19 16:51:39.968512] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3239:nvme_rdma_poll_group_get_stats: *ERROR*: Invalid stats or group pointer
00:07:47.291  [2024-11-19 16:51:39.968570] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3239:nvme_rdma_poll_group_get_stats: *ERROR*: Invalid stats or group pointer
00:07:47.291  passed
00:07:47.291    Test: test_nvme_rdma_qpair_set_poller ...[2024-11-19 16:51:39.968726] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:2972:nvme_rdma_poller_create: *ERROR*: Unable to create CQ, errno 2.
00:07:47.291  [2024-11-19 16:51:39.968795] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3018:nvme_rdma_poll_group_get_poller: *ERROR*: Failed to create a poller for device 0xfeedbeef
00:07:47.291  [2024-11-19 16:51:39.968847] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 723:nvme_rdma_qpair_set_poller: *ERROR*: Unable to find a cq for qpair 0x7fff36b21c80 on poll group 0x60b0000001a0
00:07:47.291  [2024-11-19 16:51:39.968919] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:2972:nvme_rdma_poller_create: *ERROR*: Unable to create CQ, errno 2.
00:07:47.291  [2024-11-19 16:51:39.968998] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3018:nvme_rdma_poll_group_get_poller: *ERROR*: Failed to create a poller for device (nil)
00:07:47.291  [2024-11-19 16:51:39.969051] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 723:nvme_rdma_qpair_set_poller: *ERROR*: Unable to find a cq for qpair 0x7fff36b21c80 on poll group 0x60b0000001a0
00:07:47.291  [2024-11-19 16:51:39.969151] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 701:nvme_rdma_resize_cq: *ERROR*: RDMA CQ resize failed: errno 2: No such file or directory
00:07:47.291  passed
00:07:47.291  
00:07:47.291  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:47.291                suites      1      1    n/a      0        0
00:07:47.291                 tests     22     22     22      0        0
00:07:47.291               asserts    412    412    412      0      n/a
00:07:47.291  
00:07:47.291  Elapsed time =    0.004 seconds
00:07:47.291  
00:07:47.291  real	0m0.045s
00:07:47.291  user	0m0.023s
00:07:47.291  sys	0m0.022s
00:07:47.291   16:51:39	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:07:47.291   16:51:39	-- common/autotest_common.sh@10 -- # set +x
00:07:47.291  ************************************
00:07:47.291  END TEST unittest_nvme_rdma
00:07:47.291  ************************************
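The 16777215 cap in test_nvme_rdma_build_sgl_request is the 24-bit length field of an NVMe keyed SGL data block descriptor: 2^24 - 1 = 16777215 bytes, so the test's 16777216-byte (2^24) request is exactly one byte over. A standalone sketch of that bound check:

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

/* A keyed SGL data block descriptor carries its length in a 3-byte field,
 * so the largest describable block is (1 << 24) - 1 = 16777215 bytes. */
#define MAX_KEYED_SGL_LEN ((1u << 24) - 1)

static int sgl_len_ok(uint64_t len)
{
        if (len > MAX_KEYED_SGL_LEN) {
                fprintf(stderr,
                        "SGL length %" PRIu64 " exceeds max keyed SGL block size %u\n",
                        len, MAX_KEYED_SGL_LEN);
                return 0;
        }
        return 1;
}

int main(void)
{
        /* 1 << 24 = 16777216: one byte past the cap, as in the test above. */
        return sgl_len_ok((uint64_t)1 << 24) ? 1 : 0;
}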
00:07:47.291   16:51:40	-- unit/unittest.sh@227 -- # run_test unittest_nvmf_transport /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/transport.c/transport_ut
00:07:47.291   16:51:40	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:07:47.291   16:51:40	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:07:47.291   16:51:40	-- common/autotest_common.sh@10 -- # set +x
00:07:47.291  ************************************
00:07:47.291  START TEST unittest_nvmf_transport
00:07:47.291  ************************************
00:07:47.291   16:51:40	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/transport.c/transport_ut
00:07:47.291  
00:07:47.291  
00:07:47.291       CUnit - A unit testing framework for C - Version 2.1-3
00:07:47.291       http://cunit.sourceforge.net/
00:07:47.291  
00:07:47.291  
00:07:47.291  Suite: nvmf
00:07:47.291    Test: test_spdk_nvmf_transport_create ...[2024-11-19 16:51:40.080345] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 247:nvmf_transport_create: *ERROR*: Transport type 'new_ops' unavailable.
00:07:47.291  [2024-11-19 16:51:40.080747] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 267:nvmf_transport_create: *ERROR*: io_unit_size cannot be 0
00:07:47.291  [2024-11-19 16:51:40.080831] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 271:nvmf_transport_create: *ERROR*: io_unit_size 131072 is larger than iobuf pool large buffer size 65536
00:07:47.291  [2024-11-19 16:51:40.080981] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 254:nvmf_transport_create: *ERROR*: max_io_size 4096 must be a power of 2 and greater than or equal to 8KB
00:07:47.291  passed
00:07:47.291    Test: test_nvmf_transport_poll_group_create ...passed
00:07:47.291    Test: test_spdk_nvmf_transport_opts_init ...[2024-11-19 16:51:40.081294] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 788:spdk_nvmf_transport_opts_init: *ERROR*: Transport type invalid_ops unavailable.
00:07:47.291  passed
00:07:47.291    Test: test_spdk_nvmf_transport_listen_ext ...[2024-11-19 16:51:40.081402] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 793:spdk_nvmf_transport_opts_init: *ERROR*: opts should not be NULL
00:07:47.291  [2024-11-19 16:51:40.081453] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 798:spdk_nvmf_transport_opts_init: *ERROR*: opts_size inside opts should not be zero
00:07:47.291  passed
00:07:47.291  
00:07:47.291  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:47.291                suites      1      1    n/a      0        0
00:07:47.291                 tests      4      4      4      0        0
00:07:47.291               asserts     49     49     49      0      n/a
00:07:47.291  
00:07:47.291  Elapsed time =    0.001 seconds
00:07:47.291  
00:07:47.291  real	0m0.048s
00:07:47.291  user	0m0.014s
00:07:47.291  sys	0m0.035s
00:07:47.291   16:51:40	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:07:47.291   16:51:40	-- common/autotest_common.sh@10 -- # set +x
00:07:47.291  ************************************
00:07:47.291  END TEST unittest_nvmf_transport
00:07:47.291  ************************************
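The max_io_size message above combines two constraints: a power-of-two check and an 8KB floor. 4096 is a power of two but sits below the floor, which is why the test's value is rejected. A minimal sketch using the usual x & (x - 1) trick:

#include <inttypes.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define MIN_IO_SIZE 8192u /* the 8KB floor named in the message above */

static bool max_io_size_ok(uint32_t sz)
{
        /* A power of two has exactly one bit set, so sz & (sz - 1) is 0. */
        if (sz == 0 || (sz & (sz - 1)) != 0 || sz < MIN_IO_SIZE) {
                fprintf(stderr,
                        "max_io_size %" PRIu32 " must be a power of 2 and >= 8KB\n",
                        sz);
                return false;
        }
        return true;
}

int main(void)
{
        return max_io_size_ok(4096) ? 1 : 0; /* rejected: below the 8KB floor */
}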
00:07:47.551   16:51:40	-- unit/unittest.sh@228 -- # run_test unittest_rdma /home/vagrant/spdk_repo/spdk/test/unit/lib/rdma/common.c/common_ut
00:07:47.551   16:51:40	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:07:47.551   16:51:40	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:07:47.551   16:51:40	-- common/autotest_common.sh@10 -- # set +x
00:07:47.551  ************************************
00:07:47.551  START TEST unittest_rdma
00:07:47.551  ************************************
00:07:47.551   16:51:40	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/rdma/common.c/common_ut
00:07:47.551  
00:07:47.551  
00:07:47.551       CUnit - A unit testing framework for C - Version 2.1-3
00:07:47.551       http://cunit.sourceforge.net/
00:07:47.551  
00:07:47.551  
00:07:47.551  Suite: rdma_common
00:07:47.551    Test: test_spdk_rdma_pd ...[2024-11-19 16:51:40.193962] /home/vagrant/spdk_repo/spdk/lib/rdma/common.c: 533:spdk_rdma_get_pd: *ERROR*: Failed to get PD
00:07:47.551  [2024-11-19 16:51:40.194425] /home/vagrant/spdk_repo/spdk/lib/rdma/common.c: 533:spdk_rdma_get_pd: *ERROR*: Failed to get PD
00:07:47.551  passed
00:07:47.551  
00:07:47.551  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:47.551                suites      1      1    n/a      0        0
00:07:47.551                 tests      1      1      1      0        0
00:07:47.551               asserts     31     31     31      0      n/a
00:07:47.551  
00:07:47.551  Elapsed time =    0.001 seconds
00:07:47.551  
00:07:47.551  real	0m0.044s
00:07:47.551  user	0m0.026s
00:07:47.551  sys	0m0.018s
00:07:47.551   16:51:40	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:07:47.551   16:51:40	-- common/autotest_common.sh@10 -- # set +x
00:07:47.551  ************************************
00:07:47.551  END TEST unittest_rdma
00:07:47.551  ************************************
00:07:47.551   16:51:40	-- unit/unittest.sh@231 -- # grep -q '#define SPDK_CONFIG_NVME_CUSE 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h
00:07:47.551   16:51:40	-- unit/unittest.sh@232 -- # run_test unittest_nvme_cuse /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_cuse.c/nvme_cuse_ut
00:07:47.551   16:51:40	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:07:47.551   16:51:40	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:07:47.551   16:51:40	-- common/autotest_common.sh@10 -- # set +x
00:07:47.551  ************************************
00:07:47.551  START TEST unittest_nvme_cuse
00:07:47.551  ************************************
00:07:47.551   16:51:40	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_cuse.c/nvme_cuse_ut
00:07:47.551  
00:07:47.551  
00:07:47.551       CUnit - A unit testing framework for C - Version 2.1-3
00:07:47.551       http://cunit.sourceforge.net/
00:07:47.551  
00:07:47.551  
00:07:47.551  Suite: nvme_cuse
00:07:47.551    Test: test_cuse_nvme_submit_io_read_write ...passed
00:07:47.551    Test: test_cuse_nvme_submit_io_read_write_with_md ...passed
00:07:47.551    Test: test_cuse_nvme_submit_passthru_cmd ...passed
00:07:47.551    Test: test_cuse_nvme_submit_passthru_cmd_with_md ...passed
00:07:47.551    Test: test_nvme_cuse_get_cuse_ns_device ...passed
00:07:47.551    Test: test_cuse_nvme_submit_io ...[2024-11-19 16:51:40.310867] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_cuse.c: 656:cuse_nvme_submit_io: *ERROR*: SUBMIT_IO: opc:0 not valid
00:07:47.551  passed
00:07:47.551    Test: test_cuse_nvme_reset ...[2024-11-19 16:51:40.311568] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_cuse.c: 341:cuse_nvme_reset: *ERROR*: Namespace reset not supported
00:07:47.551  passed
00:07:47.551    Test: test_nvme_cuse_stop ...passed
00:07:47.551    Test: test_spdk_nvme_cuse_get_ctrlr_name ...passed
00:07:47.551  
00:07:47.551  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:47.551                suites      1      1    n/a      0        0
00:07:47.551                 tests      9      9      9      0        0
00:07:47.551               asserts    121    121    121      0      n/a
00:07:47.551  
00:07:47.551  Elapsed time =    0.001 seconds
00:07:47.551  
00:07:47.551  real	0m0.038s
00:07:47.551  user	0m0.017s
00:07:47.551  sys	0m0.020s
00:07:47.551   16:51:40	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:07:47.551   16:51:40	-- common/autotest_common.sh@10 -- # set +x
00:07:47.551  ************************************
00:07:47.551  END TEST unittest_nvme_cuse
00:07:47.551  ************************************
00:07:47.551   16:51:40	-- unit/unittest.sh@235 -- # run_test unittest_nvmf unittest_nvmf
00:07:47.551   16:51:40	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:07:47.551   16:51:40	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:07:47.551   16:51:40	-- common/autotest_common.sh@10 -- # set +x
00:07:47.551  ************************************
00:07:47.551  START TEST unittest_nvmf
00:07:47.551  ************************************
00:07:47.551   16:51:40	-- common/autotest_common.sh@1114 -- # unittest_nvmf
00:07:47.551   16:51:40	-- unit/unittest.sh@106 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/ctrlr.c/ctrlr_ut
00:07:47.811  
00:07:47.812  
00:07:47.812       CUnit - A unit testing framework for C - Version 2.1-3
00:07:47.812       http://cunit.sourceforge.net/
00:07:47.812  
00:07:47.812  
00:07:47.812  Suite: nvmf
00:07:47.812    Test: test_get_log_page ...passed
00:07:47.812    Test: test_process_fabrics_cmd ...[2024-11-19 16:51:40.430628] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x2
00:07:47.812  passed
00:07:47.812    Test: test_connect ...[2024-11-19 16:51:40.431528] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 905:nvmf_ctrlr_cmd_connect: *ERROR*: Connect command data length 0x3ff too small
00:07:47.812  [2024-11-19 16:51:40.431656] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 768:_nvmf_ctrlr_connect: *ERROR*: Connect command unsupported RECFMT 1234
00:07:47.812  [2024-11-19 16:51:40.431720] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 944:nvmf_ctrlr_cmd_connect: *ERROR*: Connect HOSTNQN is not null terminated
00:07:47.812  [2024-11-19 16:51:40.431748] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:subsystem1' does not allow host 'nqn.2016-06.io.spdk:host1'
00:07:47.812  [2024-11-19 16:51:40.431855] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 779:_nvmf_ctrlr_connect: *ERROR*: Invalid SQSIZE = 0
00:07:47.812  [2024-11-19 16:51:40.431900] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 786:_nvmf_ctrlr_connect: *ERROR*: Invalid SQSIZE for admin queue 32 (min 1, max 31)
00:07:47.812  [2024-11-19 16:51:40.432018] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 792:_nvmf_ctrlr_connect: *ERROR*: Invalid SQSIZE 64 (min 1, max 63)
00:07:47.812  [2024-11-19 16:51:40.432063] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 819:_nvmf_ctrlr_connect: *ERROR*: The NVMf target only supports dynamic mode (CNTLID = 0x1234).
00:07:47.812  [2024-11-19 16:51:40.432183] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0xffff
00:07:47.812  [2024-11-19 16:51:40.432265] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 587:nvmf_ctrlr_add_io_qpair: *ERROR*: I/O connect not allowed on discovery controller
00:07:47.812  [2024-11-19 16:51:40.432556] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 593:nvmf_ctrlr_add_io_qpair: *ERROR*: Got I/O connect before ctrlr was enabled
00:07:47.812  [2024-11-19 16:51:40.432637] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 599:nvmf_ctrlr_add_io_qpair: *ERROR*: Got I/O connect with invalid IOSQES 3
00:07:47.812  [2024-11-19 16:51:40.432725] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 606:nvmf_ctrlr_add_io_qpair: *ERROR*: Got I/O connect with invalid IOCQES 3
00:07:47.812  [2024-11-19 16:51:40.432795] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 623:nvmf_ctrlr_add_io_qpair: *ERROR*: Requested QID 3 but Max QID is 2
00:07:47.812  [2024-11-19 16:51:40.432908] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 232:ctrlr_add_qpair_and_send_rsp: *ERROR*: Got I/O connect with duplicate QID 1
00:07:47.812  passed
00:07:47.812    Test: test_get_ns_id_desc_list ...[2024-11-19 16:51:40.433064] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 699:_nvmf_ctrlr_add_io_qpair: *ERROR*: Inactive admin qpair (state 2, group (nil))
00:07:47.812  passed
00:07:47.812    Test: test_identify_ns ...[2024-11-19 16:51:40.433306] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0
00:07:47.812  [2024-11-19 16:51:40.433523] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4
00:07:47.812  [2024-11-19 16:51:40.433674] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295
00:07:47.812  passed
00:07:47.812    Test: test_identify_ns_iocs_specific ...[2024-11-19 16:51:40.433818] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0
00:07:47.812  [2024-11-19 16:51:40.434125] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0
00:07:47.812  passed
00:07:47.812    Test: test_reservation_write_exclusive ...passed
00:07:47.812    Test: test_reservation_exclusive_access ...passed
00:07:47.812    Test: test_reservation_write_exclusive_regs_only_and_all_regs ...passed
00:07:47.812    Test: test_reservation_exclusive_access_regs_only_and_all_regs ...passed
00:07:47.812    Test: test_reservation_notification_log_page ...passed
00:07:47.812    Test: test_get_dif_ctx ...passed
00:07:47.812    Test: test_set_get_features ...[2024-11-19 16:51:40.434759] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1534:temp_threshold_opts_valid: *ERROR*: Invalid TMPSEL 9
00:07:47.812  [2024-11-19 16:51:40.434819] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1534:temp_threshold_opts_valid: *ERROR*: Invalid TMPSEL 9
00:07:47.812  [2024-11-19 16:51:40.434886] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1545:temp_threshold_opts_valid: *ERROR*: Invalid THSEL 3
00:07:47.812  [2024-11-19 16:51:40.434959] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1621:nvmf_ctrlr_set_features_error_recovery: *ERROR*: Host set unsupported DULBE bit
00:07:47.812  passed
00:07:47.812    Test: test_identify_ctrlr ...passed
00:07:47.812    Test: test_identify_ctrlr_iocs_specific ...passed
00:07:47.812    Test: test_custom_admin_cmd ...passed
00:07:47.812    Test: test_fused_compare_and_write ...[2024-11-19 16:51:40.435435] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4105:nvmf_ctrlr_process_io_fused_cmd: *ERROR*: Wrong sequence of fused operations
00:07:47.812  [2024-11-19 16:51:40.435485] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4094:nvmf_ctrlr_process_io_fused_cmd: *ERROR*: Wrong op code of fused operations
00:07:47.812  [2024-11-19 16:51:40.435541] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4112:nvmf_ctrlr_process_io_fused_cmd: *ERROR*: Wrong op code of fused operations
00:07:47.812  passed
00:07:47.812    Test: test_multi_async_event_reqs ...passed
00:07:47.812    Test: test_get_ana_log_page_one_ns_per_anagrp ...passed
00:07:47.812    Test: test_get_ana_log_page_multi_ns_per_anagrp ...passed
00:07:47.812    Test: test_multi_async_events ...passed
00:07:47.812    Test: test_rae ...passed
00:07:47.812    Test: test_nvmf_ctrlr_create_destruct ...passed
00:07:47.812    Test: test_nvmf_ctrlr_use_zcopy ...passed
00:07:47.812    Test: test_spdk_nvmf_request_zcopy_start ...passed
00:07:47.812    Test: test_zcopy_read ...passed
00:07:47.812    Test: test_zcopy_write ...passed
00:07:47.812    Test: test_nvmf_property_set ...passed
00:07:47.812    Test: test_nvmf_ctrlr_get_features_host_behavior_support ...[2024-11-19 16:51:40.436042] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4232:nvmf_ctrlr_process_io_cmd: *ERROR*: I/O command sent before CONNECT
00:07:47.812  [2024-11-19 16:51:40.436191] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1832:nvmf_ctrlr_get_features_host_behavior_support: *ERROR*: invalid data buffer for Host Behavior Support
00:07:47.812  [2024-11-19 16:51:40.436277] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1832:nvmf_ctrlr_get_features_host_behavior_support: *ERROR*: invalid data buffer for Host Behavior Support
00:07:47.812  passed
00:07:47.812    Test: test_nvmf_ctrlr_set_features_host_behavior_support ...[2024-11-19 16:51:40.436332] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1855:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support invalid iovcnt: 0
00:07:47.812  [2024-11-19 16:51:40.436391] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1861:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support invalid iov_len: 0
00:07:47.812  [2024-11-19 16:51:40.436434] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1873:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support invalid acre: 0x02
00:07:47.812  passed
00:07:47.812  
00:07:47.812  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:47.812                suites      1      1    n/a      0        0
00:07:47.812                 tests     30     30     30      0        0
00:07:47.812               asserts    885    885    885      0      n/a
00:07:47.812  
00:07:47.812  Elapsed time =    0.006 seconds
00:07:47.812   16:51:40	-- unit/unittest.sh@107 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/ctrlr_bdev.c/ctrlr_bdev_ut
00:07:47.812  
00:07:47.812  
00:07:47.812       CUnit - A unit testing framework for C - Version 2.1-3
00:07:47.812       http://cunit.sourceforge.net/
00:07:47.812  
00:07:47.812  
00:07:47.812  Suite: nvmf
00:07:47.812    Test: test_get_rw_params ...passed
00:07:47.812    Test: test_lba_in_range ...passed
00:07:47.812    Test: test_get_dif_ctx ...passed
00:07:47.812    Test: test_nvmf_bdev_ctrlr_identify_ns ...passed
00:07:47.812    Test: test_spdk_nvmf_bdev_ctrlr_compare_and_write_cmd ...[2024-11-19 16:51:40.476820] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 435:nvmf_bdev_ctrlr_compare_and_write_cmd: *ERROR*: Fused command start lba / num blocks mismatch
00:07:47.812  [2024-11-19 16:51:40.477127] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 443:nvmf_bdev_ctrlr_compare_and_write_cmd: *ERROR*: end of media
00:07:47.812  [2024-11-19 16:51:40.477227] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 450:nvmf_bdev_ctrlr_compare_and_write_cmd: *ERROR*: Write NLB 2 * block size 512 > SGL length 1023
00:07:47.812  passed
00:07:47.812    Test: test_nvmf_bdev_ctrlr_zcopy_start ...[2024-11-19 16:51:40.477292] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 946:nvmf_bdev_ctrlr_zcopy_start: *ERROR*: end of media
00:07:47.812  [2024-11-19 16:51:40.477388] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 953:nvmf_bdev_ctrlr_zcopy_start: *ERROR*: Read NLB 2 * block size 512 > SGL length 1023
00:07:47.812  passed
00:07:47.812    Test: test_nvmf_bdev_ctrlr_cmd ...[2024-11-19 16:51:40.477508] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 389:nvmf_bdev_ctrlr_compare_cmd: *ERROR*: end of media
00:07:47.812  [2024-11-19 16:51:40.477553] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 396:nvmf_bdev_ctrlr_compare_cmd: *ERROR*: Compare NLB 3 * block size 512 > SGL length 512
00:07:47.812  [2024-11-19 16:51:40.477622] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 488:nvmf_bdev_ctrlr_write_zeroes_cmd: *ERROR*: invalid write zeroes size, should not exceed 1Kib
00:07:47.812  [2024-11-19 16:51:40.477664] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 495:nvmf_bdev_ctrlr_write_zeroes_cmd: *ERROR*: end of media
00:07:47.812  passed
00:07:47.812    Test: test_nvmf_bdev_ctrlr_read_write_cmd ...passed
00:07:47.812    Test: test_nvmf_bdev_ctrlr_nvme_passthru ...passed
00:07:47.812  
00:07:47.812  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:47.812                suites      1      1    n/a      0        0
00:07:47.812                 tests      9      9      9      0        0
00:07:47.812               asserts    157    157    157      0      n/a
00:07:47.812  
00:07:47.812  Elapsed time =    0.001 seconds
00:07:47.812   16:51:40	-- unit/unittest.sh@108 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/ctrlr_discovery.c/ctrlr_discovery_ut
00:07:47.812  
00:07:47.812  
00:07:47.812       CUnit - A unit testing framework for C - Version 2.1-3
00:07:47.812       http://cunit.sourceforge.net/
00:07:47.812  
00:07:47.812  
00:07:47.812  Suite: nvmf
00:07:47.812    Test: test_discovery_log ...passed
00:07:47.812    Test: test_discovery_log_with_filters ...passed
00:07:47.812  
00:07:47.812  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:47.812                suites      1      1    n/a      0        0
00:07:47.812                 tests      2      2      2      0        0
00:07:47.812               asserts    238    238    238      0      n/a
00:07:47.812  
00:07:47.812  Elapsed time =    0.003 seconds
00:07:47.812   16:51:40	-- unit/unittest.sh@109 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/subsystem.c/subsystem_ut
00:07:47.812  
00:07:47.812  
00:07:47.813       CUnit - A unit testing framework for C - Version 2.1-3
00:07:47.813       http://cunit.sourceforge.net/
00:07:47.813  
00:07:47.813  
00:07:47.813  Suite: nvmf
00:07:47.813    Test: nvmf_test_create_subsystem ...[2024-11-19 16:51:40.579165] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 125:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2016-06.io.spdk:". NQN must contain user specified name with a ':' as a prefix.
00:07:47.813  [2024-11-19 16:51:40.579576] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 134:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyz:sub". At least one Label is too long.
00:07:47.813  [2024-11-19 16:51:40.579690] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.3spdk:sub". Label names must start with a letter.
00:07:47.813  [2024-11-19 16:51:40.579743] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.-spdk:subsystem1". Label names must start with a letter.
00:07:47.813  [2024-11-19 16:51:40.579786] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 183:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.spdk-:subsystem1". Label names must end with an alphanumeric symbol.
00:07:47.813  [2024-11-19 16:51:40.579841] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io..spdk:subsystem1". Label names must start with a letter.
00:07:47.813  [2024-11-19 16:51:40.579980] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:  79:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2016-06.io.spdk:aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa": length 224 > max 223
00:07:47.813  [2024-11-19 16:51:40.580192] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 207:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.spdk:�subsystem1". Label names must contain only valid utf-8.
00:07:47.813  [2024-11-19 16:51:40.580314] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:  97:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:ff9b6406-0fc8-4779-80ca-4dca14bda0d2aaaa": uuid is not the correct length
00:07:47.813  [2024-11-19 16:51:40.580364] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 102:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:ff9b64-060fc8-4779-80ca-4dca14bda0d2": uuid is not formatted correctly
00:07:47.813  [2024-11-19 16:51:40.580405] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 102:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:ff9hg406-0fc8-4779-80ca-4dca14bda0d2": uuid is not formatted correctly
00:07:47.813  passed
00:07:47.813    Test: test_spdk_nvmf_subsystem_add_ns ...[2024-11-19 16:51:40.580637] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 5 already in use
00:07:47.813  [2024-11-19 16:51:40.580763] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:1774:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Invalid NSID 4294967295
00:07:47.813  passed
00:07:47.813    Test: test_spdk_nvmf_subsystem_set_sn ...passed
00:07:47.813    Test: test_reservation_register ...[2024-11-19 16:51:40.581082] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2823:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1
00:07:47.813  [2024-11-19 16:51:40.581229] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2881:nvmf_ns_reservation_register: *ERROR*: No registrant
00:07:47.813  passed
00:07:47.813    Test: test_reservation_register_with_ptpl ...passed
00:07:47.813    Test: test_reservation_acquire_preempt_1 ...[2024-11-19 16:51:40.582414] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2823:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1
00:07:47.813  passed
00:07:47.813    Test: test_reservation_acquire_release_with_ptpl ...passed
00:07:47.813    Test: test_reservation_release ...[2024-11-19 16:51:40.584324] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2823:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1
00:07:47.813  passed
00:07:47.813    Test: test_reservation_unregister_notification ...[2024-11-19 16:51:40.584615] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2823:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1
00:07:47.813  passed
00:07:47.813    Test: test_reservation_release_notification ...[2024-11-19 16:51:40.584944] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2823:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1
00:07:47.813  passed
00:07:47.813    Test: test_reservation_release_notification_write_exclusive ...[2024-11-19 16:51:40.585229] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2823:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1
00:07:47.813  passed
00:07:47.813    Test: test_reservation_clear_notification ...passed
00:07:47.813    Test: test_reservation_preempt_notification ...[2024-11-19 16:51:40.585498] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2823:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1
00:07:47.813  passed
00:07:47.813    Test: test_spdk_nvmf_ns_event ...[2024-11-19 16:51:40.585742] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2823:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1
00:07:47.813  passed
00:07:47.813    Test: test_nvmf_ns_reservation_add_remove_registrant ...passed
00:07:47.813    Test: test_nvmf_subsystem_add_ctrlr ...passed
00:07:47.813    Test: test_spdk_nvmf_subsystem_add_host ...[2024-11-19 16:51:40.586646] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 260:nvmf_transport_create: *ERROR*: max_aq_depth 0 is less than minimum defined by NVMf spec, use min value
00:07:47.813  [2024-11-19 16:51:40.586755] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 880:spdk_nvmf_subsystem_add_host: *ERROR*: Unable to add host to transport_ut transport
00:07:47.813  passed
00:07:47.813    Test: test_nvmf_ns_reservation_report ...[2024-11-19 16:51:40.586929] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3186:nvmf_ns_reservation_report: *ERROR*: NVMeoF uses extended controller data structure, please set EDS bit in cdw11 and try again
00:07:47.813  passed
00:07:47.813    Test: test_nvmf_nqn_is_valid ...[2024-11-19 16:51:40.587027] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:  85:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.": length 4 < min 11
00:07:47.813  [2024-11-19 16:51:40.587093] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:  97:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:514a80d7-a0d0-40b9-bdc1-429e7eb6e70": uuid is not the correct length
00:07:47.813  [2024-11-19 16:51:40.587149] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io...spdk:cnode1". Label names must start with a letter.
00:07:47.813  passed
00:07:47.813    Test: test_nvmf_ns_reservation_restore ...[2024-11-19 16:51:40.587271] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2380:nvmf_ns_reservation_restore: *ERROR*: Existing bdev UUID is not same with configuration file
00:07:47.813  passed
00:07:47.813    Test: test_nvmf_subsystem_state_change ...passed
00:07:47.813    Test: test_nvmf_reservation_custom_ops ...passed
00:07:47.813  
00:07:47.813  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:47.813                suites      1      1    n/a      0        0
00:07:47.813                 tests     22     22     22      0        0
00:07:47.813               asserts    407    407    407      0      n/a
00:07:47.813  
00:07:47.813  Elapsed time =    0.009 seconds
00:07:47.813   16:51:40	-- unit/unittest.sh@110 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/tcp.c/tcp_ut
00:07:47.813  
00:07:47.813  
00:07:47.813       CUnit - A unit testing framework for C - Version 2.1-3
00:07:47.813       http://cunit.sourceforge.net/
00:07:47.813  
00:07:47.813  
00:07:47.813  Suite: nvmf
00:07:47.813    Test: test_nvmf_tcp_create ...[2024-11-19 16:51:40.670348] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c: 732:nvmf_tcp_create: *ERROR*: Unsupported IO Unit size specified, 16 bytes
00:07:47.813  passed
00:07:48.073    Test: test_nvmf_tcp_destroy ...passed
00:07:48.073    Test: test_nvmf_tcp_poll_group_create ...passed
00:07:48.073    Test: test_nvmf_tcp_send_c2h_data ...passed
00:07:48.073    Test: test_nvmf_tcp_h2c_data_hdr_handle ...passed
00:07:48.073    Test: test_nvmf_tcp_in_capsule_data_handle ...passed
00:07:48.073    Test: test_nvmf_tcp_qpair_init_mem_resource ...passed
00:07:48.073    Test: test_nvmf_tcp_send_c2h_term_req ...[2024-11-19 16:51:40.794360] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1072:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2
00:07:48.073  passed
00:07:48.073    Test: test_nvmf_tcp_send_capsule_resp_pdu ...passed
00:07:48.073    Test: test_nvmf_tcp_icreq_handle ...[2024-11-19 16:51:40.794486] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fffad1a6eb0 is same with the state(5) to be set
00:07:48.073  [2024-11-19 16:51:40.794596] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fffad1a6eb0 is same with the state(5) to be set
00:07:48.073  [2024-11-19 16:51:40.794651] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1072:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2
00:07:48.073  [2024-11-19 16:51:40.794692] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fffad1a6eb0 is same with the state(5) to be set
00:07:48.073  [2024-11-19 16:51:40.794809] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2091:nvmf_tcp_icreq_handle: *ERROR*: Expected ICReq PFV 0, got 1
00:07:48.073  [2024-11-19 16:51:40.794939] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1072:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2
00:07:48.073  [2024-11-19 16:51:40.795007] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fffad1a6eb0 is same with the state(5) to be set
00:07:48.073  [2024-11-19 16:51:40.795049] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2091:nvmf_tcp_icreq_handle: *ERROR*: Expected ICReq PFV 0, got 1
00:07:48.073  [2024-11-19 16:51:40.795099] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fffad1a6eb0 is same with the state(5) to be set
00:07:48.073  [2024-11-19 16:51:40.795141] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1072:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2
00:07:48.073  [2024-11-19 16:51:40.795181] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fffad1a6eb0 is same with the state(5) to be set
00:07:48.073  passed
00:07:48.073    Test: test_nvmf_tcp_check_xfer_type ...passed
00:07:48.073    Test: test_nvmf_tcp_invalid_sgl ...[2024-11-19 16:51:40.795228] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1072:_tcp_write_pdu: *ERROR*: Could not write IC_RESP to socket: rc=0, errno=2
00:07:48.073  [2024-11-19 16:51:40.795301] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fffad1a6eb0 is same with the state(5) to be set
00:07:48.073  [2024-11-19 16:51:40.795382] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2486:nvmf_tcp_req_parse_sgl: *ERROR*: SGL length 0x1001 exceeds max io size 0x1000
00:07:48.073  [2024-11-19 16:51:40.795427] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1072:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2
00:07:48.073  [2024-11-19 16:51:40.795463] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fffad1a6eb0 is same with the state(5) to be set
00:07:48.073  passed
00:07:48.073    Test: test_nvmf_tcp_pdu_ch_handle ...[2024-11-19 16:51:40.795519] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2218:nvmf_tcp_pdu_ch_handle: *ERROR*: Already received ICreq PDU, and reject this pdu=0x7fffad1a7c10
00:07:48.073  [2024-11-19 16:51:40.795617] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1072:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2
00:07:48.073  [2024-11-19 16:51:40.795681] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fffad1a7370 is same with the state(5) to be set
00:07:48.073  [2024-11-19 16:51:40.795738] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2275:nvmf_tcp_pdu_ch_handle: *ERROR*: PDU type=0x00, Expected ICReq header length 128, got 0 on tqpair=0x7fffad1a7370
00:07:48.073  [2024-11-19 16:51:40.795785] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1072:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2
00:07:48.073  [2024-11-19 16:51:40.795833] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fffad1a7370 is same with the state(5) to be set
00:07:48.073  [2024-11-19 16:51:40.795875] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2228:nvmf_tcp_pdu_ch_handle: *ERROR*: The TCP/IP connection is not negotiated
00:07:48.073  [2024-11-19 16:51:40.795920] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1072:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2
00:07:48.073  [2024-11-19 16:51:40.795986] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fffad1a7370 is same with the state(5) to be set
00:07:48.073  [2024-11-19 16:51:40.796044] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2267:nvmf_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x05
00:07:48.073  [2024-11-19 16:51:40.796091] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1072:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2
00:07:48.073  [2024-11-19 16:51:40.796132] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fffad1a7370 is same with the state(5) to be set
00:07:48.073  [2024-11-19 16:51:40.796175] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1072:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2
00:07:48.073  [2024-11-19 16:51:40.796223] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fffad1a7370 is same with the state(5) to be set
00:07:48.073  [2024-11-19 16:51:40.796298] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1072:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2
00:07:48.073  [2024-11-19 16:51:40.796345] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fffad1a7370 is same with the state(5) to be set
00:07:48.073  [2024-11-19 16:51:40.796410] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1072:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2
00:07:48.073  [2024-11-19 16:51:40.796451] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fffad1a7370 is same with the state(5) to be set
00:07:48.073  [2024-11-19 16:51:40.796496] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1072:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2
00:07:48.073  [2024-11-19 16:51:40.796537] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fffad1a7370 is same with the state(5) to be set
00:07:48.073  [2024-11-19 16:51:40.796596] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1072:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2
00:07:48.073  [2024-11-19 16:51:40.796637] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fffad1a7370 is same with the state(5) to be set
00:07:48.073  passed
00:07:48.073    Test: test_nvmf_tcp_tls_add_remove_credentials ...[2024-11-19 16:51:40.796694] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1072:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2
00:07:48.073  [2024-11-19 16:51:40.796735] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fffad1a7370 is same with the state(5) to be set
00:07:48.073  passed
00:07:48.073    Test: test_nvmf_tcp_tls_generate_psk_id ...[2024-11-19 16:51:40.824497] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 591:nvme_tcp_generate_psk_identity: *ERROR*: Out buffer too small!
00:07:48.073  [2024-11-19 16:51:40.824615] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 602:nvme_tcp_generate_psk_identity: *ERROR*: Unknown cipher suite requested!
00:07:48.073  passed
00:07:48.073    Test: test_nvmf_tcp_tls_generate_retained_psk ...[2024-11-19 16:51:40.825053] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 658:nvme_tcp_derive_retained_psk: *ERROR*: Unknown PSK hash requested!
00:07:48.074  [2024-11-19 16:51:40.825115] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 663:nvme_tcp_derive_retained_psk: *ERROR*: Insufficient buffer size for out key!
00:07:48.074  passed
00:07:48.074    Test: test_nvmf_tcp_tls_generate_tls_psk ...[2024-11-19 16:51:40.825362] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 732:nvme_tcp_derive_tls_psk: *ERROR*: Unknown cipher suite requested!
00:07:48.074  [2024-11-19 16:51:40.825422] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 756:nvme_tcp_derive_tls_psk: *ERROR*: Insufficient buffer size for out key!
00:07:48.074  passed
00:07:48.074  
00:07:48.074  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:48.074                suites      1      1    n/a      0        0
00:07:48.074                 tests     17     17     17      0        0
00:07:48.074               asserts    222    222    222      0      n/a
00:07:48.074  
00:07:48.074  Elapsed time =    0.184 seconds
00:07:48.074   16:51:40	-- unit/unittest.sh@111 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/nvmf.c/nvmf_ut
00:07:48.333  
00:07:48.333  
00:07:48.333       CUnit - A unit testing framework for C - Version 2.1-3
00:07:48.333       http://cunit.sourceforge.net/
00:07:48.333  
00:07:48.333  
00:07:48.333  Suite: nvmf
00:07:48.333    Test: test_nvmf_tgt_create_poll_group ...passed
00:07:48.333  
00:07:48.333  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:48.333                suites      1      1    n/a      0        0
00:07:48.333                 tests      1      1      1      0        0
00:07:48.333               asserts     17     17     17      0      n/a
00:07:48.333  
00:07:48.333  Elapsed time =    0.029 seconds
00:07:48.333  
00:07:48.333  real	0m0.641s
00:07:48.333  user	0m0.290s
00:07:48.333  sys	0m0.354s
00:07:48.333   16:51:41	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:07:48.333   16:51:41	-- common/autotest_common.sh@10 -- # set +x
00:07:48.333  ************************************
00:07:48.333  END TEST unittest_nvmf
00:07:48.333  ************************************
00:07:48.333   16:51:41	-- unit/unittest.sh@236 -- # grep -q '#define SPDK_CONFIG_FC 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h
00:07:48.333   16:51:41	-- unit/unittest.sh@241 -- # grep -q '#define SPDK_CONFIG_RDMA 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h
00:07:48.333   16:51:41	-- unit/unittest.sh@242 -- # run_test unittest_nvmf_rdma /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/rdma.c/rdma_ut
00:07:48.333   16:51:41	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:07:48.333   16:51:41	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:07:48.333   16:51:41	-- common/autotest_common.sh@10 -- # set +x
00:07:48.333  ************************************
00:07:48.333  START TEST unittest_nvmf_rdma
00:07:48.333  ************************************
00:07:48.333   16:51:41	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/rdma.c/rdma_ut
00:07:48.333  
00:07:48.333  
00:07:48.333       CUnit - A unit testing framework for C - Version 2.1-3
00:07:48.333       http://cunit.sourceforge.net/
00:07:48.333  
00:07:48.333  
00:07:48.333  Suite: nvmf
00:07:48.333    Test: test_spdk_nvmf_rdma_request_parse_sgl ...[2024-11-19 16:51:41.149969] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1916:nvmf_rdma_request_parse_sgl: *ERROR*: SGL length 0x40000 exceeds max io size 0x20000
00:07:48.333  [2024-11-19 16:51:41.150376] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1966:nvmf_rdma_request_parse_sgl: *ERROR*: In-capsule data length 0x1000 exceeds capsule length 0x0
00:07:48.333  [2024-11-19 16:51:41.150435] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1966:nvmf_rdma_request_parse_sgl: *ERROR*: In-capsule data length 0x2000 exceeds capsule length 0x1000
00:07:48.333  passed
00:07:48.333    Test: test_spdk_nvmf_rdma_request_process ...passed
00:07:48.333    Test: test_nvmf_rdma_get_optimal_poll_group ...passed
00:07:48.333    Test: test_spdk_nvmf_rdma_request_parse_sgl_with_md ...passed
00:07:48.333    Test: test_nvmf_rdma_opts_init ...passed
00:07:48.333    Test: test_nvmf_rdma_request_free_data ...passed
00:07:48.333    Test: test_nvmf_rdma_update_ibv_state ...[2024-11-19 16:51:41.151923] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c: 616:nvmf_rdma_update_ibv_state: *ERROR*: Failed to get updated RDMA queue pair state!
00:07:48.333  [2024-11-19 16:51:41.151988] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c: 627:nvmf_rdma_update_ibv_state: *ERROR*: QP#0: bad state updated: 10, maybe hardware issue
00:07:48.333  passed
00:07:48.333    Test: test_nvmf_rdma_resources_create ...passed
00:07:48.333    Test: test_nvmf_rdma_qpair_compare ...passed
00:07:48.333    Test: test_nvmf_rdma_resize_cq ...[2024-11-19 16:51:41.153537] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1008:nvmf_rdma_resize_cq: *ERROR*: iWARP doesn't support CQ resize. Current capacity 20, required 0
00:07:48.333  Using CQ of insufficient size may lead to CQ overrun
00:07:48.333  [2024-11-19 16:51:41.153686] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1013:nvmf_rdma_resize_cq: *ERROR*: RDMA CQE requirement (26) exceeds device max_cqe limitation (3)
00:07:48.333  [2024-11-19 16:51:41.153744] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1021:nvmf_rdma_resize_cq: *ERROR*: RDMA CQ resize failed: errno 2: No such file or directory
00:07:48.333  passed
00:07:48.333  
00:07:48.333  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:48.333                suites      1      1    n/a      0        0
00:07:48.333                 tests     10     10     10      0        0
00:07:48.333               asserts    584    584    584      0      n/a
00:07:48.333  
00:07:48.333  Elapsed time =    0.004 seconds
00:07:48.333  
00:07:48.333  real	0m0.053s
00:07:48.333  user	0m0.033s
00:07:48.333  sys	0m0.020s
00:07:48.333   16:51:41	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:07:48.333   16:51:41	-- common/autotest_common.sh@10 -- # set +x
00:07:48.333  ************************************
00:07:48.333  END TEST unittest_nvmf_rdma
00:07:48.333  ************************************
00:07:48.594   16:51:41	-- unit/unittest.sh@245 -- # grep -q '#define SPDK_CONFIG_VFIO_USER 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h
00:07:48.594   16:51:41	-- unit/unittest.sh@249 -- # run_test unittest_scsi unittest_scsi
00:07:48.595   16:51:41	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:07:48.595   16:51:41	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:07:48.595   16:51:41	-- common/autotest_common.sh@10 -- # set +x
00:07:48.595  ************************************
00:07:48.595  START TEST unittest_scsi
00:07:48.595  ************************************
00:07:48.595   16:51:41	-- common/autotest_common.sh@1114 -- # unittest_scsi
00:07:48.595   16:51:41	-- unit/unittest.sh@115 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/dev.c/dev_ut
00:07:48.595  
00:07:48.595  
00:07:48.595       CUnit - A unit testing framework for C - Version 2.1-3
00:07:48.595       http://cunit.sourceforge.net/
00:07:48.595  
00:07:48.595  
00:07:48.595  Suite: dev_suite
00:07:48.595    Test: dev_destruct_null_dev ...passed
00:07:48.595    Test: dev_destruct_zero_luns ...passed
00:07:48.595    Test: dev_destruct_null_lun ...passed
00:07:48.595    Test: dev_destruct_success ...passed
00:07:48.595    Test: dev_construct_num_luns_zero ...[2024-11-19 16:51:41.266154] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 228:spdk_scsi_dev_construct_ext: *ERROR*: device Name: no LUNs specified
00:07:48.595  passed
00:07:48.595    Test: dev_construct_no_lun_zero ...[2024-11-19 16:51:41.266558] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 241:spdk_scsi_dev_construct_ext: *ERROR*: device Name: no LUN 0 specified
00:07:48.595  passed
00:07:48.595    Test: dev_construct_null_lun ...[2024-11-19 16:51:41.266649] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 247:spdk_scsi_dev_construct_ext: *ERROR*: NULL spdk_scsi_lun for LUN 0
00:07:48.595  passed
00:07:48.595    Test: dev_construct_name_too_long ...
00:07:48.595  [2024-11-19 16:51:41.266722] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 222:spdk_scsi_dev_construct_ext: *ERROR*: device xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx: name longer than maximum allowed length 255
00:07:48.595  passed
00:07:48.595    Test: dev_construct_success ...passed
00:07:48.595    Test: dev_construct_success_lun_zero_not_first ...passed
00:07:48.595    Test: dev_queue_mgmt_task_success ...passed
00:07:48.595    Test: dev_queue_task_success ...passed
00:07:48.595    Test: dev_stop_success ...passed
00:07:48.595    Test: dev_add_port_max_ports ...[2024-11-19 16:51:41.267203] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 315:spdk_scsi_dev_add_port: *ERROR*: device already has 4 ports
00:07:48.595  passed
00:07:48.595    Test: dev_add_port_construct_failure1 ...[2024-11-19 16:51:41.267362] /home/vagrant/spdk_repo/spdk/lib/scsi/port.c:  49:scsi_port_construct: *ERROR*: port name too long
00:07:48.595  passed
00:07:48.595    Test: dev_add_port_construct_failure2 ...passed
00:07:48.595    Test: dev_add_port_success1 ...[2024-11-19 16:51:41.267507] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 321:spdk_scsi_dev_add_port: *ERROR*: device already has port(1)
00:07:48.595  passed
00:07:48.595    Test: dev_add_port_success2 ...passed
00:07:48.595    Test: dev_add_port_success3 ...passed
00:07:48.595    Test: dev_find_port_by_id_num_ports_zero ...passed
00:07:48.595    Test: dev_find_port_by_id_id_not_found_failure ...passed
00:07:48.595    Test: dev_find_port_by_id_success ...passed
00:07:48.595    Test: dev_add_lun_bdev_not_found ...passed
00:07:48.595    Test: dev_add_lun_no_free_lun_id ...[2024-11-19 16:51:41.268156] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 159:spdk_scsi_dev_add_lun_ext: *ERROR*: Free LUN ID is not found
00:07:48.595  passed
00:07:48.595    Test: dev_add_lun_success1 ...passed
00:07:48.595    Test: dev_add_lun_success2 ...passed
00:07:48.595    Test: dev_check_pending_tasks ...passed
00:07:48.595    Test: dev_iterate_luns ...passed
00:07:48.595    Test: dev_find_free_lun ...passed
00:07:48.595  
00:07:48.595  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:48.595                suites      1      1    n/a      0        0
00:07:48.595                 tests     29     29     29      0        0
00:07:48.595               asserts     97     97     97      0      n/a
00:07:48.595  
00:07:48.595  Elapsed time =    0.003 seconds
00:07:48.595   16:51:41	-- unit/unittest.sh@116 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/lun.c/lun_ut
00:07:48.595  
00:07:48.595  
00:07:48.595       CUnit - A unit testing framework for C - Version 2.1-3
00:07:48.595       http://cunit.sourceforge.net/
00:07:48.595  
00:07:48.595  
00:07:48.595  Suite: lun_suite
00:07:48.595    Test: lun_task_mgmt_execute_abort_task_not_supported ...[2024-11-19 16:51:41.313380] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 169:_scsi_lun_execute_mgmt_task: *ERROR*: abort task not supported
00:07:48.595  passed
00:07:48.595    Test: lun_task_mgmt_execute_abort_task_all_not_supported ...[2024-11-19 16:51:41.313784] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 169:_scsi_lun_execute_mgmt_task: *ERROR*: abort task set not supported
00:07:48.595  passed
00:07:48.595    Test: lun_task_mgmt_execute_lun_reset ...passed
00:07:48.595    Test: lun_task_mgmt_execute_target_reset ...passed
00:07:48.595    Test: lun_task_mgmt_execute_invalid_case ...[2024-11-19 16:51:41.313977] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 169:_scsi_lun_execute_mgmt_task: *ERROR*: unknown task not supported
00:07:48.595  passed
00:07:48.595    Test: lun_append_task_null_lun_task_cdb_spc_inquiry ...passed
00:07:48.595    Test: lun_append_task_null_lun_alloc_len_lt_4096 ...passed
00:07:48.595    Test: lun_append_task_null_lun_not_supported ...passed
00:07:48.595    Test: lun_execute_scsi_task_pending ...passed
00:07:48.595    Test: lun_execute_scsi_task_complete ...passed
00:07:48.595    Test: lun_execute_scsi_task_resize ...passed
00:07:48.595    Test: lun_destruct_success ...passed
00:07:48.595    Test: lun_construct_null_ctx ...[2024-11-19 16:51:41.314190] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 432:scsi_lun_construct: *ERROR*: bdev_name must be non-NULL
00:07:48.595  passed
00:07:48.595    Test: lun_construct_success ...passed
00:07:48.595    Test: lun_reset_task_wait_scsi_task_complete ...passed
00:07:48.595    Test: lun_reset_task_suspend_scsi_task ...passed
00:07:48.595    Test: lun_check_pending_tasks_only_for_specific_initiator ...passed
00:07:48.595    Test: abort_pending_mgmt_tasks_when_lun_is_removed ...passed
00:07:48.595  
00:07:48.595  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:48.595                suites      1      1    n/a      0        0
00:07:48.595                 tests     18     18     18      0        0
00:07:48.595               asserts    153    153    153      0      n/a
00:07:48.595  
00:07:48.595  Elapsed time =    0.001 seconds
00:07:48.595   16:51:41	-- unit/unittest.sh@117 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/scsi.c/scsi_ut
00:07:48.595  
00:07:48.595  
00:07:48.595       CUnit - A unit testing framework for C - Version 2.1-3
00:07:48.595       http://cunit.sourceforge.net/
00:07:48.595  
00:07:48.595  
00:07:48.595  Suite: scsi_suite
00:07:48.595    Test: scsi_init ...passed
00:07:48.595  
00:07:48.595  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:48.595                suites      1      1    n/a      0        0
00:07:48.595                 tests      1      1      1      0        0
00:07:48.595               asserts      1      1      1      0      n/a
00:07:48.595  
00:07:48.595  Elapsed time =    0.000 seconds
00:07:48.595   16:51:41	-- unit/unittest.sh@118 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/scsi_bdev.c/scsi_bdev_ut
00:07:48.595  
00:07:48.595  
00:07:48.595       CUnit - A unit testing framework for C - Version 2.1-3
00:07:48.595       http://cunit.sourceforge.net/
00:07:48.595  
00:07:48.595  
00:07:48.595  Suite: translation_suite
00:07:48.595    Test: mode_select_6_test ...passed
00:07:48.595    Test: mode_select_6_test2 ...passed
00:07:48.595    Test: mode_sense_6_test ...passed
00:07:48.595    Test: mode_sense_10_test ...passed
00:07:48.595    Test: inquiry_evpd_test ...passed
00:07:48.595    Test: inquiry_standard_test ...passed
00:07:48.595    Test: inquiry_overflow_test ...passed
00:07:48.595    Test: task_complete_test ...passed
00:07:48.595    Test: lba_range_test ...passed
00:07:48.595    Test: xfer_len_test ...[2024-11-19 16:51:41.397182] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_bdev.c:1270:bdev_scsi_readwrite: *ERROR*: xfer_len 8193 > maximum transfer length 8192
00:07:48.595  passed
00:07:48.595    Test: xfer_test ...passed
00:07:48.595    Test: scsi_name_padding_test ...passed
00:07:48.595    Test: get_dif_ctx_test ...passed
00:07:48.595    Test: unmap_split_test ...passed
00:07:48.595  
00:07:48.595  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:48.595                suites      1      1    n/a      0        0
00:07:48.595                 tests     14     14     14      0        0
00:07:48.595               asserts   1200   1200   1200      0      n/a
00:07:48.595  
00:07:48.595  Elapsed time =    0.004 seconds
00:07:48.595   16:51:41	-- unit/unittest.sh@119 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/scsi_pr.c/scsi_pr_ut
00:07:48.595  
00:07:48.595  
00:07:48.595       CUnit - A unit testing framework for C - Version 2.1-3
00:07:48.595       http://cunit.sourceforge.net/
00:07:48.595  
00:07:48.595  
00:07:48.595  Suite: reservation_suite
00:07:48.595    Test: test_reservation_register ...[2024-11-19 16:51:41.438479] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa
00:07:48.595  passed
00:07:48.595    Test: test_reservation_reserve ...[2024-11-19 16:51:41.439096] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa
00:07:48.595  [2024-11-19 16:51:41.439216] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 209:scsi_pr_out_reserve: *ERROR*: Only 1 holder is allowed for type 1
00:07:48.595  passed
00:07:48.595    Test: test_reservation_preempt_non_all_regs ...[2024-11-19 16:51:41.439384] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 204:scsi_pr_out_reserve: *ERROR*: Reservation type doesn't match
00:07:48.595  [2024-11-19 16:51:41.439503] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa
00:07:48.595  [2024-11-19 16:51:41.439620] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 458:scsi_pr_out_preempt: *ERROR*: Zeroed sa_rkey
00:07:48.595  passed
00:07:48.595    Test: test_reservation_preempt_all_regs ...[2024-11-19 16:51:41.439826] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa
00:07:48.595  passed
00:07:48.595    Test: test_reservation_cmds_conflict ...[2024-11-19 16:51:41.440053] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa
00:07:48.595  [2024-11-19 16:51:41.440169] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 851:scsi_pr_check: *ERROR*: CHECK: Registrants only reservation type  reject command 0x2a
00:07:48.595  [2024-11-19 16:51:41.440260] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 845:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x28
00:07:48.596  [2024-11-19 16:51:41.440336] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 845:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x2a
00:07:48.596  [2024-11-19 16:51:41.440414] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 845:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x28
00:07:48.596  [2024-11-19 16:51:41.440474] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 845:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x2a
00:07:48.596  passed
00:07:48.596    Test: test_scsi2_reserve_release ...passed
00:07:48.596    Test: test_pr_with_scsi2_reserve_release ...[2024-11-19 16:51:41.440655] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa
00:07:48.596  passed
00:07:48.596  
00:07:48.596  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:48.596                suites      1      1    n/a      0        0
00:07:48.596                 tests      7      7      7      0        0
00:07:48.596               asserts    257    257    257      0      n/a
00:07:48.596  
00:07:48.596  Elapsed time =    0.002 seconds
00:07:48.855  
00:07:48.855  real	0m0.214s
00:07:48.855  user	0m0.071s
00:07:48.855  sys	0m0.145s
00:07:48.855   16:51:41	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:07:48.855  ************************************
00:07:48.855  END TEST unittest_scsi
00:07:48.855  ************************************
00:07:48.855   16:51:41	-- common/autotest_common.sh@10 -- # set +x
00:07:48.855    16:51:41	-- unit/unittest.sh@252 -- # uname -s
00:07:48.855   16:51:41	-- unit/unittest.sh@252 -- # '[' Linux = Linux ']'
00:07:48.855   16:51:41	-- unit/unittest.sh@253 -- # run_test unittest_sock unittest_sock
00:07:48.855   16:51:41	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:07:48.855   16:51:41	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:07:48.855   16:51:41	-- common/autotest_common.sh@10 -- # set +x
00:07:48.855  ************************************
00:07:48.855  START TEST unittest_sock
00:07:48.855  ************************************
00:07:48.855   16:51:41	-- common/autotest_common.sh@1114 -- # unittest_sock
00:07:48.855   16:51:41	-- unit/unittest.sh@123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/sock/sock.c/sock_ut
00:07:48.855  
00:07:48.855  
00:07:48.855       CUnit - A unit testing framework for C - Version 2.1-3
00:07:48.855       http://cunit.sourceforge.net/
00:07:48.855  
00:07:48.855  
00:07:48.855  Suite: sock
00:07:48.855    Test: posix_sock ...passed
00:07:48.855    Test: ut_sock ...passed
00:07:48.855    Test: posix_sock_group ...passed
00:07:48.855    Test: ut_sock_group ...passed
00:07:48.855    Test: posix_sock_group_fairness ...passed
00:07:48.855    Test: _posix_sock_close ...passed
00:07:48.855    Test: sock_get_default_opts ...passed
00:07:48.855    Test: ut_sock_impl_get_set_opts ...passed
00:07:48.855    Test: posix_sock_impl_get_set_opts ...passed
00:07:48.855    Test: ut_sock_map ...passed
00:07:48.855    Test: override_impl_opts ...passed
00:07:48.855    Test: ut_sock_group_get_ctx ...passed
00:07:48.855  
00:07:48.855  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:48.855                suites      1      1    n/a      0        0
00:07:48.855                 tests     12     12     12      0        0
00:07:48.855               asserts    349    349    349      0      n/a
00:07:48.855  
00:07:48.855  Elapsed time =    0.008 seconds
00:07:48.855   16:51:41	-- unit/unittest.sh@124 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/sock/posix.c/posix_ut
00:07:48.855  
00:07:48.855  
00:07:48.855       CUnit - A unit testing framework for C - Version 2.1-3
00:07:48.855       http://cunit.sourceforge.net/
00:07:48.855  
00:07:48.855  
00:07:48.855  Suite: posix
00:07:48.855    Test: flush ...passed
00:07:48.855  
00:07:48.855  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:48.855                suites      1      1    n/a      0        0
00:07:48.855                 tests      1      1      1      0        0
00:07:48.855               asserts     28     28     28      0      n/a
00:07:48.855  
00:07:48.855  Elapsed time =    0.000 seconds
00:07:48.855   16:51:41	-- unit/unittest.sh@126 -- # grep -q '#define SPDK_CONFIG_URING 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h
00:07:48.855  
00:07:48.855  real	0m0.120s
00:07:48.855  user	0m0.056s
00:07:48.855  sys	0m0.041s
00:07:48.855   16:51:41	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:07:48.855   16:51:41	-- common/autotest_common.sh@10 -- # set +x
00:07:48.855  ************************************
00:07:48.855  END TEST unittest_sock
00:07:48.855  ************************************
00:07:48.855   16:51:41	-- unit/unittest.sh@255 -- # run_test unittest_thread /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/thread.c/thread_ut
00:07:48.855   16:51:41	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:07:48.855   16:51:41	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:07:48.855   16:51:41	-- common/autotest_common.sh@10 -- # set +x
00:07:48.855  ************************************
00:07:48.855  START TEST unittest_thread
00:07:48.855  ************************************
00:07:48.855   16:51:41	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/thread.c/thread_ut
00:07:49.116  
00:07:49.116  
00:07:49.116       CUnit - A unit testing framework for C - Version 2.1-3
00:07:49.116       http://cunit.sourceforge.net/
00:07:49.116  
00:07:49.116  
00:07:49.116  Suite: io_channel
00:07:49.116    Test: thread_alloc ...passed
00:07:49.116    Test: thread_send_msg ...passed
00:07:49.116    Test: thread_poller ...passed
00:07:49.116    Test: poller_pause ...passed
00:07:49.116    Test: thread_for_each ...passed
00:07:49.116    Test: for_each_channel_remove ...passed
00:07:49.116    Test: for_each_channel_unreg ...[2024-11-19 16:51:41.751026] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:2165:spdk_io_device_register: *ERROR*: io_device 0x7fff58709260 already registered (old:0x613000000200 new:0x6130000003c0)
00:07:49.116  passed
00:07:49.116    Test: thread_name ...passed
00:07:49.116    Test: channel ...[2024-11-19 16:51:41.755432] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:2299:spdk_get_io_channel: *ERROR*: could not find io_device 0x555a619780e0
00:07:49.116  passed
00:07:49.116    Test: channel_destroy_races ...passed
00:07:49.116    Test: thread_exit_test ...[2024-11-19 16:51:41.760848] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 631:thread_exit: *ERROR*: thread 0x618000005c80 got timeout, and move it to the exited state forcefully
00:07:49.116  passed
00:07:49.116    Test: thread_update_stats_test ...passed
00:07:49.116    Test: nested_channel ...passed
00:07:49.116    Test: device_unregister_and_thread_exit_race ...passed
00:07:49.116    Test: cache_closest_timed_poller ...passed
00:07:49.116    Test: multi_timed_pollers_have_same_expiration ...passed
00:07:49.116    Test: io_device_lookup ...passed
00:07:49.116    Test: spdk_spin ...[2024-11-19 16:51:41.772313] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3063:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 1: Not an SPDK thread (thread != ((void *)0))
00:07:49.116  [2024-11-19 16:51:41.772401] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3019:sspin_stacks_print: *ERROR*: spinlock 0x7fff58709250
00:07:49.116  [2024-11-19 16:51:41.772513] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3101:spdk_spin_held: *ERROR*: unrecoverable spinlock error 1: Not an SPDK thread (thread != ((void *)0))
00:07:49.116  [2024-11-19 16:51:41.774277] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3064:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 2: Deadlock detected (thread != sspin->thread)
00:07:49.116  [2024-11-19 16:51:41.774382] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3019:sspin_stacks_print: *ERROR*: spinlock 0x7fff58709250
00:07:49.116  [2024-11-19 16:51:41.774419] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3084:spdk_spin_unlock: *ERROR*: unrecoverable spinlock error 3: Unlock on wrong SPDK thread (thread == sspin->thread)
00:07:49.116  [2024-11-19 16:51:41.774461] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3019:sspin_stacks_print: *ERROR*: spinlock 0x7fff58709250
00:07:49.116  [2024-11-19 16:51:41.774505] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3084:spdk_spin_unlock: *ERROR*: unrecoverable spinlock error 3: Unlock on wrong SPDK thread (thread == sspin->thread)
00:07:49.116  [2024-11-19 16:51:41.774557] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3019:sspin_stacks_print: *ERROR*: spinlock 0x7fff58709250
00:07:49.116  [2024-11-19 16:51:41.774594] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3045:spdk_spin_destroy: *ERROR*: unrecoverable spinlock error 5: Destroying a held spinlock (sspin->thread == ((void *)0))
00:07:49.116  [2024-11-19 16:51:41.774649] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3019:sspin_stacks_print: *ERROR*: spinlock 0x7fff58709250
00:07:49.116  passed
00:07:49.116    Test: for_each_channel_and_thread_exit_race ...passed
00:07:49.116    Test: for_each_thread_and_thread_exit_race ...passed
00:07:49.116  
00:07:49.116  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:49.116                suites      1      1    n/a      0        0
00:07:49.116                 tests     20     20     20      0        0
00:07:49.116               asserts    409    409    409      0      n/a
00:07:49.116  
00:07:49.116  Elapsed time =    0.053 seconds
00:07:49.116  
00:07:49.116  real	0m0.111s
00:07:49.116  user	0m0.079s
00:07:49.116  sys	0m0.032s
00:07:49.116   16:51:41	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:07:49.116   16:51:41	-- common/autotest_common.sh@10 -- # set +x
00:07:49.116  ************************************
00:07:49.116  END TEST unittest_thread
00:07:49.116  ************************************
00:07:49.116   16:51:41	-- unit/unittest.sh@256 -- # run_test unittest_iobuf /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/iobuf.c/iobuf_ut
00:07:49.116   16:51:41	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:07:49.116   16:51:41	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:07:49.116   16:51:41	-- common/autotest_common.sh@10 -- # set +x
00:07:49.116  ************************************
00:07:49.116  START TEST unittest_iobuf
00:07:49.116  ************************************
00:07:49.116   16:51:41	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/iobuf.c/iobuf_ut
00:07:49.116  
00:07:49.116  
00:07:49.116       CUnit - A unit testing framework for C - Version 2.1-3
00:07:49.116       http://cunit.sourceforge.net/
00:07:49.116  
00:07:49.116  
00:07:49.116  Suite: io_channel
00:07:49.116    Test: iobuf ...passed
00:07:49.116    Test: iobuf_cache ...[2024-11-19 16:51:41.908578] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 302:spdk_iobuf_channel_init: *ERROR*: Failed to populate iobuf small buffer cache. You may need to increase spdk_iobuf_opts.small_pool_count (4)
00:07:49.116  [2024-11-19 16:51:41.909041] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 305:spdk_iobuf_channel_init: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value.
00:07:49.116  [2024-11-19 16:51:41.909222] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 314:spdk_iobuf_channel_init: *ERROR*: Failed to populate iobuf large buffer cache. You may need to increase spdk_iobuf_opts.large_pool_count (4)
00:07:49.116  [2024-11-19 16:51:41.909287] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 317:spdk_iobuf_channel_init: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value.
00:07:49.116  [2024-11-19 16:51:41.909397] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 302:spdk_iobuf_channel_init: *ERROR*: Failed to populate iobuf small buffer cache. You may need to increase spdk_iobuf_opts.small_pool_count (4)
00:07:49.116  [2024-11-19 16:51:41.909458] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 305:spdk_iobuf_channel_init: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value.
00:07:49.116  passed
00:07:49.116  
00:07:49.116  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:49.116                suites      1      1    n/a      0        0
00:07:49.116                 tests      2      2      2      0        0
00:07:49.116               asserts    107    107    107      0      n/a
00:07:49.116  
00:07:49.116  Elapsed time =    0.007 seconds
00:07:49.116  
00:07:49.116  real	0m0.053s
00:07:49.116  user	0m0.021s
00:07:49.116  sys	0m0.033s
00:07:49.116   16:51:41	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:07:49.116   16:51:41	-- common/autotest_common.sh@10 -- # set +x
00:07:49.116  ************************************
00:07:49.116  END TEST unittest_iobuf
00:07:49.116  ************************************
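
The iobuf_cache errors above are injected on purpose: the test configures a backing pool of only 4 buffers so that per-channel cache prefill fails and the warning path is exercised. As a hedged sketch of that pattern only (the names pool, cache, and cache_init below are hypothetical and are not SPDK's iobuf API):

```c
#include <stdio.h>
#include <stdlib.h>

struct pool  { size_t total, avail; };   /* shared backing pool        */
struct cache { size_t want, filled; };   /* per-thread prefill target  */

/* Prefill a channel cache from the shared pool; when the pool is too
 * small the cache comes up short and init logs a warning, mirroring
 * the "small_pool_count (4)" messages in the log above. */
static int cache_init(struct cache *c, struct pool *p, size_t want)
{
    c->want = want;
    for (c->filled = 0; c->filled < want; c->filled++) {
        if (p->avail == 0) {
            fprintf(stderr, "Failed to populate buffer cache. You may "
                            "need to increase pool_count (%zu)\n", p->total);
            return -1;   /* channel still works, refilling on demand */
        }
        p->avail--;
    }
    return 0;
}

int main(void)
{
    struct pool p = { .total = 4, .avail = 4 };
    struct cache a, b;
    cache_init(&a, &p, 4);   /* drains the pool */
    cache_init(&b, &p, 4);   /* second channel cannot prefill, as logged */
    return 0;
}
```
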
00:07:49.381   16:51:41	-- unit/unittest.sh@257 -- # run_test unittest_util unittest_util
00:07:49.381   16:51:41	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:07:49.381   16:51:41	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:07:49.381   16:51:41	-- common/autotest_common.sh@10 -- # set +x
00:07:49.381  ************************************
00:07:49.381  START TEST unittest_util
00:07:49.381  ************************************
00:07:49.381   16:51:41	-- common/autotest_common.sh@1114 -- # unittest_util
00:07:49.381   16:51:41	-- unit/unittest.sh@132 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/base64.c/base64_ut
00:07:49.381  
00:07:49.381  
00:07:49.381       CUnit - A unit testing framework for C - Version 2.1-3
00:07:49.381       http://cunit.sourceforge.net/
00:07:49.381  
00:07:49.381  
00:07:49.381  Suite: base64
00:07:49.381    Test: test_base64_get_encoded_strlen ...passed
00:07:49.381    Test: test_base64_get_decoded_len ...passed
00:07:49.381    Test: test_base64_encode ...passed
00:07:49.381    Test: test_base64_decode ...passed
00:07:49.381    Test: test_base64_urlsafe_encode ...passed
00:07:49.381    Test: test_base64_urlsafe_decode ...passed
00:07:49.381  
00:07:49.381  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:49.381                suites      1      1    n/a      0        0
00:07:49.381                 tests      6      6      6      0        0
00:07:49.381               asserts    112    112    112      0      n/a
00:07:49.381  
00:07:49.381  Elapsed time =    0.000 seconds
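
The two strlen tests in this suite come down to fixed arithmetic: base64 maps every 3-byte group (rounded up) to 4 output characters. A minimal standalone sketch of that arithmetic (generic, not SPDK's function signatures):

```c
#include <stddef.h>

/* Encoded length of n raw bytes, excluding the NUL terminator. */
size_t base64_encoded_strlen(size_t n)
{
    return ((n + 2) / 3) * 4;
}

/* Upper bound on the decoded length of an m-character encoded string
 * (m a multiple of 4); '=' padding trims up to 2 bytes from this. */
size_t base64_decoded_len(size_t m)
{
    return (m / 4) * 3;
}
```
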
00:07:49.381   16:51:42	-- unit/unittest.sh@133 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/bit_array.c/bit_array_ut
00:07:49.381  
00:07:49.381  
00:07:49.381       CUnit - A unit testing framework for C - Version 2.1-3
00:07:49.381       http://cunit.sourceforge.net/
00:07:49.381  
00:07:49.381  
00:07:49.381  Suite: bit_array
00:07:49.381    Test: test_1bit ...passed
00:07:49.381    Test: test_64bit ...passed
00:07:49.381    Test: test_find ...passed
00:07:49.381    Test: test_resize ...passed
00:07:49.381    Test: test_errors ...passed
00:07:49.381    Test: test_count ...passed
00:07:49.381    Test: test_mask_store_load ...passed
00:07:49.381    Test: test_mask_clear ...passed
00:07:49.381  
00:07:49.381  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:49.381                suites      1      1    n/a      0        0
00:07:49.381                 tests      8      8      8      0        0
00:07:49.381               asserts   5075   5075   5075      0      n/a
00:07:49.381  
00:07:49.381  Elapsed time =    0.002 seconds
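
The operations this suite names (single-bit access, 64-bit word boundaries, popcount) all reduce to word-indexed masking. A minimal word-backed sketch, assuming bits past nbits in the last word are kept zero (again illustrative, not SPDK's bit_array implementation):

```c
#include <stddef.h>
#include <stdint.h>

struct bit_array { uint64_t *words; size_t nbits; };

static void ba_set(struct bit_array *ba, size_t i)
{
    ba->words[i / 64] |= UINT64_C(1) << (i % 64);
}

static int ba_get(const struct bit_array *ba, size_t i)
{
    return (int)((ba->words[i / 64] >> (i % 64)) & 1);
}

/* Population count over all words; relies on the zero-tail invariant. */
static size_t ba_count(const struct bit_array *ba)
{
    size_t n = 0;
    for (size_t w = 0; w < (ba->nbits + 63) / 64; w++)
        n += (size_t)__builtin_popcountll(ba->words[w]);
    return n;
}
```
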
00:07:49.381   16:51:42	-- unit/unittest.sh@134 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/cpuset.c/cpuset_ut
00:07:49.381  
00:07:49.381  
00:07:49.381       CUnit - A unit testing framework for C - Version 2.1-3
00:07:49.381       http://cunit.sourceforge.net/
00:07:49.381  
00:07:49.381  
00:07:49.381  Suite: cpuset
00:07:49.381    Test: test_cpuset ...passed
00:07:49.381    Test: test_cpuset_parse ...[2024-11-19 16:51:42.074446] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 239:parse_list: *ERROR*: Unexpected end of core list '['
00:07:49.381  [2024-11-19 16:51:42.074996] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 241:parse_list: *ERROR*: Parsing of core list '[]' failed on character ']'
00:07:49.381  [2024-11-19 16:51:42.075113] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 241:parse_list: *ERROR*: Parsing of core list '[10--11]' failed on character '-'
00:07:49.381  [2024-11-19 16:51:42.075487] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 219:parse_list: *ERROR*: Invalid range of CPUs (11 > 10)
00:07:49.381  [2024-11-19 16:51:42.075542] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 241:parse_list: *ERROR*: Parsing of core list '[10-11,]' failed on character ','
00:07:49.381  [2024-11-19 16:51:42.075744] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 241:parse_list: *ERROR*: Parsing of core list '[,10-11]' failed on character ','
00:07:49.381  [2024-11-19 16:51:42.076059] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 203:parse_list: *ERROR*: Core number 1025 is out of range in '[1025]'
00:07:49.381  [2024-11-19 16:51:42.076138] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 198:parse_list: *ERROR*: Conversion of core mask in '[184467440737095516150]' failed
00:07:49.381  passed
00:07:49.381    Test: test_cpuset_fmt ...passed
00:07:49.381  
00:07:49.381  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:49.381                suites      1      1    n/a      0        0
00:07:49.381                 tests      3      3      3      0        0
00:07:49.381               asserts     65     65     65      0      n/a
00:07:49.381  
00:07:49.381  Elapsed time =    0.004 seconds
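
Each injected error above corresponds to one malformed core-list input: an unterminated list, an empty list, a doubled dash, an inverted range, leading or trailing commas, an out-of-range core, and a value that overflows conversion. A hedged sketch of a parser that rejects exactly those shapes (not SPDK's parse_list(); the 1024 cap is an assumption, since the suite only shows that core 1025 is rejected):

```c
#include <errno.h>
#include <stdlib.h>
#include <string.h>

#define MAX_CORE 1024   /* assumed cap; the log only proves 1025 is invalid */

/* Parse a bracketed core list such as "[0-3,7]" into a presence map.
 * Returns 0 on success, -1 on the inputs the suite probes:
 * "[", "[]", "[10--11]", "[11-10]", "[10-11,]", "[,10-11]", "[1025]",
 * and values too large to convert. */
static int parse_core_list(const char *s, unsigned char present[MAX_CORE])
{
    if (*s++ != '[' || *s == ']')           /* missing list, or "[]" */
        return -1;
    memset(present, 0, MAX_CORE);
    while (*s != ']') {
        char *end;
        long lo, hi;
        if (*s == '\0' || *s == ',')        /* "[", "[,10-11]" */
            return -1;
        errno = 0;
        lo = hi = strtol(s, &end, 10);
        if (end == s || errno || lo < 0 || lo >= MAX_CORE)
            return -1;                      /* "[1025]", conversion overflow */
        s = end;
        if (*s == '-') {
            hi = strtol(++s, &end, 10);
            if (end == s || errno || hi < lo || hi >= MAX_CORE)
                return -1;                  /* "[10--11]", "[11-10]" */
            s = end;
        }
        while (lo <= hi)
            present[lo++] = 1;
        if (*s == ',') {
            if (*++s == ']')                /* "[10-11,]" */
                return -1;
        } else if (*s != ']') {
            return -1;                      /* stray character */
        }
    }
    return 0;
}
```
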
00:07:49.381   16:51:42	-- unit/unittest.sh@135 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc16.c/crc16_ut
00:07:49.381  
00:07:49.381  
00:07:49.381       CUnit - A unit testing framework for C - Version 2.1-3
00:07:49.381       http://cunit.sourceforge.net/
00:07:49.381  
00:07:49.381  
00:07:49.381  Suite: crc16
00:07:49.381    Test: test_crc16_t10dif ...passed
00:07:49.381    Test: test_crc16_t10dif_seed ...passed
00:07:49.381    Test: test_crc16_t10dif_copy ...passed
00:07:49.381  
00:07:49.381  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:49.381                suites      1      1    n/a      0        0
00:07:49.381                 tests      3      3      3      0        0
00:07:49.381               asserts      5      5      5      0      n/a
00:07:49.381  
00:07:49.381  Elapsed time =    0.000 seconds
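
The T10-DIF guard CRC exercised here is the well-known CRC-16/T10-DIF model: polynomial 0x8BB7, zero initial value, no reflection, no final XOR. A bitwise reference sketch (a table-driven or hardware-accelerated version would be used in practice):

```c
#include <stddef.h>
#include <stdint.h>

/* Bitwise CRC-16/T10-DIF. Check value: crc16_t10dif(0, "123456789", 9)
 * yields 0xD0DB per the standard CRC catalogue. */
uint16_t crc16_t10dif(uint16_t crc, const void *buf, size_t len)
{
    const uint8_t *p = buf;

    while (len--) {
        crc ^= (uint16_t)(*p++ << 8);
        for (int i = 0; i < 8; i++)
            crc = (crc & 0x8000) ? (uint16_t)((crc << 1) ^ 0x8BB7)
                                 : (uint16_t)(crc << 1);
    }
    return crc;
}
```
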
00:07:49.381   16:51:42	-- unit/unittest.sh@136 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc32_ieee.c/crc32_ieee_ut
00:07:49.381  
00:07:49.381  
00:07:49.381       CUnit - A unit testing framework for C - Version 2.1-3
00:07:49.381       http://cunit.sourceforge.net/
00:07:49.381  
00:07:49.381  
00:07:49.381  Suite: crc32_ieee
00:07:49.381    Test: test_crc32_ieee ...passed
00:07:49.381  
00:07:49.381  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:49.381                suites      1      1    n/a      0        0
00:07:49.381                 tests      1      1      1      0        0
00:07:49.381               asserts      1      1      1      0      n/a
00:07:49.381  
00:07:49.381  Elapsed time =    0.000 seconds
00:07:49.381   16:51:42	-- unit/unittest.sh@137 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc32c.c/crc32c_ut
00:07:49.381  
00:07:49.381  
00:07:49.381       CUnit - A unit testing framework for C - Version 2.1-3
00:07:49.381       http://cunit.sourceforge.net/
00:07:49.381  
00:07:49.381  
00:07:49.381  Suite: crc32c
00:07:49.381    Test: test_crc32c ...passed
00:07:49.381    Test: test_crc32c_nvme ...passed
00:07:49.381  
00:07:49.381  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:49.381                suites      1      1    n/a      0        0
00:07:49.381                 tests      2      2      2      0        0
00:07:49.381               asserts     16     16     16      0      n/a
00:07:49.381  
00:07:49.381  Elapsed time =    0.001 seconds
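
This suite and the crc32_ieee one before it differ only in polynomial: both are reflected CRC-32s with init and final XOR of 0xFFFFFFFF. CRC-32C uses the Castagnoli polynomial (reflected 0x82F63B78, check value 0xE3069283 for "123456789"); the IEEE variant uses reflected 0xEDB88320 (check value 0xCBF43926). A bitwise reference sketch:

```c
#include <stddef.h>
#include <stdint.h>

/* Bitwise reflected CRC-32C (Castagnoli). Seeding with 0 gives the
 * conventional 0xFFFFFFFF initial value via the leading complement. */
uint32_t crc32c(uint32_t crc, const void *buf, size_t len)
{
    const uint8_t *p = buf;

    crc = ~crc;
    while (len--) {
        crc ^= *p++;
        for (int i = 0; i < 8; i++)
            crc = (crc & 1) ? (crc >> 1) ^ 0x82F63B78u : (crc >> 1);
    }
    return ~crc;
}
```
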
00:07:49.381   16:51:42	-- unit/unittest.sh@138 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc64.c/crc64_ut
00:07:49.381  
00:07:49.381  
00:07:49.381       CUnit - A unit testing framework for C - Version 2.1-3
00:07:49.381       http://cunit.sourceforge.net/
00:07:49.381  
00:07:49.381  
00:07:49.381  Suite: crc64
00:07:49.381    Test: test_crc64_nvme ...passed
00:07:49.381  
00:07:49.381  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:49.381                suites      1      1    n/a      0        0
00:07:49.381                 tests      1      1      1      0        0
00:07:49.381               asserts      4      4      4      0      n/a
00:07:49.381  
00:07:49.381  Elapsed time =    0.000 seconds
00:07:49.381   16:51:42	-- unit/unittest.sh@139 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/string.c/string_ut
00:07:49.381  
00:07:49.381  
00:07:49.381       CUnit - A unit testing framework for C - Version 2.1-3
00:07:49.381       http://cunit.sourceforge.net/
00:07:49.381  
00:07:49.381  
00:07:49.381  Suite: string
00:07:49.381    Test: test_parse_ip_addr ...passed
00:07:49.381    Test: test_str_chomp ...passed
00:07:49.381    Test: test_parse_capacity ...passed
00:07:49.381    Test: test_sprintf_append_realloc ...passed
00:07:49.381    Test: test_strtol ...passed
00:07:49.381    Test: test_strtoll ...passed
00:07:49.381    Test: test_strarray ...passed
00:07:49.381    Test: test_strcpy_replace ...passed
00:07:49.381  
00:07:49.381  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:49.381                suites      1      1    n/a      0        0
00:07:49.381                 tests      8      8      8      0        0
00:07:49.381               asserts    161    161    161      0      n/a
00:07:49.381  
00:07:49.381  Elapsed time =    0.001 seconds
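
Among the string helpers, test_parse_capacity covers turning strings like "128K" or "12M" into byte counts. A hedged sketch of the shape of such a parser; binary (1024-based) multiples and the exact suffix set are assumptions here, not necessarily SPDK's rules, and a production parser would also reject multiplication overflow:

```c
#include <ctype.h>
#include <errno.h>
#include <stdint.h>
#include <stdlib.h>

/* Parse "4096", "128K", "12M", or "2G" into bytes. Returns 0 on success. */
static int parse_capacity(const char *s, uint64_t *out)
{
    char *end;
    errno = 0;
    uint64_t v = strtoull(s, &end, 10);

    if (end == s || errno)
        return -1;
    switch (toupper((unsigned char)*end)) {
    case '\0':             break;
    case 'K': v <<= 10; end++; break;
    case 'M': v <<= 20; end++; break;
    case 'G': v <<= 30; end++; break;
    default:  return -1;
    }
    if (*end != '\0')
        return -1;
    *out = v;
    return 0;
}
```
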
00:07:49.643   16:51:42	-- unit/unittest.sh@140 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/dif.c/dif_ut
00:07:49.643  
00:07:49.643  
00:07:49.643       CUnit - A unit testing framework for C - Version 2.1-3
00:07:49.643       http://cunit.sourceforge.net/
00:07:49.643  
00:07:49.643  
00:07:49.643  Suite: dif
00:07:49.643    Test: dif_generate_and_verify_test ...[2024-11-19 16:51:42.273602] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=23, Expected=17, Actual=16
00:07:49.643  [2024-11-19 16:51:42.274227] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=23, Expected=17, Actual=16
00:07:49.643  [2024-11-19 16:51:42.274543] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=23, Expected=17, Actual=16
00:07:49.643  [2024-11-19 16:51:42.274844] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22,  Expected=23, Actual=22
00:07:49.643  [2024-11-19 16:51:42.275339] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22,  Expected=23, Actual=22
00:07:49.643  [2024-11-19 16:51:42.275665] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22,  Expected=23, Actual=22
00:07:49.643  passed
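
Every "Failed to compare Guard/App Tag/Ref Tag" line in this suite refers to the 8 bytes of T10 Protection Information appended to each data block, and the dif.c line numbers (777, 792, 813) show the verifier checks guard first, then app tag, then ref tag. A hedged sketch of that layout and compare order; this is illustrative, not SPDK's _dif_verify(), and the app-tag-of-all-ones escape shown (which dif_disable_check_test appears to probe with Actual=ffff) follows the usual T10 Type 1/2 convention:

```c
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

/* T10 Protection Information; big-endian on the wire, host order here. */
struct t10_dif {
    uint16_t guard;    /* CRC-16/T10-DIF over the block's data         */
    uint16_t app_tag;  /* application-defined; 0xFFFF disables checks  */
    uint32_t ref_tag;  /* typically the low 32 bits of the LBA         */
};

/* Field-by-field verify in the same order as the log messages. */
static int dif_verify(const struct t10_dif *got,
                      const struct t10_dif *want, uint64_t lba)
{
    if (got->app_tag == 0xFFFF)
        return 0;   /* escape value: checking disabled for this block */
    if (got->guard != want->guard) {
        fprintf(stderr, "Failed to compare Guard: LBA=%" PRIu64
                ", Expected=%x, Actual=%x\n", lba, want->guard, got->guard);
        return -1;
    }
    if (got->app_tag != want->app_tag) {
        fprintf(stderr, "Failed to compare App Tag: LBA=%" PRIu64
                ", Expected=%x, Actual=%x\n", lba, want->app_tag, got->app_tag);
        return -1;
    }
    if (got->ref_tag != want->ref_tag) {
        fprintf(stderr, "Failed to compare Ref Tag: LBA=%" PRIu64
                ", Expected=%x, Actual=%x\n", lba, want->ref_tag, got->ref_tag);
        return -1;
    }
    return 0;
}
```
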
00:07:49.643    Test: dif_disable_check_test ...[2024-11-19 16:51:42.276705] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22,  Expected=22, Actual=ffff
00:07:49.643  [2024-11-19 16:51:42.277103] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22,  Expected=22, Actual=ffff
00:07:49.643  [2024-11-19 16:51:42.277407] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22,  Expected=22, Actual=ffff
00:07:49.643  passed
00:07:49.643    Test: dif_generate_and_verify_different_pi_formats_test ...[2024-11-19 16:51:42.278459] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12,  Expected=b0a80000, Actual=b9848de
00:07:49.643  [2024-11-19 16:51:42.278716] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12,  Expected=b98, Actual=b0a8
00:07:49.643  [2024-11-19 16:51:42.278980] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12,  Expected=b0a8000000000000, Actual=81039fcf5685d8d4
00:07:49.643  [2024-11-19 16:51:42.279278] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12,  Expected=b9848de00000000, Actual=81039fcf5685d8d4
00:07:49.643  [2024-11-19 16:51:42.279542] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12,  Expected=17, Actual=0
00:07:49.643  [2024-11-19 16:51:42.279800] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12,  Expected=17, Actual=0
00:07:49.643  [2024-11-19 16:51:42.280051] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12,  Expected=17, Actual=0
00:07:49.643  [2024-11-19 16:51:42.280294] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12,  Expected=17, Actual=0
00:07:49.643  [2024-11-19 16:51:42.280549] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=12, Expected=c, Actual=0
00:07:49.643  [2024-11-19 16:51:42.280812] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=12, Expected=c, Actual=0
00:07:49.643  [2024-11-19 16:51:42.281071] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=12, Expected=c, Actual=0
00:07:49.643  passed
00:07:49.643    Test: dif_apptag_mask_test ...[2024-11-19 16:51:42.281329] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12,  Expected=1256, Actual=1234
00:07:49.643  [2024-11-19 16:51:42.281567] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12,  Expected=1256, Actual=1234
00:07:49.643  passed
00:07:49.643    Test: dif_sec_512_md_0_error_test ...[2024-11-19 16:51:42.281734] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 479:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size.
00:07:49.643  passed
00:07:49.643    Test: dif_sec_4096_md_0_error_test ...passed
00:07:49.643    Test: dif_sec_4100_md_128_error_test ...[2024-11-19 16:51:42.281790] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 479:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size.
00:07:49.643  [2024-11-19 16:51:42.281837] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 479:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size.
00:07:49.643  [2024-11-19 16:51:42.281886] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 497:spdk_dif_ctx_init: *ERROR*: Zero block size is not allowed and should be a multiple of 4kB
00:07:49.643  [2024-11-19 16:51:42.281927] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 497:spdk_dif_ctx_init: *ERROR*: Zero block size is not allowed and should be a multiple of 4kB
00:07:49.643  passed
00:07:49.643    Test: dif_guard_seed_test ...passed
00:07:49.643    Test: dif_guard_value_test ...passed
00:07:49.643    Test: dif_disable_sec_512_md_8_single_iov_test ...passed
00:07:49.643    Test: dif_sec_512_md_8_prchk_0_single_iov_test ...passed
00:07:49.643    Test: dif_sec_4096_md_128_prchk_0_single_iov_test ...passed
00:07:49.643    Test: dif_sec_512_md_8_prchk_0_1_2_4_multi_iovs_test ...passed
00:07:49.643    Test: dif_sec_4096_md_128_prchk_0_1_2_4_multi_iovs_test ...passed
00:07:49.643    Test: dif_sec_4096_md_128_prchk_7_multi_iovs_test ...passed
00:07:49.643    Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_data_and_md_test ...passed
00:07:49.643    Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_data_and_md_test ...passed
00:07:49.643    Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_data_test ...passed
00:07:49.643    Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_data_test ...passed
00:07:49.643    Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_guard_test ...passed
00:07:49.643    Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_guard_test ...passed
00:07:49.643    Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_apptag_test ...passed
00:07:49.643    Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_apptag_test ...passed
00:07:49.643    Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_reftag_test ...passed
00:07:49.643    Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_reftag_test ...passed
00:07:49.643    Test: dif_sec_512_md_8_prchk_7_multi_iovs_complex_splits_test ...passed
00:07:49.643    Test: dif_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_test ...passed
00:07:49.643    Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_test ...[2024-11-19 16:51:42.315116] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=96,  Expected=7d4c, Actual=fd4c
00:07:49.643  [2024-11-19 16:51:42.316926] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=96,  Expected=7e21, Actual=fe21
00:07:49.643  [2024-11-19 16:51:42.318792] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=96,  Expected=88, Actual=8088
00:07:49.643  [2024-11-19 16:51:42.320662] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=96,  Expected=88, Actual=8088
00:07:49.643  [2024-11-19 16:51:42.322567] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=96, Expected=60, Actual=8060
00:07:49.643  [2024-11-19 16:51:42.324419] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=96, Expected=60, Actual=8060
00:07:49.643  [2024-11-19 16:51:42.326280] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=96,  Expected=fd4c, Actual=7609
00:07:49.643  [2024-11-19 16:51:42.328037] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=96,  Expected=fe21, Actual=bfe9
00:07:49.643  [2024-11-19 16:51:42.329766] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=96,  Expected=1ab7d3ed, Actual=1ab753ed
00:07:49.643  [2024-11-19 16:51:42.331652] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=96,  Expected=3857c660, Actual=38574660
00:07:49.643  [2024-11-19 16:51:42.333531] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=96,  Expected=88, Actual=8088
00:07:49.643  [2024-11-19 16:51:42.335413] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=96,  Expected=88, Actual=8088
00:07:49.643  [2024-11-19 16:51:42.337286] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=96, Expected=60, Actual=8060
00:07:49.643  [2024-11-19 16:51:42.339148] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=96, Expected=60, Actual=8060
00:07:49.643  [2024-11-19 16:51:42.341001] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=96,  Expected=1ab753ed, Actual=37f9797e
00:07:49.643  [2024-11-19 16:51:42.342739] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=96,  Expected=38574660, Actual=6dbdce22
00:07:49.643  [2024-11-19 16:51:42.344499] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=96,  Expected=a576a7728ecca0d3, Actual=a576a7728ecc20d3
00:07:49.643  [2024-11-19 16:51:42.346402] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=96,  Expected=88010a2d48372266, Actual=88010a2d4837a266
00:07:49.643  [2024-11-19 16:51:42.348262] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=96,  Expected=88, Actual=8088
00:07:49.643  [2024-11-19 16:51:42.350129] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=96,  Expected=88, Actual=8088
00:07:49.643  [2024-11-19 16:51:42.352012] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=96, Expected=60, Actual=800000000060
00:07:49.643  [2024-11-19 16:51:42.353872] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=96, Expected=60, Actual=800000000060
00:07:49.643  [2024-11-19 16:51:42.355786] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=96,  Expected=a576a7728ecc20d3, Actual=a785207e94962a0e
00:07:49.643  [2024-11-19 16:51:42.357518] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=96,  Expected=88010a2d4837a266, Actual=448f535f510c459d
00:07:49.643  passed
00:07:49.644    Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_data_and_md_test ...[2024-11-19 16:51:42.358595] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=7d4c, Actual=fd4c
00:07:49.644  [2024-11-19 16:51:42.358830] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=7e21, Actual=fe21
00:07:49.644  [2024-11-19 16:51:42.359066] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=8088
00:07:49.644  [2024-11-19 16:51:42.359299] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=8088
00:07:49.644  [2024-11-19 16:51:42.359553] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=8058
00:07:49.644  [2024-11-19 16:51:42.359778] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=8058
00:07:49.644  [2024-11-19 16:51:42.360014] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=fd4c, Actual=7609
00:07:49.644  [2024-11-19 16:51:42.360215] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=fe21, Actual=bfe9
00:07:49.644  [2024-11-19 16:51:42.360428] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=1ab7d3ed, Actual=1ab753ed
00:07:49.644  [2024-11-19 16:51:42.360654] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=3857c660, Actual=38574660
00:07:49.644  [2024-11-19 16:51:42.360908] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=8088
00:07:49.644  [2024-11-19 16:51:42.361141] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=8088
00:07:49.644  [2024-11-19 16:51:42.361375] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=8058
00:07:49.644  [2024-11-19 16:51:42.361588] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=8058
00:07:49.644  [2024-11-19 16:51:42.361818] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=1ab753ed, Actual=37f9797e
00:07:49.644  [2024-11-19 16:51:42.362022] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=38574660, Actual=6dbdce22
00:07:49.644  [2024-11-19 16:51:42.362245] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=a576a7728ecca0d3, Actual=a576a7728ecc20d3
00:07:49.644  [2024-11-19 16:51:42.362483] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=88010a2d48372266, Actual=88010a2d4837a266
00:07:49.644  [2024-11-19 16:51:42.362719] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=8088
00:07:49.644  [2024-11-19 16:51:42.362955] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=8088
00:07:49.644  [2024-11-19 16:51:42.363185] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=800000000058
00:07:49.644  [2024-11-19 16:51:42.363412] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=800000000058
00:07:49.644  [2024-11-19 16:51:42.363657] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=a576a7728ecc20d3, Actual=a785207e94962a0e
00:07:49.644  [2024-11-19 16:51:42.363878] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=88010a2d4837a266, Actual=448f535f510c459d
00:07:49.644  passed
00:07:49.644    Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_data_test ...[2024-11-19 16:51:42.364127] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=7d4c, Actual=fd4c
00:07:49.644  [2024-11-19 16:51:42.364365] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=7e21, Actual=fe21
00:07:49.644  [2024-11-19 16:51:42.364596] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=8088
00:07:49.644  [2024-11-19 16:51:42.364829] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=8088
00:07:49.644  [2024-11-19 16:51:42.365069] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=8058
00:07:49.644  [2024-11-19 16:51:42.365304] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=8058
00:07:49.644  [2024-11-19 16:51:42.365537] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=fd4c, Actual=7609
00:07:49.644  [2024-11-19 16:51:42.365748] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=fe21, Actual=bfe9
00:07:49.644  [2024-11-19 16:51:42.365954] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=1ab7d3ed, Actual=1ab753ed
00:07:49.644  [2024-11-19 16:51:42.366187] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=3857c660, Actual=38574660
00:07:49.644  [2024-11-19 16:51:42.366431] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=8088
00:07:49.644  [2024-11-19 16:51:42.366663] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=8088
00:07:49.644  [2024-11-19 16:51:42.366905] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=8058
00:07:49.644  [2024-11-19 16:51:42.367140] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=8058
00:07:49.644  [2024-11-19 16:51:42.367363] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=1ab753ed, Actual=37f9797e
00:07:49.644  [2024-11-19 16:51:42.367572] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=38574660, Actual=6dbdce22
00:07:49.644  [2024-11-19 16:51:42.367804] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=a576a7728ecca0d3, Actual=a576a7728ecc20d3
00:07:49.644  [2024-11-19 16:51:42.368034] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=88010a2d48372266, Actual=88010a2d4837a266
00:07:49.644  [2024-11-19 16:51:42.368270] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=8088
00:07:49.644  [2024-11-19 16:51:42.368507] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=8088
00:07:49.644  [2024-11-19 16:51:42.368745] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=800000000058
00:07:49.644  [2024-11-19 16:51:42.368969] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=800000000058
00:07:49.644  [2024-11-19 16:51:42.369215] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=a576a7728ecc20d3, Actual=a785207e94962a0e
00:07:49.644  [2024-11-19 16:51:42.369430] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=88010a2d4837a266, Actual=448f535f510c459d
00:07:49.644  passed
00:07:49.644    Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_guard_test ...[2024-11-19 16:51:42.369674] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=7d4c, Actual=fd4c
00:07:49.644  [2024-11-19 16:51:42.369918] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=7e21, Actual=fe21
00:07:49.644  [2024-11-19 16:51:42.370152] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=8088
00:07:49.644  [2024-11-19 16:51:42.370389] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=8088
00:07:49.644  [2024-11-19 16:51:42.370638] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=8058
00:07:49.644  [2024-11-19 16:51:42.370878] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=8058
00:07:49.644  [2024-11-19 16:51:42.371111] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=fd4c, Actual=7609
00:07:49.644  [2024-11-19 16:51:42.371320] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=fe21, Actual=bfe9
00:07:49.644  [2024-11-19 16:51:42.371530] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=1ab7d3ed, Actual=1ab753ed
00:07:49.644  [2024-11-19 16:51:42.371751] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=3857c660, Actual=38574660
00:07:49.644  [2024-11-19 16:51:42.371999] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=8088
00:07:49.644  [2024-11-19 16:51:42.372239] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=8088
00:07:49.644  [2024-11-19 16:51:42.372462] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=8058
00:07:49.644  [2024-11-19 16:51:42.372699] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=8058
00:07:49.644  [2024-11-19 16:51:42.372931] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=1ab753ed, Actual=37f9797e
00:07:49.644  [2024-11-19 16:51:42.373142] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=38574660, Actual=6dbdce22
00:07:49.644  [2024-11-19 16:51:42.373357] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=a576a7728ecca0d3, Actual=a576a7728ecc20d3
00:07:49.644  [2024-11-19 16:51:42.373591] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=88010a2d48372266, Actual=88010a2d4837a266
00:07:49.644  [2024-11-19 16:51:42.373818] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=8088
00:07:49.644  [2024-11-19 16:51:42.374056] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=8088
00:07:49.644  [2024-11-19 16:51:42.374294] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=800000000058
00:07:49.644  [2024-11-19 16:51:42.374531] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=800000000058
00:07:49.644  [2024-11-19 16:51:42.374776] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=a576a7728ecc20d3, Actual=a785207e94962a0e
00:07:49.644  [2024-11-19 16:51:42.375006] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=88010a2d4837a266, Actual=448f535f510c459d
00:07:49.644  passed
00:07:49.644    Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_apptag_pi_16_test ...[2024-11-19 16:51:42.375253] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=7d4c, Actual=fd4c
00:07:49.644  [2024-11-19 16:51:42.375482] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=7e21, Actual=fe21
00:07:49.644  [2024-11-19 16:51:42.375710] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=8088
00:07:49.645  [2024-11-19 16:51:42.375946] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=8088
00:07:49.645  [2024-11-19 16:51:42.376191] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=8058
00:07:49.645  [2024-11-19 16:51:42.376413] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=8058
00:07:49.645  [2024-11-19 16:51:42.376646] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=fd4c, Actual=7609
00:07:49.645  [2024-11-19 16:51:42.376851] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=fe21, Actual=bfe9
00:07:49.645  passed
00:07:49.645    Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_apptag_test ...[2024-11-19 16:51:42.377106] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=1ab7d3ed, Actual=1ab753ed
00:07:49.645  [2024-11-19 16:51:42.377333] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=3857c660, Actual=38574660
00:07:49.645  [2024-11-19 16:51:42.377579] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=8088
00:07:49.645  [2024-11-19 16:51:42.377808] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=8088
00:07:49.645  [2024-11-19 16:51:42.378037] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=8058
00:07:49.645  [2024-11-19 16:51:42.378266] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=8058
00:07:49.645  [2024-11-19 16:51:42.378511] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=1ab753ed, Actual=37f9797e
00:07:49.645  [2024-11-19 16:51:42.378719] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=38574660, Actual=6dbdce22
00:07:49.645  [2024-11-19 16:51:42.378971] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=a576a7728ecca0d3, Actual=a576a7728ecc20d3
00:07:49.645  [2024-11-19 16:51:42.379200] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=88010a2d48372266, Actual=88010a2d4837a266
00:07:49.645  [2024-11-19 16:51:42.379429] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=8088
00:07:49.645  [2024-11-19 16:51:42.379663] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=8088
00:07:49.645  [2024-11-19 16:51:42.379888] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=800000000058
00:07:49.645  [2024-11-19 16:51:42.380124] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=800000000058
00:07:49.645  [2024-11-19 16:51:42.380374] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=a576a7728ecc20d3, Actual=a785207e94962a0e
00:07:49.645  [2024-11-19 16:51:42.380588] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=88010a2d4837a266, Actual=448f535f510c459d
00:07:49.645  passed
00:07:49.645    Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_reftag_pi_16_test ...[2024-11-19 16:51:42.380826] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=7d4c, Actual=fd4c
00:07:49.645  [2024-11-19 16:51:42.381066] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=7e21, Actual=fe21
00:07:49.645  [2024-11-19 16:51:42.381290] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=8088
00:07:49.645  [2024-11-19 16:51:42.381519] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=8088
00:07:49.645  [2024-11-19 16:51:42.381779] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=8058
00:07:49.645  [2024-11-19 16:51:42.382009] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=8058
00:07:49.645  [2024-11-19 16:51:42.382245] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=fd4c, Actual=7609
00:07:49.645  [2024-11-19 16:51:42.382462] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=fe21, Actual=bfe9
00:07:49.645  passed
00:07:49.645    Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_reftag_test ...[2024-11-19 16:51:42.382681] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=1ab7d3ed, Actual=1ab753ed
00:07:49.645  [2024-11-19 16:51:42.382915] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=3857c660, Actual=38574660
00:07:49.645  [2024-11-19 16:51:42.383163] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=8088
00:07:49.645  [2024-11-19 16:51:42.383398] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=8088
00:07:49.645  [2024-11-19 16:51:42.383625] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=8058
00:07:49.645  [2024-11-19 16:51:42.383852] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=8058
00:07:49.645  [2024-11-19 16:51:42.384078] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=1ab753ed, Actual=37f9797e
00:07:49.645  [2024-11-19 16:51:42.384277] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=38574660, Actual=6dbdce22
00:07:49.645  [2024-11-19 16:51:42.384521] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=a576a7728ecca0d3, Actual=a576a7728ecc20d3
00:07:49.645  [2024-11-19 16:51:42.384755] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=88010a2d48372266, Actual=88010a2d4837a266
00:07:49.645  [2024-11-19 16:51:42.384976] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=8088
00:07:49.645  [2024-11-19 16:51:42.385203] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=8088
00:07:49.645  [2024-11-19 16:51:42.385437] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=800000000058
00:07:49.645  [2024-11-19 16:51:42.385669] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=800000000058
00:07:49.645  [2024-11-19 16:51:42.385914] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=a576a7728ecc20d3, Actual=a785207e94962a0e
00:07:49.645  [2024-11-19 16:51:42.386121] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=88010a2d4837a266, Actual=448f535f510c459d
00:07:49.645  passed
00:07:49.645    Test: dif_copy_sec_512_md_8_prchk_0_single_iov ...passed
00:07:49.645    Test: dif_copy_sec_4096_md_128_prchk_0_single_iov_test ...passed
00:07:49.645    Test: dif_copy_sec_512_md_8_prchk_0_1_2_4_multi_iovs ...passed
00:07:49.645    Test: dif_copy_sec_4096_md_128_prchk_0_1_2_4_multi_iovs_test ...passed
00:07:49.645    Test: dif_copy_sec_4096_md_128_prchk_7_multi_iovs ...passed
00:07:49.645    Test: dif_copy_sec_512_md_8_prchk_7_multi_iovs_split_data ...passed
00:07:49.645    Test: dif_copy_sec_4096_md_128_prchk_7_multi_iovs_split_data_test ...passed
00:07:49.645    Test: dif_copy_sec_512_md_8_prchk_7_multi_iovs_complex_splits ...passed
00:07:49.645    Test: dif_copy_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_test ...passed
00:07:49.645    Test: dif_copy_sec_4096_md_128_inject_1_2_4_8_multi_iovs_test ...[2024-11-19 16:51:42.419380] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=96,  Expected=7d4c, Actual=fd4c
00:07:49.645  [2024-11-19 16:51:42.420237] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=96,  Expected=efd4, Actual=6fd4
00:07:49.645  [2024-11-19 16:51:42.421071] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=96,  Expected=88, Actual=8088
00:07:49.645  [2024-11-19 16:51:42.421902] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=96,  Expected=88, Actual=8088
00:07:49.645  [2024-11-19 16:51:42.422750] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=96, Expected=60, Actual=8060
00:07:49.645  [2024-11-19 16:51:42.423593] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=96, Expected=60, Actual=8060
00:07:49.645  [2024-11-19 16:51:42.424426] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=96,  Expected=fd4c, Actual=7609
00:07:49.645  [2024-11-19 16:51:42.425256] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=96,  Expected=e4c1, Actual=a509
00:07:49.645  [2024-11-19 16:51:42.426084] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=96,  Expected=1ab7d3ed, Actual=1ab753ed
00:07:49.645  [2024-11-19 16:51:42.426924] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=96,  Expected=4a0fa2c5, Actual=4a0f22c5
00:07:49.645  [2024-11-19 16:51:42.427778] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=96,  Expected=88, Actual=8088
00:07:49.645  [2024-11-19 16:51:42.428632] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=96,  Expected=88, Actual=8088
00:07:49.645  [2024-11-19 16:51:42.429465] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=96, Expected=60, Actual=8060
00:07:49.645  [2024-11-19 16:51:42.430315] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=96, Expected=60, Actual=8060
00:07:49.645  [2024-11-19 16:51:42.431164] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=96,  Expected=1ab753ed, Actual=37f9797e
00:07:49.645  [2024-11-19 16:51:42.432008] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=96,  Expected=741688fe, Actual=21fc00bc
00:07:49.645  [2024-11-19 16:51:42.432845] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=96,  Expected=a576a7728ecca0d3, Actual=a576a7728ecc20d3
00:07:49.645  [2024-11-19 16:51:42.433697] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=96,  Expected=72d437e3cffee8d7, Actual=72d437e3cffe68d7
00:07:49.645  [2024-11-19 16:51:42.434541] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=96,  Expected=88, Actual=8088
00:07:49.645  [2024-11-19 16:51:42.435389] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=96,  Expected=88, Actual=8088
00:07:49.645  [2024-11-19 16:51:42.436225] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=96, Expected=60, Actual=800000000060
00:07:49.645  [2024-11-19 16:51:42.437066] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=96, Expected=60, Actual=800000000060
00:07:49.645  [2024-11-19 16:51:42.437904] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=96,  Expected=a576a7728ecc20d3, Actual=a785207e94962a0e
00:07:49.645  [2024-11-19 16:51:42.438771] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=96,  Expected=4c87c8f68d0ed55f, Actual=80099184943532a4
00:07:49.646  passed
00:07:49.646    Test: dif_copy_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_test ...[2024-11-19 16:51:42.439054] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=7d4c, Actual=fd4c
00:07:49.646  [2024-11-19 16:51:42.439251] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=2603, Actual=a603
00:07:49.646  [2024-11-19 16:51:42.439461] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=8088
00:07:49.646  [2024-11-19 16:51:42.439668] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=8088
00:07:49.646  [2024-11-19 16:51:42.439893] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=8058
00:07:49.646  [2024-11-19 16:51:42.440116] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=8058
00:07:49.646  [2024-11-19 16:51:42.440305] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=fd4c, Actual=7609
00:07:49.646  [2024-11-19 16:51:42.440512] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=2d16, Actual=6cde
00:07:49.646  [2024-11-19 16:51:42.440712] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=1ab7d3ed, Actual=1ab753ed
00:07:49.646  [2024-11-19 16:51:42.440922] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=d4bf6a97, Actual=d4bfea97
00:07:49.646  [2024-11-19 16:51:42.441137] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=8088
00:07:49.646  [2024-11-19 16:51:42.441355] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=8088
00:07:49.646  [2024-11-19 16:51:42.441565] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=8058
00:07:49.646  [2024-11-19 16:51:42.441770] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=8058
00:07:49.646  [2024-11-19 16:51:42.441973] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=1ab753ed, Actual=37f9797e
00:07:49.646  [2024-11-19 16:51:42.442176] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=eaa640ac, Actual=bf4cc8ee
00:07:49.646  [2024-11-19 16:51:42.442405] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=a576a7728ecca0d3, Actual=a576a7728ecc20d3
00:07:49.646  [2024-11-19 16:51:42.442604] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=635d50e142a0d2a, Actual=635d50e142a8d2a
00:07:49.646  [2024-11-19 16:51:42.442817] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=8088
00:07:49.646  [2024-11-19 16:51:42.443028] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=8088
00:07:49.646  [2024-11-19 16:51:42.443233] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=800000000058
00:07:49.646  [2024-11-19 16:51:42.443431] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=800000000058
00:07:49.646  [2024-11-19 16:51:42.443646] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=a576a7728ecc20d3, Actual=a785207e94962a0e
00:07:49.646  [2024-11-19 16:51:42.443857] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=38662a1b56da30a2, Actual=f4e873694fe1d759
00:07:49.646  passed
00:07:49.646    Test: dix_sec_512_md_0_error ...[2024-11-19 16:51:42.443932] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 479:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size.
00:07:49.646  passed
00:07:49.646    Test: dix_sec_512_md_8_prchk_0_single_iov ...passed
00:07:49.646    Test: dix_sec_4096_md_128_prchk_0_single_iov_test ...passed
00:07:49.646    Test: dix_sec_512_md_8_prchk_0_1_2_4_multi_iovs ...passed
00:07:49.646    Test: dix_sec_4096_md_128_prchk_0_1_2_4_multi_iovs_test ...passed
00:07:49.646    Test: dix_sec_4096_md_128_prchk_7_multi_iovs ...passed
00:07:49.646    Test: dix_sec_512_md_8_prchk_7_multi_iovs_split_data ...passed
00:07:49.646    Test: dix_sec_4096_md_128_prchk_7_multi_iovs_split_data_test ...passed
00:07:49.646    Test: dix_sec_512_md_8_prchk_7_multi_iovs_complex_splits ...passed
00:07:49.646    Test: dix_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_test ...passed
00:07:49.646    Test: dix_sec_4096_md_128_inject_1_2_4_8_multi_iovs_test ...[2024-11-19 16:51:42.476836] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=96,  Expected=7d4c, Actual=fd4c
00:07:49.646  [2024-11-19 16:51:42.477712] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=96,  Expected=efd4, Actual=6fd4
00:07:49.646  [2024-11-19 16:51:42.478554] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=96,  Expected=88, Actual=8088
00:07:49.646  [2024-11-19 16:51:42.479387] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=96,  Expected=88, Actual=8088
00:07:49.646  [2024-11-19 16:51:42.480224] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=96, Expected=60, Actual=8060
00:07:49.646  [2024-11-19 16:51:42.481068] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=96, Expected=60, Actual=8060
00:07:49.646  [2024-11-19 16:51:42.481899] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=96,  Expected=fd4c, Actual=7609
00:07:49.646  [2024-11-19 16:51:42.482742] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=96,  Expected=e4c1, Actual=a509
00:07:49.646  [2024-11-19 16:51:42.483581] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=96,  Expected=1ab7d3ed, Actual=1ab753ed
00:07:49.646  [2024-11-19 16:51:42.484412] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=96,  Expected=4a0fa2c5, Actual=4a0f22c5
00:07:49.646  [2024-11-19 16:51:42.485249] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=96,  Expected=88, Actual=8088
00:07:49.646  [2024-11-19 16:51:42.486079] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=96,  Expected=88, Actual=8088
00:07:49.646  [2024-11-19 16:51:42.486929] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=96, Expected=60, Actual=8060
00:07:49.646  [2024-11-19 16:51:42.487768] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=96, Expected=60, Actual=8060
00:07:49.646  [2024-11-19 16:51:42.488600] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=96,  Expected=1ab753ed, Actual=37f9797e
00:07:49.646  [2024-11-19 16:51:42.489435] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=96,  Expected=741688fe, Actual=21fc00bc
00:07:49.646  [2024-11-19 16:51:42.490282] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=96,  Expected=a576a7728ecca0d3, Actual=a576a7728ecc20d3
00:07:49.646  [2024-11-19 16:51:42.491130] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=96,  Expected=72d437e3cffee8d7, Actual=72d437e3cffe68d7
00:07:49.646  [2024-11-19 16:51:42.491959] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=96,  Expected=88, Actual=8088
00:07:49.646  [2024-11-19 16:51:42.492786] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=96,  Expected=88, Actual=8088
00:07:49.646  [2024-11-19 16:51:42.493614] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=96, Expected=60, Actual=800000000060
00:07:49.646  [2024-11-19 16:51:42.494445] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=96, Expected=60, Actual=800000000060
00:07:49.646  [2024-11-19 16:51:42.495302] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=96,  Expected=a576a7728ecc20d3, Actual=a785207e94962a0e
00:07:49.646  [2024-11-19 16:51:42.496132] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=96,  Expected=4c87c8f68d0ed55f, Actual=80099184943532a4
00:07:49.646  passed
00:07:49.646    Test: dix_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_test ...[2024-11-19 16:51:42.496437] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=7d4c, Actual=fd4c
00:07:49.646  [2024-11-19 16:51:42.496638] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=2603, Actual=a603
00:07:49.646  [2024-11-19 16:51:42.496854] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=8088
00:07:49.646  [2024-11-19 16:51:42.497066] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=8088
00:07:49.646  [2024-11-19 16:51:42.497283] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=8058
00:07:49.646  [2024-11-19 16:51:42.497486] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=8058
00:07:49.646  [2024-11-19 16:51:42.497689] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=fd4c, Actual=7609
00:07:49.646  [2024-11-19 16:51:42.497894] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=2d16, Actual=6cde
00:07:49.646  [2024-11-19 16:51:42.498096] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=1ab7d3ed, Actual=1ab753ed
00:07:49.646  [2024-11-19 16:51:42.498296] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=d4bf6a97, Actual=d4bfea97
00:07:49.646  [2024-11-19 16:51:42.498516] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=8088
00:07:49.646  [2024-11-19 16:51:42.498712] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=8088
00:07:49.646  [2024-11-19 16:51:42.498918] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=8058
00:07:49.646  [2024-11-19 16:51:42.499128] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=8058
00:07:49.646  [2024-11-19 16:51:42.499325] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=1ab753ed, Actual=37f9797e
00:07:49.646  [2024-11-19 16:51:42.499536] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=eaa640ac, Actual=bf4cc8ee
00:07:49.646  [2024-11-19 16:51:42.499743] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=a576a7728ecca0d3, Actual=a576a7728ecc20d3
00:07:49.646  [2024-11-19 16:51:42.499951] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=635d50e142a0d2a, Actual=635d50e142a8d2a
00:07:49.646  [2024-11-19 16:51:42.500152] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=8088
00:07:49.646  [2024-11-19 16:51:42.500356] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=8088
00:07:49.646  [2024-11-19 16:51:42.500553] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=800000000058
00:07:49.647  [2024-11-19 16:51:42.500761] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=800000000058
00:07:49.905  [2024-11-19 16:51:42.500967] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=a576a7728ecc20d3, Actual=a785207e94962a0e
00:07:49.905  [2024-11-19 16:51:42.501172] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=38662a1b56da30a2, Actual=f4e873694fe1d759
00:07:49.905  passed
00:07:49.905    Test: set_md_interleave_iovs_test ...passed
00:07:49.905    Test: set_md_interleave_iovs_split_test ...passed
00:07:49.905    Test: dif_generate_stream_pi_16_test ...passed
00:07:49.905    Test: dif_generate_stream_test ...passed
00:07:49.905    Test: set_md_interleave_iovs_alignment_test ...passed
00:07:49.905    Test: dif_generate_split_test ...[2024-11-19 16:51:42.507054] /home/vagrant/spdk_repo/spdk/lib/util/dif.c:1799:spdk_dif_set_md_interleave_iovs: *ERROR*: Buffer overflow will occur.
00:07:49.905  passed
00:07:49.905    Test: set_md_interleave_iovs_multi_segments_test ...passed
00:07:49.905    Test: dif_verify_split_test ...passed
00:07:49.905    Test: dif_verify_stream_multi_segments_test ...passed
00:07:49.905    Test: update_crc32c_pi_16_test ...passed
00:07:49.905    Test: update_crc32c_test ...passed
00:07:49.905    Test: dif_update_crc32c_split_test ...passed
00:07:49.905    Test: dif_update_crc32c_stream_multi_segments_test ...passed
00:07:49.905    Test: get_range_with_md_test ...passed
00:07:49.905    Test: dif_sec_512_md_8_prchk_7_multi_iovs_remap_pi_16_test ...passed
00:07:49.905    Test: dif_sec_4096_md_128_prchk_7_multi_iovs_remap_test ...passed
00:07:49.905    Test: dif_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_remap_test ...passed
00:07:49.905    Test: dix_sec_4096_md_128_prchk_7_multi_iovs_remap ...passed
00:07:49.905    Test: dix_sec_512_md_8_prchk_7_multi_iovs_complex_splits_remap_pi_16_test ...passed
00:07:49.905    Test: dix_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_remap_test ...passed
00:07:49.905    Test: dif_generate_and_verify_unmap_test ...passed
00:07:49.905  
00:07:49.905  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:49.905                suites      1      1    n/a      0        0
00:07:49.905                 tests     79     79     79      0        0
00:07:49.905               asserts   3584   3584   3584      0      n/a
00:07:49.905  
00:07:49.905  Elapsed time =    0.269 seconds
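
Editor's note: the dif_ut suite above exercises SPDK's T10 DIF/DIX protection-information checks; the *ERROR* lines are expected negative-path output, produced when the tests deliberately inject corruption into the guard, application-tag, and reference-tag fields and confirm that _dif_verify catches it. A rough sketch of such a check, assuming the classic 8-byte PI tuple (illustrative only, not SPDK's implementation):

    #include <stdint.h>
    #include <stddef.h>
    #include <stdio.h>

    /* Illustrative 8-byte T10 PI tuple; field widths vary with PI format. */
    struct pi_tuple {
        uint16_t guard;    /* CRC16 over the data block */
        uint16_t app_tag;
        uint32_t ref_tag;  /* typically derived from the LBA */
    };

    /* Bitwise CRC16 with the T10-DIF polynomial 0x8BB7 (no reflection). */
    static uint16_t crc16_t10dif(const uint8_t *buf, size_t len)
    {
        uint16_t crc = 0;
        for (size_t i = 0; i < len; i++) {
            crc ^= (uint16_t)buf[i] << 8;
            for (int b = 0; b < 8; b++)
                crc = (crc & 0x8000) ? (uint16_t)((crc << 1) ^ 0x8BB7)
                                     : (uint16_t)(crc << 1);
        }
        return crc;
    }

    /* Returns 0 when all three tags match, -1 on the first mismatch,
     * logging in the same spirit as the messages above. */
    static int pi_verify(const uint8_t *block, size_t len,
                         const struct pi_tuple *pi, uint32_t lba,
                         uint16_t exp_app_tag)
    {
        uint16_t guard = crc16_t10dif(block, len);
        if (guard != pi->guard) {
            fprintf(stderr, "Guard mismatch: LBA=%u, Expected=%x, Actual=%x\n",
                    lba, guard, pi->guard);
            return -1;
        }
        if (pi->app_tag != exp_app_tag || pi->ref_tag != lba)
            return -1; /* App Tag / Ref Tag mismatch */
        return 0;
    }
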
00:07:49.905   16:51:42	-- unit/unittest.sh@141 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/iov.c/iov_ut
00:07:49.905  
00:07:49.905  
00:07:49.905       CUnit - A unit testing framework for C - Version 2.1-3
00:07:49.905       http://cunit.sourceforge.net/
00:07:49.905  
00:07:49.905  
00:07:49.905  Suite: iov
00:07:49.905    Test: test_single_iov ...passed
00:07:49.905    Test: test_simple_iov ...passed
00:07:49.905    Test: test_complex_iov ...passed
00:07:49.905    Test: test_iovs_to_buf ...passed
00:07:49.905    Test: test_buf_to_iovs ...passed
00:07:49.905    Test: test_memset ...passed
00:07:49.905    Test: test_iov_one ...passed
00:07:49.905    Test: test_iov_xfer ...passed
00:07:49.905  
00:07:49.905  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:49.905                suites      1      1    n/a      0        0
00:07:49.905                 tests      8      8      8      0        0
00:07:49.905               asserts    156    156    156      0      n/a
00:07:49.905  
00:07:49.905  Elapsed time =    0.000 seconds
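
Editor's note: iov_ut covers scatter/gather helpers; test_iovs_to_buf, for instance, checks that the contents of an iovec array flatten into one contiguous buffer. A minimal sketch of that operation (names are illustrative, not SPDK's API):

    #include <string.h>
    #include <sys/uio.h>

    /* Copy the bytes described by an iovec array into one flat buffer,
     * stopping when either side is exhausted. */
    static size_t iovs_to_buf(const struct iovec *iovs, int iovcnt,
                              void *buf, size_t buflen)
    {
        size_t off = 0;
        for (int i = 0; i < iovcnt && off < buflen; i++) {
            size_t n = iovs[i].iov_len;
            if (n > buflen - off)
                n = buflen - off;
            memcpy((char *)buf + off, iovs[i].iov_base, n);
            off += n;
        }
        return off; /* number of bytes copied */
    }
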
00:07:49.905   16:51:42	-- unit/unittest.sh@142 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/math.c/math_ut
00:07:49.905  
00:07:49.905  
00:07:49.905       CUnit - A unit testing framework for C - Version 2.1-3
00:07:49.905       http://cunit.sourceforge.net/
00:07:49.905  
00:07:49.905  
00:07:49.905  Suite: math
00:07:49.905    Test: test_serial_number_arithmetic ...passed
00:07:49.905  Suite: erase
00:07:49.905    Test: test_memset_s ...passed
00:07:49.905  
00:07:49.905  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:49.905                suites      2      2    n/a      0        0
00:07:49.905                 tests      2      2      2      0        0
00:07:49.905               asserts     18     18     18      0      n/a
00:07:49.905  
00:07:49.905  Elapsed time =    0.000 seconds
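
Editor's note: the math suite's test_serial_number_arithmetic points at wrap-aware sequence-number comparison, and the erase suite's test_memset_s at zeroing the compiler may not optimize away. Two plausible shapes for those properties (hedged sketches, not the unit tests' actual helpers):

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    /* RFC 1982-style serial comparison: "less than" across wraparound. */
    static bool sn32_lt(uint32_t s1, uint32_t s2)
    {
        return (s1 != s2) &&
               ((s1 < s2 && s2 - s1 < (1u << 31)) ||
                (s1 > s2 && s1 - s2 > (1u << 31)));
    }

    /* A memset the optimizer cannot elide, for scrubbing secrets. */
    static void secure_memset(void *p, uint8_t v, size_t n)
    {
        volatile uint8_t *q = p;
        while (n--)
            *q++ = v;
    }
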
00:07:49.905   16:51:42	-- unit/unittest.sh@143 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/pipe.c/pipe_ut
00:07:49.905  
00:07:49.905  
00:07:49.905       CUnit - A unit testing framework for C - Version 2.1-3
00:07:49.905       http://cunit.sourceforge.net/
00:07:49.905  
00:07:49.905  
00:07:49.905  Suite: pipe
00:07:49.905    Test: test_create_destroy ...passed
00:07:49.905    Test: test_write_get_buffer ...passed
00:07:49.905    Test: test_write_advance ...passed
00:07:49.905    Test: test_read_get_buffer ...passed
00:07:49.905    Test: test_read_advance ...passed
00:07:49.905    Test: test_data ...passed
00:07:49.905  
00:07:49.905  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:49.905                suites      1      1    n/a      0        0
00:07:49.905                 tests      6      6      6      0        0
00:07:49.905               asserts    250    250    250      0      n/a
00:07:49.905  
00:07:49.905  Elapsed time =    0.000 seconds
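
Editor's note: pipe_ut's test names describe a zero-copy ring buffer: the caller asks for a writable (or readable) region, fills or consumes it in place, then advances the matching cursor. A toy single-producer/single-consumer version of the write side (illustrative; lib/util/pipe.c differs in detail):

    #include <stdint.h>
    #include <stddef.h>

    struct byte_pipe {
        uint8_t *buf;
        size_t sz;      /* capacity; one slot stays free to mark "full" */
        size_t rd, wr;  /* rd == wr means empty */
    };

    /* Expose the largest contiguous writable region without copying. */
    static size_t pipe_write_get_buffer(struct byte_pipe *p, uint8_t **out)
    {
        *out = p->buf + p->wr;
        if (p->wr >= p->rd)
            return (p->rd == 0 ? p->sz - 1 : p->sz) - p->wr;
        return p->rd - p->wr - 1;
    }

    /* Commit n bytes previously written into the exposed region. */
    static void pipe_write_advance(struct byte_pipe *p, size_t n)
    {
        p->wr = (p->wr + n) % p->sz;
    }
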
00:07:49.906   16:51:42	-- unit/unittest.sh@144 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/xor.c/xor_ut
00:07:49.906  
00:07:49.906  
00:07:49.906       CUnit - A unit testing framework for C - Version 2.1-3
00:07:49.906       http://cunit.sourceforge.net/
00:07:49.906  
00:07:49.906  
00:07:49.906  Suite: xor
00:07:49.906    Test: test_xor_gen ...passed
00:07:49.906  
00:07:49.906  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:49.906                suites      1      1    n/a      0        0
00:07:49.906                 tests      1      1      1      0        0
00:07:49.906               asserts     17     17     17      0      n/a
00:07:49.906  
00:07:49.906  Elapsed time =    0.005 seconds
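
Editor's note: test_xor_gen covers parity generation — XOR-ing n equally sized source buffers into a destination, the primitive behind RAID-5-style parity. The straightforward scalar form (SPDK can also route this through accelerated backends):

    #include <stdint.h>
    #include <stddef.h>

    /* dst[i] = srcs[0][i] ^ srcs[1][i] ^ ... ^ srcs[n-1][i] */
    static void xor_gen(uint8_t *dst, uint8_t *const srcs[], int n, size_t len)
    {
        for (size_t i = 0; i < len; i++) {
            uint8_t v = 0;
            for (int j = 0; j < n; j++)
                v ^= srcs[j][i];
            dst[i] = v;
        }
    }
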
00:07:49.906  
00:07:49.906  real	0m0.709s
00:07:49.906  user	0m0.534s
00:07:49.906  sys	0m0.180s
00:07:49.906   16:51:42	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:07:49.906  ************************************
00:07:49.906   16:51:42	-- common/autotest_common.sh@10 -- # set +x
00:07:49.906  END TEST unittest_util
00:07:49.906  ************************************
00:07:49.906   16:51:42	-- unit/unittest.sh@258 -- # grep -q '#define SPDK_CONFIG_VHOST 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h
00:07:49.906   16:51:42	-- unit/unittest.sh@259 -- # run_test unittest_vhost /home/vagrant/spdk_repo/spdk/test/unit/lib/vhost/vhost.c/vhost_ut
00:07:49.906   16:51:42	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:07:49.906   16:51:42	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:07:49.906   16:51:42	-- common/autotest_common.sh@10 -- # set +x
00:07:49.906  ************************************
00:07:49.906  START TEST unittest_vhost
00:07:49.906  ************************************
00:07:49.906   16:51:42	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/vhost/vhost.c/vhost_ut
00:07:50.164  
00:07:50.164  
00:07:50.164       CUnit - A unit testing framework for C - Version 2.1-3
00:07:50.164       http://cunit.sourceforge.net/
00:07:50.164  
00:07:50.164  
00:07:50.164  Suite: vhost_suite
00:07:50.164    Test: desc_to_iov_test ...[2024-11-19 16:51:42.779091] /home/vagrant/spdk_repo/spdk/lib/vhost/rte_vhost_user.c: 647:vhost_vring_desc_payload_to_iov: *ERROR*: SPDK_VHOST_IOVS_MAX(129) reached
00:07:50.164  passed
00:07:50.164    Test: create_controller_test ...[2024-11-19 16:51:42.783543] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c:  80:vhost_parse_core_mask: *ERROR*: one of selected cpu is outside of core mask(=f)
00:07:50.164  [2024-11-19 16:51:42.783670] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 126:vhost_dev_register: *ERROR*: cpumask 0xf0 is invalid (core mask is 0xf)
00:07:50.164  [2024-11-19 16:51:42.783779] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c:  80:vhost_parse_core_mask: *ERROR*: one of selected cpu is outside of core mask(=f)
00:07:50.164  [2024-11-19 16:51:42.783862] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 126:vhost_dev_register: *ERROR*: cpumask 0xff is invalid (core mask is 0xf)
00:07:50.164  [2024-11-19 16:51:42.783911] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 121:vhost_dev_register: *ERROR*: Can't register controller with no name
00:07:50.164  [2024-11-19 16:51:42.784005] /home/vagrant/spdk_repo/spdk/lib/vhost/rte_vhost_user.c:1798:vhost_user_dev_init: *ERROR*: Resulting socket path for controller xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx[2024-11-19 16:51:42.784832] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 133:vhost_dev_register: *ERROR*: vhost controller vdev_name_0 already exists.
00:07:50.164  passed
00:07:50.164    Test: session_find_by_vid_test ...passed
00:07:50.164    Test: remove_controller_test ...[2024-11-19 16:51:42.786489] /home/vagrant/spdk_repo/spdk/lib/vhost/rte_vhost_user.c:1883:vhost_user_dev_unregister: *ERROR*: Controller vdev_name_0 has still valid connection.
00:07:50.164  passed
00:07:50.164    Test: vq_avail_ring_get_test ...passed
00:07:50.164    Test: vq_packed_ring_test ...passed
00:07:50.164    Test: vhost_blk_construct_test ...passed
00:07:50.164  
00:07:50.164  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:50.164                suites      1      1    n/a      0        0
00:07:50.164                 tests      7      7      7      0        0
00:07:50.164               asserts    145    145    145      0      n/a
00:07:50.164  
00:07:50.164  Elapsed time =    0.011 seconds
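
Editor's note: most of the *ERROR* lines in create_controller_test come from cpumask validation — a controller may only run on cores inside the app's core mask (0xf here), so 0xf0 and 0xff are rejected, as are an empty controller name and an over-long socket path. The subset check reduces to a couple of bit operations (hypothetical helper; the real logic lives in lib/vhost/vhost.c):

    #include <stdbool.h>
    #include <stdint.h>

    /* Valid iff the cpumask is non-empty and a subset of the core mask.
     * Example: cpumask 0xf0 vs core mask 0xf -> stray bits -> invalid. */
    static bool vhost_cpumask_is_valid(uint64_t cpumask, uint64_t core_mask)
    {
        return cpumask != 0 && (cpumask & ~core_mask) == 0;
    }
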
00:07:50.165  
00:07:50.165  real	0m0.053s
00:07:50.165  user	0m0.032s
00:07:50.165  sys	0m0.020s
00:07:50.165   16:51:42	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:07:50.165   16:51:42	-- common/autotest_common.sh@10 -- # set +x
00:07:50.165  ************************************
00:07:50.165  END TEST unittest_vhost
00:07:50.165  ************************************
00:07:50.165   16:51:42	-- unit/unittest.sh@261 -- # run_test unittest_dma /home/vagrant/spdk_repo/spdk/test/unit/lib/dma/dma.c/dma_ut
00:07:50.165   16:51:42	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:07:50.165   16:51:42	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:07:50.165   16:51:42	-- common/autotest_common.sh@10 -- # set +x
00:07:50.165  ************************************
00:07:50.165  START TEST unittest_dma
00:07:50.165  ************************************
00:07:50.165   16:51:42	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/dma/dma.c/dma_ut
00:07:50.165  
00:07:50.165  
00:07:50.165       CUnit - A unit testing framework for C - Version 2.1-3
00:07:50.165       http://cunit.sourceforge.net/
00:07:50.165  
00:07:50.165  
00:07:50.165  Suite: dma_suite
00:07:50.165    Test: test_dma ...[2024-11-19 16:51:42.883783] /home/vagrant/spdk_repo/spdk/lib/dma/dma.c:  37:spdk_memory_domain_create: *ERROR*: Context size can't be 0
00:07:50.165  passed
00:07:50.165  
00:07:50.165  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:50.165                suites      1      1    n/a      0        0
00:07:50.165                 tests      1      1      1      0        0
00:07:50.165               asserts     50     50     50      0      n/a
00:07:50.165  
00:07:50.165  Elapsed time =    0.000 seconds
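
Editor's note: the single *ERROR* line in test_dma is a guard in spdk_memory_domain_create — a domain that promises per-translation user context must not be created with a zero context size. The shape of that guard is trivial (sketch with an illustrative name and signature, not the full SPDK API):

    #include <errno.h>
    #include <stddef.h>

    /* Reject a zero-sized context up front, as the log line above shows. */
    static int memory_domain_ctx_size_check(size_t user_ctx_size)
    {
        if (user_ctx_size == 0)
            return -EINVAL; /* "Context size can't be 0" */
        return 0;
    }
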
00:07:50.165  
00:07:50.165  real	0m0.034s
00:07:50.165  user	0m0.019s
00:07:50.165  sys	0m0.015s
00:07:50.165   16:51:42	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:07:50.165   16:51:42	-- common/autotest_common.sh@10 -- # set +x
00:07:50.165  ************************************
00:07:50.165  END TEST unittest_dma
00:07:50.165  ************************************
00:07:50.165   16:51:42	-- unit/unittest.sh@263 -- # run_test unittest_init unittest_init
00:07:50.165   16:51:42	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:07:50.165   16:51:42	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:07:50.165   16:51:42	-- common/autotest_common.sh@10 -- # set +x
00:07:50.165  ************************************
00:07:50.165  START TEST unittest_init
00:07:50.165  ************************************
00:07:50.165   16:51:42	-- common/autotest_common.sh@1114 -- # unittest_init
00:07:50.165   16:51:42	-- unit/unittest.sh@148 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/init/subsystem.c/subsystem_ut
00:07:50.165  
00:07:50.165  
00:07:50.165       CUnit - A unit testing framework for C - Version 2.1-3
00:07:50.165       http://cunit.sourceforge.net/
00:07:50.165  
00:07:50.165  
00:07:50.165  Suite: subsystem_suite
00:07:50.165    Test: subsystem_sort_test_depends_on_single ...passed
00:07:50.165    Test: subsystem_sort_test_depends_on_multiple ...passed
00:07:50.165    Test: subsystem_sort_test_missing_dependency ...[2024-11-19 16:51:42.974113] /home/vagrant/spdk_repo/spdk/lib/init/subsystem.c: 190:spdk_subsystem_init: *ERROR*: subsystem A dependency B is missing
00:07:50.165  [2024-11-19 16:51:42.974475] /home/vagrant/spdk_repo/spdk/lib/init/subsystem.c: 185:spdk_subsystem_init: *ERROR*: subsystem C is missing
00:07:50.165  passed
00:07:50.165  
00:07:50.165  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:50.165                suites      1      1    n/a      0        0
00:07:50.165                 tests      3      3      3      0        0
00:07:50.165               asserts     20     20     20      0      n/a
00:07:50.165  
00:07:50.165  Elapsed time =    0.001 seconds
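
Editor's note: subsystem_ut checks that init orders subsystems by their depends-on relations and fails fast when a dependency is absent, which is what the two *ERROR* lines verify. The missing-dependency check amounts to a name lookup before the topological sort (illustrative shape, not lib/init/subsystem.c itself):

    #include <stdbool.h>
    #include <string.h>

    struct subsystem {
        const char *name;
        const char *depends_on; /* NULL when the subsystem is a root */
    };

    /* True iff subsystem i's dependency (if any) exists in the list;
     * on failure init logs e.g. "subsystem A dependency B is missing". */
    static bool dependency_present(const struct subsystem *ss, int n, int i)
    {
        if (ss[i].depends_on == NULL)
            return true;
        for (int j = 0; j < n; j++)
            if (strcmp(ss[j].name, ss[i].depends_on) == 0)
                return true;
        return false;
    }
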
00:07:50.165  
00:07:50.165  real	0m0.037s
00:07:50.165  user	0m0.017s
00:07:50.165  sys	0m0.021s
00:07:50.165   16:51:42	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:07:50.165   16:51:42	-- common/autotest_common.sh@10 -- # set +x
00:07:50.165  ************************************
00:07:50.165  END TEST unittest_init
00:07:50.165  ************************************
00:07:50.422   16:51:43	-- unit/unittest.sh@265 -- # [[ y == y ]]
00:07:50.422    16:51:43	-- unit/unittest.sh@266 -- # hostname
00:07:50.422   16:51:43	-- unit/unittest.sh@266 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -d . -c --no-external -t ubuntu2204-cloud-1711172311-2200 -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_test.info
00:07:50.422  geninfo: WARNING: invalid characters removed from testname!
00:08:16.961   16:52:08	-- unit/unittest.sh@267 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_total.info
00:08:20.250   16:52:12	-- unit/unittest.sh@268 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_total.info -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info
00:08:22.784   16:52:15	-- unit/unittest.sh@269 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/app/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info
00:08:25.340   16:52:17	-- unit/unittest.sh@270 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info
00:08:27.870   16:52:20	-- unit/unittest.sh@271 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/examples/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info
00:08:31.150   16:52:23	-- unit/unittest.sh@272 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/test/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info
00:08:33.048   16:52:25	-- unit/unittest.sh@273 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_base.info /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_test.info
00:08:33.049   16:52:25	-- unit/unittest.sh@274 -- # genhtml /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info --output-directory /home/vagrant/spdk_repo/spdk/../output/ut_coverage
00:08:33.983  Reading data file /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info
00:08:33.983  Found 309 entries.
00:08:33.983  Found common filename prefix "/home/vagrant/spdk_repo/spdk"
00:08:33.983  Writing .css and .png files.
00:08:33.983  Generating output.
00:08:33.983  Processing file include/linux/virtio_ring.h
00:08:33.983  Processing file include/spdk/nvme_spec.h
00:08:33.983  Processing file include/spdk/bdev_module.h
00:08:33.983  Processing file include/spdk/base64.h
00:08:33.983  Processing file include/spdk/util.h
00:08:33.983  Processing file include/spdk/thread.h
00:08:33.983  Processing file include/spdk/mmio.h
00:08:33.983  Processing file include/spdk/histogram_data.h
00:08:33.983  Processing file include/spdk/nvmf_transport.h
00:08:33.983  Processing file include/spdk/trace.h
00:08:33.983  Processing file include/spdk/endian.h
00:08:33.983  Processing file include/spdk/nvme.h
00:08:34.252  Processing file include/spdk_internal/sgl.h
00:08:34.252  Processing file include/spdk_internal/sock.h
00:08:34.252  Processing file include/spdk_internal/nvme_tcp.h
00:08:34.252  Processing file include/spdk_internal/virtio.h
00:08:34.252  Processing file include/spdk_internal/utf.h
00:08:34.252  Processing file include/spdk_internal/rdma.h
00:08:34.252  Processing file lib/accel/accel_sw.c
00:08:34.252  Processing file lib/accel/accel_rpc.c
00:08:34.252  Processing file lib/accel/accel.c
00:08:34.526  Processing file lib/bdev/part.c
00:08:34.526  Processing file lib/bdev/bdev.c
00:08:34.526  Processing file lib/bdev/bdev_zone.c
00:08:34.526  Processing file lib/bdev/bdev_rpc.c
00:08:34.526  Processing file lib/bdev/scsi_nvme.c
00:08:35.093  Processing file lib/blob/blob_bs_dev.c
00:08:35.093  Processing file lib/blob/blobstore.h
00:08:35.093  Processing file lib/blob/zeroes.c
00:08:35.093  Processing file lib/blob/blobstore.c
00:08:35.093  Processing file lib/blob/request.c
00:08:35.093  Processing file lib/blobfs/tree.c
00:08:35.093  Processing file lib/blobfs/blobfs.c
00:08:35.093  Processing file lib/conf/conf.c
00:08:35.093  Processing file lib/dma/dma.c
00:08:35.658  Processing file lib/env_dpdk/memory.c
00:08:35.658  Processing file lib/env_dpdk/pci_dpdk_2207.c
00:08:35.658  Processing file lib/env_dpdk/init.c
00:08:35.658  Processing file lib/env_dpdk/pci_idxd.c
00:08:35.658  Processing file lib/env_dpdk/pci.c
00:08:35.658  Processing file lib/env_dpdk/pci_virtio.c
00:08:35.658  Processing file lib/env_dpdk/pci_vmd.c
00:08:35.658  Processing file lib/env_dpdk/env.c
00:08:35.658  Processing file lib/env_dpdk/threads.c
00:08:35.659  Processing file lib/env_dpdk/pci_dpdk_2211.c
00:08:35.659  Processing file lib/env_dpdk/pci_event.c
00:08:35.659  Processing file lib/env_dpdk/pci_ioat.c
00:08:35.659  Processing file lib/env_dpdk/sigbus_handler.c
00:08:35.659  Processing file lib/env_dpdk/pci_dpdk.c
00:08:35.659  Processing file lib/event/reactor.c
00:08:35.659  Processing file lib/event/app.c
00:08:35.659  Processing file lib/event/log_rpc.c
00:08:35.659  Processing file lib/event/app_rpc.c
00:08:35.659  Processing file lib/event/scheduler_static.c
00:08:36.226  Processing file lib/ftl/ftl_reloc.c
00:08:36.226  Processing file lib/ftl/ftl_writer.c
00:08:36.226  Processing file lib/ftl/ftl_writer.h
00:08:36.226  Processing file lib/ftl/ftl_init.c
00:08:36.226  Processing file lib/ftl/ftl_nv_cache.c
00:08:36.226  Processing file lib/ftl/ftl_l2p_flat.c
00:08:36.226  Processing file lib/ftl/ftl_nv_cache_io.h
00:08:36.226  Processing file lib/ftl/ftl_sb.c
00:08:36.226  Processing file lib/ftl/ftl_rq.c
00:08:36.226  Processing file lib/ftl/ftl_nv_cache.h
00:08:36.226  Processing file lib/ftl/ftl_io.h
00:08:36.226  Processing file lib/ftl/ftl_layout.c
00:08:36.226  Processing file lib/ftl/ftl_band_ops.c
00:08:36.226  Processing file lib/ftl/ftl_io.c
00:08:36.226  Processing file lib/ftl/ftl_p2l.c
00:08:36.226  Processing file lib/ftl/ftl_core.h
00:08:36.226  Processing file lib/ftl/ftl_trace.c
00:08:36.226  Processing file lib/ftl/ftl_core.c
00:08:36.226  Processing file lib/ftl/ftl_l2p.c
00:08:36.226  Processing file lib/ftl/ftl_debug.c
00:08:36.226  Processing file lib/ftl/ftl_band.h
00:08:36.226  Processing file lib/ftl/ftl_l2p_cache.c
00:08:36.226  Processing file lib/ftl/ftl_debug.h
00:08:36.226  Processing file lib/ftl/ftl_band.c
00:08:36.226  Processing file lib/ftl/base/ftl_base_bdev.c
00:08:36.226  Processing file lib/ftl/base/ftl_base_dev.c
00:08:36.484  Processing file lib/ftl/mngt/ftl_mngt_shutdown.c
00:08:36.484  Processing file lib/ftl/mngt/ftl_mngt.c
00:08:36.484  Processing file lib/ftl/mngt/ftl_mngt_upgrade.c
00:08:36.484  Processing file lib/ftl/mngt/ftl_mngt_recovery.c
00:08:36.484  Processing file lib/ftl/mngt/ftl_mngt_startup.c
00:08:36.484  Processing file lib/ftl/mngt/ftl_mngt_ioch.c
00:08:36.484  Processing file lib/ftl/mngt/ftl_mngt_l2p.c
00:08:36.484  Processing file lib/ftl/mngt/ftl_mngt_p2l.c
00:08:36.484  Processing file lib/ftl/mngt/ftl_mngt_band.c
00:08:36.484  Processing file lib/ftl/mngt/ftl_mngt_bdev.c
00:08:36.484  Processing file lib/ftl/mngt/ftl_mngt_misc.c
00:08:36.484  Processing file lib/ftl/mngt/ftl_mngt_md.c
00:08:36.484  Processing file lib/ftl/mngt/ftl_mngt_self_test.c
00:08:36.484  Processing file lib/ftl/nvc/ftl_nvc_bdev_vss.c
00:08:36.484  Processing file lib/ftl/nvc/ftl_nvc_dev.c
00:08:36.741  Processing file lib/ftl/upgrade/ftl_layout_upgrade.c
00:08:36.741  Processing file lib/ftl/upgrade/ftl_sb_v3.c
00:08:36.741  Processing file lib/ftl/upgrade/ftl_sb_v5.c
00:08:36.741  Processing file lib/ftl/upgrade/ftl_sb_upgrade.c
00:08:36.741  Processing file lib/ftl/utils/ftl_mempool.c
00:08:36.741  Processing file lib/ftl/utils/ftl_property.c
00:08:36.741  Processing file lib/ftl/utils/ftl_layout_tracker_bdev.c
00:08:36.741  Processing file lib/ftl/utils/ftl_df.h
00:08:36.741  Processing file lib/ftl/utils/ftl_md.c
00:08:36.741  Processing file lib/ftl/utils/ftl_bitmap.c
00:08:36.741  Processing file lib/ftl/utils/ftl_addr_utils.h
00:08:36.741  Processing file lib/ftl/utils/ftl_conf.c
00:08:36.742  Processing file lib/ftl/utils/ftl_property.h
00:08:37.000  Processing file lib/idxd/idxd_internal.h
00:08:37.000  Processing file lib/idxd/idxd_user.c
00:08:37.000  Processing file lib/idxd/idxd.c
00:08:37.000  Processing file lib/init/rpc.c
00:08:37.000  Processing file lib/init/json_config.c
00:08:37.000  Processing file lib/init/subsystem.c
00:08:37.000  Processing file lib/init/subsystem_rpc.c
00:08:37.000  Processing file lib/ioat/ioat_internal.h
00:08:37.000  Processing file lib/ioat/ioat.c
00:08:37.566  Processing file lib/iscsi/iscsi.h
00:08:37.566  Processing file lib/iscsi/iscsi_rpc.c
00:08:37.566  Processing file lib/iscsi/conn.c
00:08:37.566  Processing file lib/iscsi/param.c
00:08:37.566  Processing file lib/iscsi/init_grp.c
00:08:37.566  Processing file lib/iscsi/iscsi_subsystem.c
00:08:37.566  Processing file lib/iscsi/tgt_node.c
00:08:37.566  Processing file lib/iscsi/task.h
00:08:37.566  Processing file lib/iscsi/portal_grp.c
00:08:37.566  Processing file lib/iscsi/md5.c
00:08:37.566  Processing file lib/iscsi/task.c
00:08:37.566  Processing file lib/iscsi/iscsi.c
00:08:37.566  Processing file lib/json/json_util.c
00:08:37.566  Processing file lib/json/json_write.c
00:08:37.566  Processing file lib/json/json_parse.c
00:08:37.566  Processing file lib/jsonrpc/jsonrpc_server.c
00:08:37.566  Processing file lib/jsonrpc/jsonrpc_server_tcp.c
00:08:37.566  Processing file lib/jsonrpc/jsonrpc_client.c
00:08:37.566  Processing file lib/jsonrpc/jsonrpc_client_tcp.c
00:08:37.566  Processing file lib/log/log.c
00:08:37.566  Processing file lib/log/log_deprecated.c
00:08:37.566  Processing file lib/log/log_flags.c
00:08:37.824  Processing file lib/lvol/lvol.c
00:08:37.824  Processing file lib/nbd/nbd_rpc.c
00:08:37.824  Processing file lib/nbd/nbd.c
00:08:38.083  Processing file lib/notify/notify.c
00:08:38.083  Processing file lib/notify/notify_rpc.c
00:08:38.649  Processing file lib/nvme/nvme_pcie_internal.h
00:08:38.649  Processing file lib/nvme/nvme_zns.c
00:08:38.649  Processing file lib/nvme/nvme_ctrlr_cmd.c
00:08:38.649  Processing file lib/nvme/nvme_ctrlr.c
00:08:38.649  Processing file lib/nvme/nvme_qpair.c
00:08:38.649  Processing file lib/nvme/nvme_discovery.c
00:08:38.649  Processing file lib/nvme/nvme_quirks.c
00:08:38.649  Processing file lib/nvme/nvme_ns_ocssd_cmd.c
00:08:38.649  Processing file lib/nvme/nvme_internal.h
00:08:38.649  Processing file lib/nvme/nvme_cuse.c
00:08:38.649  Processing file lib/nvme/nvme_ns_cmd.c
00:08:38.649  Processing file lib/nvme/nvme_vfio_user.c
00:08:38.649  Processing file lib/nvme/nvme_opal.c
00:08:38.649  Processing file lib/nvme/nvme_poll_group.c
00:08:38.649  Processing file lib/nvme/nvme_rdma.c
00:08:38.649  Processing file lib/nvme/nvme_ctrlr_ocssd_cmd.c
00:08:38.649  Processing file lib/nvme/nvme_pcie.c
00:08:38.649  Processing file lib/nvme/nvme_tcp.c
00:08:38.649  Processing file lib/nvme/nvme_io_msg.c
00:08:38.649  Processing file lib/nvme/nvme_transport.c
00:08:38.649  Processing file lib/nvme/nvme.c
00:08:38.649  Processing file lib/nvme/nvme_pcie_common.c
00:08:38.649  Processing file lib/nvme/nvme_ns.c
00:08:38.649  Processing file lib/nvme/nvme_fabric.c
00:08:38.908  Processing file lib/nvmf/transport.c
00:08:38.908  Processing file lib/nvmf/ctrlr_bdev.c
00:08:38.908  Processing file lib/nvmf/ctrlr.c
00:08:38.908  Processing file lib/nvmf/rdma.c
00:08:38.908  Processing file lib/nvmf/nvmf_rpc.c
00:08:38.908  Processing file lib/nvmf/subsystem.c
00:08:38.908  Processing file lib/nvmf/nvmf.c
00:08:38.908  Processing file lib/nvmf/ctrlr_discovery.c
00:08:38.908  Processing file lib/nvmf/tcp.c
00:08:38.908  Processing file lib/nvmf/nvmf_internal.h
00:08:39.166  Processing file lib/rdma/common.c
00:08:39.166  Processing file lib/rdma/rdma_verbs.c
00:08:39.166  Processing file lib/rpc/rpc.c
00:08:39.424  Processing file lib/scsi/scsi_bdev.c
00:08:39.424  Processing file lib/scsi/scsi_rpc.c
00:08:39.424  Processing file lib/scsi/port.c
00:08:39.424  Processing file lib/scsi/task.c
00:08:39.424  Processing file lib/scsi/scsi.c
00:08:39.424  Processing file lib/scsi/scsi_pr.c
00:08:39.424  Processing file lib/scsi/lun.c
00:08:39.424  Processing file lib/scsi/dev.c
00:08:39.424  Processing file lib/sock/sock.c
00:08:39.424  Processing file lib/sock/sock_rpc.c
00:08:39.424  Processing file lib/thread/iobuf.c
00:08:39.424  Processing file lib/thread/thread.c
00:08:39.683  Processing file lib/trace/trace_flags.c
00:08:39.683  Processing file lib/trace/trace_rpc.c
00:08:39.683  Processing file lib/trace/trace.c
00:08:39.683  Processing file lib/trace_parser/trace.cpp
00:08:39.683  Processing file lib/ut/ut.c
00:08:39.683  Processing file lib/ut_mock/mock.c
00:08:40.249  Processing file lib/util/crc32_ieee.c
00:08:40.249  Processing file lib/util/zipf.c
00:08:40.249  Processing file lib/util/pipe.c
00:08:40.249  Processing file lib/util/base64.c
00:08:40.249  Processing file lib/util/bit_array.c
00:08:40.249  Processing file lib/util/crc32.c
00:08:40.249  Processing file lib/util/hexlify.c
00:08:40.249  Processing file lib/util/crc64.c
00:08:40.249  Processing file lib/util/file.c
00:08:40.249  Processing file lib/util/crc32c.c
00:08:40.249  Processing file lib/util/uuid.c
00:08:40.249  Processing file lib/util/cpuset.c
00:08:40.249  Processing file lib/util/xor.c
00:08:40.249  Processing file lib/util/fd.c
00:08:40.249  Processing file lib/util/math.c
00:08:40.249  Processing file lib/util/fd_group.c
00:08:40.249  Processing file lib/util/dif.c
00:08:40.249  Processing file lib/util/crc16.c
00:08:40.249  Processing file lib/util/strerror_tls.c
00:08:40.249  Processing file lib/util/string.c
00:08:40.249  Processing file lib/util/iov.c
00:08:40.249  Processing file lib/vfio_user/host/vfio_user.c
00:08:40.249  Processing file lib/vfio_user/host/vfio_user_pci.c
00:08:40.249  Processing file lib/vhost/vhost_internal.h
00:08:40.249  Processing file lib/vhost/vhost_blk.c
00:08:40.249  Processing file lib/vhost/vhost_scsi.c
00:08:40.249  Processing file lib/vhost/rte_vhost_user.c
00:08:40.249  Processing file lib/vhost/vhost.c
00:08:40.249  Processing file lib/vhost/vhost_rpc.c
00:08:40.507  Processing file lib/virtio/virtio_vfio_user.c
00:08:40.507  Processing file lib/virtio/virtio.c
00:08:40.507  Processing file lib/virtio/virtio_vhost_user.c
00:08:40.507  Processing file lib/virtio/virtio_pci.c
00:08:40.507  Processing file lib/vmd/led.c
00:08:40.507  Processing file lib/vmd/vmd.c
00:08:40.507  Processing file module/accel/dsa/accel_dsa.c
00:08:40.507  Processing file module/accel/dsa/accel_dsa_rpc.c
00:08:40.766  Processing file module/accel/error/accel_error.c
00:08:40.766  Processing file module/accel/error/accel_error_rpc.c
00:08:40.766  Processing file module/accel/iaa/accel_iaa_rpc.c
00:08:40.766  Processing file module/accel/iaa/accel_iaa.c
00:08:40.766  Processing file module/accel/ioat/accel_ioat_rpc.c
00:08:40.766  Processing file module/accel/ioat/accel_ioat.c
00:08:40.766  Processing file module/bdev/aio/bdev_aio_rpc.c
00:08:40.766  Processing file module/bdev/aio/bdev_aio.c
00:08:41.024  Processing file module/bdev/delay/vbdev_delay.c
00:08:41.024  Processing file module/bdev/delay/vbdev_delay_rpc.c
00:08:41.024  Processing file module/bdev/error/vbdev_error.c
00:08:41.024  Processing file module/bdev/error/vbdev_error_rpc.c
00:08:41.024  Processing file module/bdev/ftl/bdev_ftl.c
00:08:41.024  Processing file module/bdev/ftl/bdev_ftl_rpc.c
00:08:41.283  Processing file module/bdev/gpt/vbdev_gpt.c
00:08:41.283  Processing file module/bdev/gpt/gpt.c
00:08:41.283  Processing file module/bdev/gpt/gpt.h
00:08:41.283  Processing file module/bdev/iscsi/bdev_iscsi.c
00:08:41.283  Processing file module/bdev/iscsi/bdev_iscsi_rpc.c
00:08:41.283  Processing file module/bdev/lvol/vbdev_lvol_rpc.c
00:08:41.283  Processing file module/bdev/lvol/vbdev_lvol.c
00:08:41.540  Processing file module/bdev/malloc/bdev_malloc_rpc.c
00:08:41.540  Processing file module/bdev/malloc/bdev_malloc.c
00:08:41.540  Processing file module/bdev/null/bdev_null_rpc.c
00:08:41.540  Processing file module/bdev/null/bdev_null.c
00:08:41.798  Processing file module/bdev/nvme/vbdev_opal.c
00:08:41.798  Processing file module/bdev/nvme/nvme_rpc.c
00:08:41.798  Processing file module/bdev/nvme/bdev_nvme.c
00:08:41.798  Processing file module/bdev/nvme/bdev_mdns_client.c
00:08:41.798  Processing file module/bdev/nvme/bdev_nvme_rpc.c
00:08:41.798  Processing file module/bdev/nvme/vbdev_opal_rpc.c
00:08:41.798  Processing file module/bdev/nvme/bdev_nvme_cuse_rpc.c
00:08:41.798  Processing file module/bdev/passthru/vbdev_passthru_rpc.c
00:08:41.798  Processing file module/bdev/passthru/vbdev_passthru.c
00:08:42.057  Processing file module/bdev/raid/concat.c
00:08:42.057  Processing file module/bdev/raid/bdev_raid_sb.c
00:08:42.057  Processing file module/bdev/raid/raid0.c
00:08:42.057  Processing file module/bdev/raid/bdev_raid_rpc.c
00:08:42.057  Processing file module/bdev/raid/raid1.c
00:08:42.057  Processing file module/bdev/raid/raid5f.c
00:08:42.057  Processing file module/bdev/raid/bdev_raid.c
00:08:42.057  Processing file module/bdev/raid/bdev_raid.h
00:08:42.314  Processing file module/bdev/split/vbdev_split_rpc.c
00:08:42.314  Processing file module/bdev/split/vbdev_split.c
00:08:42.314  Processing file module/bdev/virtio/bdev_virtio_blk.c
00:08:42.314  Processing file module/bdev/virtio/bdev_virtio_rpc.c
00:08:42.314  Processing file module/bdev/virtio/bdev_virtio_scsi.c
00:08:42.314  Processing file module/bdev/zone_block/vbdev_zone_block.c
00:08:42.314  Processing file module/bdev/zone_block/vbdev_zone_block_rpc.c
00:08:42.573  Processing file module/blob/bdev/blob_bdev.c
00:08:42.573  Processing file module/blobfs/bdev/blobfs_bdev_rpc.c
00:08:42.573  Processing file module/blobfs/bdev/blobfs_bdev.c
00:08:42.573  Processing file module/env_dpdk/env_dpdk_rpc.c
00:08:42.573  Processing file module/event/subsystems/accel/accel.c
00:08:42.832  Processing file module/event/subsystems/bdev/bdev.c
00:08:42.832  Processing file module/event/subsystems/iobuf/iobuf_rpc.c
00:08:42.832  Processing file module/event/subsystems/iobuf/iobuf.c
00:08:42.832  Processing file module/event/subsystems/iscsi/iscsi.c
00:08:42.832  Processing file module/event/subsystems/nbd/nbd.c
00:08:43.090  Processing file module/event/subsystems/nvmf/nvmf_tgt.c
00:08:43.090  Processing file module/event/subsystems/nvmf/nvmf_rpc.c
00:08:43.090  Processing file module/event/subsystems/scheduler/scheduler.c
00:08:43.090  Processing file module/event/subsystems/scsi/scsi.c
00:08:43.090  Processing file module/event/subsystems/sock/sock.c
00:08:43.090  Processing file module/event/subsystems/vhost_blk/vhost_blk.c
00:08:43.348  Processing file module/event/subsystems/vhost_scsi/vhost_scsi.c
00:08:43.348  Processing file module/event/subsystems/vmd/vmd.c
00:08:43.348  Processing file module/event/subsystems/vmd/vmd_rpc.c
00:08:43.348  Processing file module/scheduler/dpdk_governor/dpdk_governor.c
00:08:43.348  Processing file module/scheduler/dynamic/scheduler_dynamic.c
00:08:43.605  Processing file module/scheduler/gscheduler/gscheduler.c
00:08:43.605  Processing file module/sock/sock_kernel.h
00:08:43.605  Processing file module/sock/posix/posix.c
00:08:43.605  Writing directory view page.
00:08:43.605  Overall coverage rate:
00:08:43.605    lines......: 39.1% (39266 of 100435 lines)
00:08:43.605    functions..: 42.8% (3587 of 8384 functions)
00:08:43.605  
00:08:43.605  
00:08:43.605  =====================
00:08:43.605  All unit tests passed
00:08:43.605  =====================
00:08:43.605  WARN: lcov not installed or SPDK built without coverage!
00:08:43.605   16:52:36	-- unit/unittest.sh@277 -- # set +x
00:08:43.605  
00:08:43.605  
00:08:43.605  
00:08:43.605  real	3m7.004s
00:08:43.605  user	2m39.039s
00:08:43.605  sys	0m19.574s
00:08:43.606   16:52:36	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:08:43.606  ************************************
00:08:43.606  END TEST unittest
00:08:43.606   16:52:36	-- common/autotest_common.sh@10 -- # set +x
00:08:43.606  ************************************
00:08:43.864   16:52:36	-- spdk/autotest.sh@152 -- # '[' 1 -eq 1 ']'
00:08:43.864   16:52:36	-- spdk/autotest.sh@153 -- # [[ 0 -eq 1 ]]
00:08:43.864   16:52:36	-- spdk/autotest.sh@153 -- # [[ 0 -eq 1 ]]
00:08:43.864   16:52:36	-- spdk/autotest.sh@160 -- # timing_enter lib
00:08:43.864   16:52:36	-- common/autotest_common.sh@722 -- # xtrace_disable
00:08:43.864   16:52:36	-- common/autotest_common.sh@10 -- # set +x
00:08:43.864   16:52:36	-- spdk/autotest.sh@162 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh
00:08:43.864   16:52:36	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:08:43.864   16:52:36	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:08:43.864   16:52:36	-- common/autotest_common.sh@10 -- # set +x
00:08:43.864  ************************************
00:08:43.864  START TEST env
00:08:43.864  ************************************
00:08:43.864   16:52:36	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh
00:08:43.864  * Looking for test storage...
00:08:43.864  * Found test storage at /home/vagrant/spdk_repo/spdk/test/env
00:08:43.864    16:52:36	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:08:43.864     16:52:36	-- common/autotest_common.sh@1690 -- # lcov --version
00:08:43.864     16:52:36	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:08:43.864    16:52:36	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:08:43.864    16:52:36	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:08:43.864    16:52:36	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:08:43.864    16:52:36	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:08:43.864    16:52:36	-- scripts/common.sh@335 -- # IFS=.-:
00:08:43.864    16:52:36	-- scripts/common.sh@335 -- # read -ra ver1
00:08:43.864    16:52:36	-- scripts/common.sh@336 -- # IFS=.-:
00:08:43.865    16:52:36	-- scripts/common.sh@336 -- # read -ra ver2
00:08:43.865    16:52:36	-- scripts/common.sh@337 -- # local 'op=<'
00:08:43.865    16:52:36	-- scripts/common.sh@339 -- # ver1_l=2
00:08:43.865    16:52:36	-- scripts/common.sh@340 -- # ver2_l=1
00:08:43.865    16:52:36	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:08:43.865    16:52:36	-- scripts/common.sh@343 -- # case "$op" in
00:08:43.865    16:52:36	-- scripts/common.sh@344 -- # : 1
00:08:43.865    16:52:36	-- scripts/common.sh@363 -- # (( v = 0 ))
00:08:43.865    16:52:36	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:08:43.865     16:52:36	-- scripts/common.sh@364 -- # decimal 1
00:08:43.865     16:52:36	-- scripts/common.sh@352 -- # local d=1
00:08:43.865     16:52:36	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:08:43.865     16:52:36	-- scripts/common.sh@354 -- # echo 1
00:08:44.124    16:52:36	-- scripts/common.sh@364 -- # ver1[v]=1
00:08:44.124     16:52:36	-- scripts/common.sh@365 -- # decimal 2
00:08:44.124     16:52:36	-- scripts/common.sh@352 -- # local d=2
00:08:44.124     16:52:36	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:08:44.124     16:52:36	-- scripts/common.sh@354 -- # echo 2
00:08:44.124    16:52:36	-- scripts/common.sh@365 -- # ver2[v]=2
00:08:44.124    16:52:36	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:08:44.124    16:52:36	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:08:44.124    16:52:36	-- scripts/common.sh@367 -- # return 0
00:08:44.124    16:52:36	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:08:44.124    16:52:36	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:08:44.124  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:44.124  		--rc genhtml_branch_coverage=1
00:08:44.124  		--rc genhtml_function_coverage=1
00:08:44.124  		--rc genhtml_legend=1
00:08:44.124  		--rc geninfo_all_blocks=1
00:08:44.124  		--rc geninfo_unexecuted_blocks=1
00:08:44.124  		
00:08:44.124  		'
00:08:44.124    16:52:36	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:08:44.124  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:44.124  		--rc genhtml_branch_coverage=1
00:08:44.124  		--rc genhtml_function_coverage=1
00:08:44.124  		--rc genhtml_legend=1
00:08:44.124  		--rc geninfo_all_blocks=1
00:08:44.124  		--rc geninfo_unexecuted_blocks=1
00:08:44.124  		
00:08:44.124  		'
00:08:44.124    16:52:36	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:08:44.124  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:44.124  		--rc genhtml_branch_coverage=1
00:08:44.124  		--rc genhtml_function_coverage=1
00:08:44.124  		--rc genhtml_legend=1
00:08:44.124  		--rc geninfo_all_blocks=1
00:08:44.124  		--rc geninfo_unexecuted_blocks=1
00:08:44.124  		
00:08:44.124  		'
00:08:44.124    16:52:36	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:08:44.124  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:44.124  		--rc genhtml_branch_coverage=1
00:08:44.124  		--rc genhtml_function_coverage=1
00:08:44.124  		--rc genhtml_legend=1
00:08:44.124  		--rc geninfo_all_blocks=1
00:08:44.124  		--rc geninfo_unexecuted_blocks=1
00:08:44.124  		
00:08:44.124  		'
00:08:44.124   16:52:36	-- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut
00:08:44.124   16:52:36	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:08:44.124   16:52:36	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:08:44.124   16:52:36	-- common/autotest_common.sh@10 -- # set +x
00:08:44.124  ************************************
00:08:44.124  START TEST env_memory
00:08:44.124  ************************************
00:08:44.124   16:52:36	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut
00:08:44.124  
00:08:44.124  
00:08:44.124       CUnit - A unit testing framework for C - Version 2.1-3
00:08:44.124       http://cunit.sourceforge.net/
00:08:44.124  
00:08:44.124  
00:08:44.124  Suite: memory
00:08:44.124    Test: alloc and free memory map ...[2024-11-19 16:52:36.813593] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed
00:08:44.124  passed
00:08:44.124    Test: mem map translation ...[2024-11-19 16:52:36.868164] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234
00:08:44.124  [2024-11-19 16:52:36.868298] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152
00:08:44.124  [2024-11-19 16:52:36.868426] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656
00:08:44.124  [2024-11-19 16:52:36.868530] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map
00:08:44.124  passed
00:08:44.124    Test: mem map registration ...[2024-11-19 16:52:36.958298] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234
00:08:44.124  [2024-11-19 16:52:36.958461] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152
00:08:44.383  passed
00:08:44.383    Test: mem map adjacent registrations ...passed
00:08:44.383  
00:08:44.383  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:08:44.383                suites      1      1    n/a      0        0
00:08:44.383                 tests      4      4      4      0        0
00:08:44.383               asserts    152    152    152      0      n/a
00:08:44.383  
00:08:44.383  Elapsed time =    0.314 seconds
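
Editor's note: memory_ut's translation errors above document the contract of the env_dpdk mem map — translations are managed at 2 MB granularity, so a vaddr or len of 1234 is rejected, as is any address at or beyond the 2^48 usermode boundary (the 281474976710656 in the log). A sketch of that parameter check (mirroring the contract, not lib/env_dpdk/memory.c itself):

    #include <stdint.h>

    #define SHIFT_2MB 21
    #define MASK_2MB  ((1ULL << SHIFT_2MB) - 1)
    #define VA_BITS   48

    /* 0 = reject (unaligned or out of range), 1 = acceptable. */
    static int translation_params_ok(uint64_t vaddr, uint64_t len)
    {
        if (len == 0 || (vaddr & MASK_2MB) || (len & MASK_2MB))
            return 0;
        if (vaddr >= (1ULL << VA_BITS))
            return 0; /* "invalid usermode virtual address" */
        return 1;
    }
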
00:08:44.383  
00:08:44.383  real	0m0.365s
00:08:44.383  user	0m0.332s
00:08:44.383  sys	0m0.032s
00:08:44.383   16:52:37	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:08:44.383   16:52:37	-- common/autotest_common.sh@10 -- # set +x
00:08:44.383  ************************************
00:08:44.383  END TEST env_memory
00:08:44.383  ************************************
00:08:44.383   16:52:37	-- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys
00:08:44.383   16:52:37	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:08:44.383   16:52:37	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:08:44.383   16:52:37	-- common/autotest_common.sh@10 -- # set +x
00:08:44.383  ************************************
00:08:44.383  START TEST env_vtophys
00:08:44.383  ************************************
00:08:44.383   16:52:37	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys
00:08:44.383  EAL: lib.eal log level changed from notice to debug
00:08:44.383  EAL: Detected lcore 0 as core 0 on socket 0
00:08:44.383  EAL: Detected lcore 1 as core 0 on socket 0
00:08:44.383  EAL: Detected lcore 2 as core 0 on socket 0
00:08:44.383  EAL: Detected lcore 3 as core 0 on socket 0
00:08:44.383  EAL: Detected lcore 4 as core 0 on socket 0
00:08:44.383  EAL: Detected lcore 5 as core 0 on socket 0
00:08:44.383  EAL: Detected lcore 6 as core 0 on socket 0
00:08:44.383  EAL: Detected lcore 7 as core 0 on socket 0
00:08:44.383  EAL: Detected lcore 8 as core 0 on socket 0
00:08:44.383  EAL: Detected lcore 9 as core 0 on socket 0
00:08:44.383  EAL: Maximum logical cores by configuration: 128
00:08:44.383  EAL: Detected CPU lcores: 10
00:08:44.383  EAL: Detected NUMA nodes: 1
00:08:44.383  EAL: Checking presence of .so 'librte_eal.so.23.0'
00:08:44.383  EAL: Checking presence of .so 'librte_eal.so.23'
00:08:44.383  EAL: Checking presence of .so 'librte_eal.so'
00:08:44.383  EAL: Detected static linkage of DPDK
00:08:44.641  EAL: No shared files mode enabled, IPC will be disabled
00:08:44.641  EAL: Selected IOVA mode 'PA'
00:08:44.641  EAL: Probing VFIO support...
00:08:44.641  EAL: IOMMU type 1 (Type 1) is supported
00:08:44.641  EAL: IOMMU type 7 (sPAPR) is not supported
00:08:44.641  EAL: IOMMU type 8 (No-IOMMU) is not supported
00:08:44.641  EAL: VFIO support initialized
00:08:44.641  EAL: Ask a virtual area of 0x2e000 bytes
00:08:44.641  EAL: Virtual area found at 0x200000000000 (size = 0x2e000)
00:08:44.641  EAL: Setting up physically contiguous memory...
00:08:44.641  EAL: Setting maximum number of open files to 1048576
00:08:44.641  EAL: Detected memory type: socket_id:0 hugepage_sz:2097152
00:08:44.641  EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152
00:08:44.641  EAL: Ask a virtual area of 0x61000 bytes
00:08:44.641  EAL: Virtual area found at 0x20000002e000 (size = 0x61000)
00:08:44.641  EAL: Memseg list allocated at socket 0, page size 0x800kB
00:08:44.641  EAL: Ask a virtual area of 0x400000000 bytes
00:08:44.641  EAL: Virtual area found at 0x200000200000 (size = 0x400000000)
00:08:44.641  EAL: VA reserved for memseg list at 0x200000200000, size 400000000
00:08:44.641  EAL: Ask a virtual area of 0x61000 bytes
00:08:44.641  EAL: Virtual area found at 0x200400200000 (size = 0x61000)
00:08:44.641  EAL: Memseg list allocated at socket 0, page size 0x800kB
00:08:44.641  EAL: Ask a virtual area of 0x400000000 bytes
00:08:44.641  EAL: Virtual area found at 0x200400400000 (size = 0x400000000)
00:08:44.641  EAL: VA reserved for memseg list at 0x200400400000, size 400000000
00:08:44.641  EAL: Ask a virtual area of 0x61000 bytes
00:08:44.641  EAL: Virtual area found at 0x200800400000 (size = 0x61000)
00:08:44.641  EAL: Memseg list allocated at socket 0, page size 0x800kB
00:08:44.641  EAL: Ask a virtual area of 0x400000000 bytes
00:08:44.641  EAL: Virtual area found at 0x200800600000 (size = 0x400000000)
00:08:44.641  EAL: VA reserved for memseg list at 0x200800600000, size 400000000
00:08:44.641  EAL: Ask a virtual area of 0x61000 bytes
00:08:44.641  EAL: Virtual area found at 0x200c00600000 (size = 0x61000)
00:08:44.641  EAL: Memseg list allocated at socket 0, page size 0x800kB
00:08:44.641  EAL: Ask a virtual area of 0x400000000 bytes
00:08:44.641  EAL: Virtual area found at 0x200c00800000 (size = 0x400000000)
00:08:44.641  EAL: VA reserved for memseg list at 0x200c00800000, size 400000000
00:08:44.641  EAL: Hugepages will be freed exactly as allocated.
00:08:44.641  EAL: No shared files mode enabled, IPC is disabled
00:08:44.641  EAL: No shared files mode enabled, IPC is disabled
00:08:44.641  EAL: TSC frequency is ~2100000 KHz
00:08:44.641  EAL: Main lcore 0 is ready (tid=7fa5b9367a80;cpuset=[0])
00:08:44.641  EAL: Trying to obtain current memory policy.
00:08:44.641  EAL: Setting policy MPOL_PREFERRED for socket 0
00:08:44.641  EAL: Restoring previous memory policy: 0
00:08:44.641  EAL: request: mp_malloc_sync
00:08:44.641  EAL: No shared files mode enabled, IPC is disabled
00:08:44.642  EAL: Heap on socket 0 was expanded by 2MB
00:08:44.642  EAL: No shared files mode enabled, IPC is disabled
00:08:44.642  EAL: Mem event callback 'spdk:(nil)' registered
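
Editor's note: at this point EAL has reserved hugepage-backed memseg lists; the components_suite that follows (vtophys_malloc_test) checks that virtual addresses inside those regions resolve to physical addresses. On Linux without an IOMMU, such a lookup boils down to reading the page-frame number from /proc/self/pagemap, sketched below (SPDK itself resolves addresses through its registered mem maps rather than this file):

    #include <fcntl.h>
    #include <stdint.h>
    #include <unistd.h>

    /* Translate a virtual address via /proc/self/pagemap. Returns
     * UINT64_MAX on failure (reading PFNs needs CAP_SYS_ADMIN on
     * recent kernels). */
    static uint64_t vtophys_pagemap(const void *vaddr)
    {
        long pgsz = sysconf(_SC_PAGESIZE);
        uint64_t entry;
        int fd = open("/proc/self/pagemap", O_RDONLY);
        if (fd < 0)
            return UINT64_MAX;
        off_t off = ((uintptr_t)vaddr / (uintptr_t)pgsz) * sizeof(entry);
        ssize_t rc = pread(fd, &entry, sizeof(entry), off);
        close(fd);
        if (rc != sizeof(entry) || !(entry & (1ULL << 63))) /* bit 63: present */
            return UINT64_MAX;
        /* bits 0..54 hold the page frame number */
        return (entry & ((1ULL << 55) - 1)) * (uint64_t)pgsz
               + ((uintptr_t)vaddr % (uintptr_t)pgsz);
    }
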
00:08:44.642  
00:08:44.642  
00:08:44.642       CUnit - A unit testing framework for C - Version 2.1-3
00:08:44.642       http://cunit.sourceforge.net/
00:08:44.642  
00:08:44.642  
00:08:44.642  Suite: components_suite
00:08:45.210    Test: vtophys_malloc_test ...passed
00:08:45.210    Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy.
00:08:45.210  EAL: Setting policy MPOL_PREFERRED for socket 0
00:08:45.210  EAL: Restoring previous memory policy: 0
00:08:45.210  EAL: Calling mem event callback 'spdk:(nil)'
00:08:45.210  EAL: request: mp_malloc_sync
00:08:45.210  EAL: No shared files mode enabled, IPC is disabled
00:08:45.210  EAL: Heap on socket 0 was expanded by 4MB
00:08:45.210  EAL: Calling mem event callback 'spdk:(nil)'
00:08:45.210  EAL: request: mp_malloc_sync
00:08:45.210  EAL: No shared files mode enabled, IPC is disabled
00:08:45.210  EAL: Heap on socket 0 was shrunk by 4MB
00:08:45.210  EAL: Trying to obtain current memory policy.
00:08:45.210  EAL: Setting policy MPOL_PREFERRED for socket 0
00:08:45.210  EAL: Restoring previous memory policy: 0
00:08:45.210  EAL: Calling mem event callback 'spdk:(nil)'
00:08:45.210  EAL: request: mp_malloc_sync
00:08:45.210  EAL: No shared files mode enabled, IPC is disabled
00:08:45.210  EAL: Heap on socket 0 was expanded by 6MB
00:08:45.210  EAL: Calling mem event callback 'spdk:(nil)'
00:08:45.210  EAL: request: mp_malloc_sync
00:08:45.210  EAL: No shared files mode enabled, IPC is disabled
00:08:45.210  EAL: Heap on socket 0 was shrunk by 6MB
00:08:45.210  EAL: Trying to obtain current memory policy.
00:08:45.210  EAL: Setting policy MPOL_PREFERRED for socket 0
00:08:45.210  EAL: Restoring previous memory policy: 0
00:08:45.210  EAL: Calling mem event callback 'spdk:(nil)'
00:08:45.210  EAL: request: mp_malloc_sync
00:08:45.210  EAL: No shared files mode enabled, IPC is disabled
00:08:45.210  EAL: Heap on socket 0 was expanded by 10MB
00:08:45.210  EAL: Calling mem event callback 'spdk:(nil)'
00:08:45.210  EAL: request: mp_malloc_sync
00:08:45.210  EAL: No shared files mode enabled, IPC is disabled
00:08:45.210  EAL: Heap on socket 0 was shrunk by 10MB
00:08:45.210  EAL: Trying to obtain current memory policy.
00:08:45.210  EAL: Setting policy MPOL_PREFERRED for socket 0
00:08:45.210  EAL: Restoring previous memory policy: 0
00:08:45.211  EAL: Calling mem event callback 'spdk:(nil)'
00:08:45.211  EAL: request: mp_malloc_sync
00:08:45.211  EAL: No shared files mode enabled, IPC is disabled
00:08:45.211  EAL: Heap on socket 0 was expanded by 18MB
00:08:45.211  EAL: Calling mem event callback 'spdk:(nil)'
00:08:45.211  EAL: request: mp_malloc_sync
00:08:45.211  EAL: No shared files mode enabled, IPC is disabled
00:08:45.211  EAL: Heap on socket 0 was shrunk by 18MB
00:08:45.211  EAL: Trying to obtain current memory policy.
00:08:45.211  EAL: Setting policy MPOL_PREFERRED for socket 0
00:08:45.211  EAL: Restoring previous memory policy: 0
00:08:45.211  EAL: Calling mem event callback 'spdk:(nil)'
00:08:45.211  EAL: request: mp_malloc_sync
00:08:45.211  EAL: No shared files mode enabled, IPC is disabled
00:08:45.211  EAL: Heap on socket 0 was expanded by 34MB
00:08:45.211  EAL: Calling mem event callback 'spdk:(nil)'
00:08:45.211  EAL: request: mp_malloc_sync
00:08:45.211  EAL: No shared files mode enabled, IPC is disabled
00:08:45.211  EAL: Heap on socket 0 was shrunk by 34MB
00:08:45.211  EAL: Trying to obtain current memory policy.
00:08:45.211  EAL: Setting policy MPOL_PREFERRED for socket 0
00:08:45.211  EAL: Restoring previous memory policy: 0
00:08:45.211  EAL: Calling mem event callback 'spdk:(nil)'
00:08:45.211  EAL: request: mp_malloc_sync
00:08:45.211  EAL: No shared files mode enabled, IPC is disabled
00:08:45.211  EAL: Heap on socket 0 was expanded by 66MB
00:08:45.211  EAL: Calling mem event callback 'spdk:(nil)'
00:08:45.211  EAL: request: mp_malloc_sync
00:08:45.211  EAL: No shared files mode enabled, IPC is disabled
00:08:45.211  EAL: Heap on socket 0 was shrunk by 66MB
00:08:45.211  EAL: Trying to obtain current memory policy.
00:08:45.211  EAL: Setting policy MPOL_PREFERRED for socket 0
00:08:45.211  EAL: Restoring previous memory policy: 0
00:08:45.211  EAL: Calling mem event callback 'spdk:(nil)'
00:08:45.211  EAL: request: mp_malloc_sync
00:08:45.211  EAL: No shared files mode enabled, IPC is disabled
00:08:45.211  EAL: Heap on socket 0 was expanded by 130MB
00:08:45.211  EAL: Calling mem event callback 'spdk:(nil)'
00:08:45.211  EAL: request: mp_malloc_sync
00:08:45.211  EAL: No shared files mode enabled, IPC is disabled
00:08:45.211  EAL: Heap on socket 0 was shrunk by 130MB
00:08:45.211  EAL: Trying to obtain current memory policy.
00:08:45.211  EAL: Setting policy MPOL_PREFERRED for socket 0
00:08:45.469  EAL: Restoring previous memory policy: 0
00:08:45.469  EAL: Calling mem event callback 'spdk:(nil)'
00:08:45.469  EAL: request: mp_malloc_sync
00:08:45.469  EAL: No shared files mode enabled, IPC is disabled
00:08:45.469  EAL: Heap on socket 0 was expanded by 258MB
00:08:45.469  EAL: Calling mem event callback 'spdk:(nil)'
00:08:45.469  EAL: request: mp_malloc_sync
00:08:45.469  EAL: No shared files mode enabled, IPC is disabled
00:08:45.469  EAL: Heap on socket 0 was shrunk by 258MB
00:08:45.469  EAL: Trying to obtain current memory policy.
00:08:45.469  EAL: Setting policy MPOL_PREFERRED for socket 0
00:08:45.469  EAL: Restoring previous memory policy: 0
00:08:45.469  EAL: Calling mem event callback 'spdk:(nil)'
00:08:45.469  EAL: request: mp_malloc_sync
00:08:45.469  EAL: No shared files mode enabled, IPC is disabled
00:08:45.469  EAL: Heap on socket 0 was expanded by 514MB
00:08:45.727  EAL: Calling mem event callback 'spdk:(nil)'
00:08:45.727  EAL: request: mp_malloc_sync
00:08:45.727  EAL: No shared files mode enabled, IPC is disabled
00:08:45.728  EAL: Heap on socket 0 was shrunk by 514MB
00:08:45.728  EAL: Trying to obtain current memory policy.
00:08:45.728  EAL: Setting policy MPOL_PREFERRED for socket 0
00:08:45.986  EAL: Restoring previous memory policy: 0
00:08:45.986  EAL: Calling mem event callback 'spdk:(nil)'
00:08:45.986  EAL: request: mp_malloc_sync
00:08:45.986  EAL: No shared files mode enabled, IPC is disabled
00:08:45.986  EAL: Heap on socket 0 was expanded by 1026MB
00:08:46.245  EAL: Calling mem event callback 'spdk:(nil)'
00:08:46.245  EAL: request: mp_malloc_sync
00:08:46.245  EAL: No shared files mode enabled, IPC is disabled
00:08:46.245  EAL: Heap on socket 0 was shrunk by 1026MB
00:08:46.245  passed
00:08:46.245  
00:08:46.245  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:08:46.245                suites      1      1    n/a      0        0
00:08:46.245                 tests      2      2      2      0        0
00:08:46.245               asserts   6275   6275   6275      0      n/a
00:08:46.245  
00:08:46.245  Elapsed time =    1.662 seconds
00:08:46.245  EAL: Calling mem event callback 'spdk:(nil)'
00:08:46.245  EAL: request: mp_malloc_sync
00:08:46.245  EAL: No shared files mode enabled, IPC is disabled
00:08:46.245  EAL: Heap on socket 0 was shrunk by 2MB
00:08:46.245  EAL: No shared files mode enabled, IPC is disabled
00:08:46.245  EAL: No shared files mode enabled, IPC is disabled
00:08:46.245  EAL: No shared files mode enabled, IPC is disabled
00:08:46.512  
00:08:46.512  real	0m1.945s
00:08:46.512  user	0m0.929s
00:08:46.512  sys	0m0.875s
00:08:46.512  ************************************
00:08:46.512  END TEST env_vtophys
00:08:46.512  ************************************
00:08:46.512   16:52:39	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:08:46.512   16:52:39	-- common/autotest_common.sh@10 -- # set +x
00:08:46.512   16:52:39	-- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut
00:08:46.512   16:52:39	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:08:46.512   16:52:39	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:08:46.512   16:52:39	-- common/autotest_common.sh@10 -- # set +x
00:08:46.512  ************************************
00:08:46.512  START TEST env_pci
00:08:46.512  ************************************
00:08:46.512   16:52:39	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut
00:08:46.512  
00:08:46.512  
00:08:46.512       CUnit - A unit testing framework for C - Version 2.1-3
00:08:46.512       http://cunit.sourceforge.net/
00:08:46.512  
00:08:46.512  
00:08:46.512  Suite: pci
00:08:46.512    Test: pci_hook ...[2024-11-19 16:52:39.228948] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 114675 has claimed it
00:08:46.512  passed
00:08:46.512  
00:08:46.512  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:08:46.512                suites      1      1    n/a      0        0
00:08:46.512                 tests      1      1      1      0        0
00:08:46.512               asserts     25     25     25      0      n/a
00:08:46.512  
00:08:46.512  Elapsed time =    0.007 seconds
00:08:46.512  EAL: Cannot find device (10000:00:01.0)
00:08:46.512  EAL: Failed to attach device on primary process
00:08:46.512  
00:08:46.512  real	0m0.081s
00:08:46.512  user	0m0.035s
00:08:46.512  sys	0m0.046s
00:08:46.512   16:52:39	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:08:46.512   16:52:39	-- common/autotest_common.sh@10 -- # set +x
00:08:46.512  ************************************
00:08:46.512  END TEST env_pci
00:08:46.512  ************************************
00:08:46.512   16:52:39	-- env/env.sh@14 -- # argv='-c 0x1 '
00:08:46.512    16:52:39	-- env/env.sh@15 -- # uname
00:08:46.512   16:52:39	-- env/env.sh@15 -- # '[' Linux = Linux ']'
00:08:46.512   16:52:39	-- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000
00:08:46.512   16:52:39	-- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:08:46.512   16:52:39	-- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']'
00:08:46.512   16:52:39	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:08:46.512   16:52:39	-- common/autotest_common.sh@10 -- # set +x
00:08:46.512  ************************************
00:08:46.512  START TEST env_dpdk_post_init
00:08:46.512  ************************************
00:08:46.513   16:52:39	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:08:46.775  EAL: Detected CPU lcores: 10
00:08:46.775  EAL: Detected NUMA nodes: 1
00:08:46.775  EAL: Detected static linkage of DPDK
00:08:46.775  EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:08:46.775  EAL: Selected IOVA mode 'PA'
00:08:46.775  EAL: VFIO support initialized
00:08:46.775  TELEMETRY: No legacy callbacks, legacy socket not created
00:08:46.775  EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:06.0 (socket -1)
00:08:46.775  Starting DPDK initialization...
00:08:46.775  Starting SPDK post initialization...
00:08:46.775  SPDK NVMe probe
00:08:46.775  Attaching to 0000:00:06.0
00:08:46.775  Attached to 0000:00:06.0
00:08:46.775  Cleaning up...
00:08:47.034  
00:08:47.034  real	0m0.281s
00:08:47.034  user	0m0.075s
00:08:47.034  sys	0m0.108s
00:08:47.034   16:52:39	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:08:47.034   16:52:39	-- common/autotest_common.sh@10 -- # set +x
00:08:47.034  ************************************
00:08:47.034  END TEST env_dpdk_post_init
00:08:47.034  ************************************
00:08:47.034    16:52:39	-- env/env.sh@26 -- # uname
00:08:47.034   16:52:39	-- env/env.sh@26 -- # '[' Linux = Linux ']'
00:08:47.034   16:52:39	-- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks
00:08:47.034   16:52:39	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:08:47.034   16:52:39	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:08:47.034   16:52:39	-- common/autotest_common.sh@10 -- # set +x
00:08:47.034  ************************************
00:08:47.034  START TEST env_mem_callbacks
00:08:47.034  ************************************
00:08:47.034   16:52:39	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks
00:08:47.034  EAL: Detected CPU lcores: 10
00:08:47.034  EAL: Detected NUMA nodes: 1
00:08:47.034  EAL: Detected static linkage of DPDK
00:08:47.034  EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:08:47.034  EAL: Selected IOVA mode 'PA'
00:08:47.034  EAL: VFIO support initialized
00:08:47.034  TELEMETRY: No legacy callbacks, legacy socket not created
00:08:47.034  
00:08:47.034  
00:08:47.034       CUnit - A unit testing framework for C - Version 2.1-3
00:08:47.034       http://cunit.sourceforge.net/
00:08:47.034  
00:08:47.034  
00:08:47.034  Suite: memory
00:08:47.034    Test: test ...
00:08:47.034  register 0x200000200000 2097152
00:08:47.034  malloc 3145728
00:08:47.034  register 0x200000400000 4194304
00:08:47.034  buf 0x200000500000 len 3145728 PASSED
00:08:47.034  malloc 64
00:08:47.034  buf 0x2000004fff40 len 64 PASSED
00:08:47.034  malloc 4194304
00:08:47.034  register 0x200000800000 6291456
00:08:47.034  buf 0x200000a00000 len 4194304 PASSED
00:08:47.034  free 0x200000500000 3145728
00:08:47.034  free 0x2000004fff40 64
00:08:47.034  unregister 0x200000400000 4194304 PASSED
00:08:47.034  free 0x200000a00000 4194304
00:08:47.034  unregister 0x200000800000 6291456 PASSED
00:08:47.034  malloc 8388608
00:08:47.034  register 0x200000400000 10485760
00:08:47.034  buf 0x200000600000 len 8388608 PASSED
00:08:47.034  free 0x200000600000 8388608
00:08:47.034  unregister 0x200000400000 10485760 PASSED
00:08:47.034  passed
00:08:47.034  
00:08:47.034  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:08:47.034                suites      1      1    n/a      0        0
00:08:47.034                 tests      1      1      1      0        0
00:08:47.034               asserts     15     15     15      0      n/a
00:08:47.034  
00:08:47.034  Elapsed time =    0.009 seconds
00:08:47.293  
00:08:47.293  real	0m0.218s
00:08:47.293  user	0m0.044s
00:08:47.293  sys	0m0.075s
00:08:47.293   16:52:39	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:08:47.293   16:52:39	-- common/autotest_common.sh@10 -- # set +x
00:08:47.293  ************************************
00:08:47.293  END TEST env_mem_callbacks
00:08:47.293  ************************************
00:08:47.293  
00:08:47.293  real	0m3.466s
00:08:47.293  user	0m1.716s
00:08:47.293  sys	0m1.433s
00:08:47.293   16:52:39	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:08:47.293   16:52:39	-- common/autotest_common.sh@10 -- # set +x
00:08:47.293  ************************************
00:08:47.293  END TEST env
00:08:47.293  ************************************
00:08:47.293   16:52:40	-- spdk/autotest.sh@163 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh
00:08:47.293   16:52:40	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:08:47.293   16:52:40	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:08:47.293   16:52:40	-- common/autotest_common.sh@10 -- # set +x
00:08:47.293  ************************************
00:08:47.293  START TEST rpc
00:08:47.293  ************************************
00:08:47.293   16:52:40	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh
00:08:47.293  * Looking for test storage...
00:08:47.551  * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc
00:08:47.551    16:52:40	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:08:47.551     16:52:40	-- common/autotest_common.sh@1690 -- # lcov --version
00:08:47.551     16:52:40	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:08:47.551    16:52:40	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:08:47.551    16:52:40	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:08:47.551    16:52:40	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:08:47.551    16:52:40	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:08:47.551    16:52:40	-- scripts/common.sh@335 -- # IFS=.-:
00:08:47.551    16:52:40	-- scripts/common.sh@335 -- # read -ra ver1
00:08:47.551    16:52:40	-- scripts/common.sh@336 -- # IFS=.-:
00:08:47.551    16:52:40	-- scripts/common.sh@336 -- # read -ra ver2
00:08:47.551    16:52:40	-- scripts/common.sh@337 -- # local 'op=<'
00:08:47.551    16:52:40	-- scripts/common.sh@339 -- # ver1_l=2
00:08:47.551    16:52:40	-- scripts/common.sh@340 -- # ver2_l=1
00:08:47.551    16:52:40	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:08:47.551    16:52:40	-- scripts/common.sh@343 -- # case "$op" in
00:08:47.551    16:52:40	-- scripts/common.sh@344 -- # : 1
00:08:47.551    16:52:40	-- scripts/common.sh@363 -- # (( v = 0 ))
00:08:47.551    16:52:40	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:08:47.551     16:52:40	-- scripts/common.sh@364 -- # decimal 1
00:08:47.551     16:52:40	-- scripts/common.sh@352 -- # local d=1
00:08:47.551     16:52:40	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:08:47.551     16:52:40	-- scripts/common.sh@354 -- # echo 1
00:08:47.551    16:52:40	-- scripts/common.sh@364 -- # ver1[v]=1
00:08:47.551     16:52:40	-- scripts/common.sh@365 -- # decimal 2
00:08:47.551     16:52:40	-- scripts/common.sh@352 -- # local d=2
00:08:47.551     16:52:40	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:08:47.551     16:52:40	-- scripts/common.sh@354 -- # echo 2
00:08:47.552    16:52:40	-- scripts/common.sh@365 -- # ver2[v]=2
00:08:47.552    16:52:40	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:08:47.552    16:52:40	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:08:47.552    16:52:40	-- scripts/common.sh@367 -- # return 0
00:08:47.552    16:52:40	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:08:47.552    16:52:40	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:08:47.552  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:47.552  		--rc genhtml_branch_coverage=1
00:08:47.552  		--rc genhtml_function_coverage=1
00:08:47.552  		--rc genhtml_legend=1
00:08:47.552  		--rc geninfo_all_blocks=1
00:08:47.552  		--rc geninfo_unexecuted_blocks=1
00:08:47.552  		
00:08:47.552  		'
00:08:47.552    16:52:40	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:08:47.552  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:47.552  		--rc genhtml_branch_coverage=1
00:08:47.552  		--rc genhtml_function_coverage=1
00:08:47.552  		--rc genhtml_legend=1
00:08:47.552  		--rc geninfo_all_blocks=1
00:08:47.552  		--rc geninfo_unexecuted_blocks=1
00:08:47.552  		
00:08:47.552  		'
00:08:47.552    16:52:40	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:08:47.552  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:47.552  		--rc genhtml_branch_coverage=1
00:08:47.552  		--rc genhtml_function_coverage=1
00:08:47.552  		--rc genhtml_legend=1
00:08:47.552  		--rc geninfo_all_blocks=1
00:08:47.552  		--rc geninfo_unexecuted_blocks=1
00:08:47.552  		
00:08:47.552  		'
00:08:47.552    16:52:40	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:08:47.552  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:47.552  		--rc genhtml_branch_coverage=1
00:08:47.552  		--rc genhtml_function_coverage=1
00:08:47.552  		--rc genhtml_legend=1
00:08:47.552  		--rc geninfo_all_blocks=1
00:08:47.552  		--rc geninfo_unexecuted_blocks=1
00:08:47.552  		
00:08:47.552  		'
00:08:47.552   16:52:40	-- rpc/rpc.sh@65 -- # spdk_pid=114813
00:08:47.552   16:52:40	-- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:08:47.552   16:52:40	-- rpc/rpc.sh@67 -- # waitforlisten 114813
00:08:47.552   16:52:40	-- common/autotest_common.sh@829 -- # '[' -z 114813 ']'
00:08:47.552   16:52:40	-- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev
00:08:47.552   16:52:40	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:08:47.552   16:52:40	-- common/autotest_common.sh@834 -- # local max_retries=100
00:08:47.552   16:52:40	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:08:47.552  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:08:47.552   16:52:40	-- common/autotest_common.sh@838 -- # xtrace_disable
00:08:47.552   16:52:40	-- common/autotest_common.sh@10 -- # set +x
00:08:47.552  [2024-11-19 16:52:40.377347] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:08:47.552  [2024-11-19 16:52:40.377606] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114813 ]
00:08:47.810  [2024-11-19 16:52:40.537991] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:47.810  [2024-11-19 16:52:40.596499] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:08:47.810  [2024-11-19 16:52:40.597054] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified.
00:08:47.810  [2024-11-19 16:52:40.597256] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 114813' to capture a snapshot of events at runtime.
00:08:47.810  [2024-11-19 16:52:40.597477] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid114813 for offline analysis/debug.
00:08:47.810  [2024-11-19 16:52:40.597760] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:08:48.744   16:52:41	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:08:48.744   16:52:41	-- common/autotest_common.sh@862 -- # return 0
00:08:48.744   16:52:41	-- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc
00:08:48.744   16:52:41	-- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc
00:08:48.744   16:52:41	-- rpc/rpc.sh@72 -- # rpc=rpc_cmd
00:08:48.744   16:52:41	-- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity
00:08:48.744   16:52:41	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:08:48.744   16:52:41	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:08:48.744   16:52:41	-- common/autotest_common.sh@10 -- # set +x
00:08:48.744  ************************************
00:08:48.744  START TEST rpc_integrity
00:08:48.744  ************************************
00:08:48.744   16:52:41	-- common/autotest_common.sh@1114 -- # rpc_integrity
00:08:48.744    16:52:41	-- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs
00:08:48.744    16:52:41	-- common/autotest_common.sh@561 -- # xtrace_disable
00:08:48.744    16:52:41	-- common/autotest_common.sh@10 -- # set +x
00:08:48.744    16:52:41	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:48.744   16:52:41	-- rpc/rpc.sh@12 -- # bdevs='[]'
00:08:48.744    16:52:41	-- rpc/rpc.sh@13 -- # jq length
00:08:48.744   16:52:41	-- rpc/rpc.sh@13 -- # '[' 0 == 0 ']'
00:08:48.744    16:52:41	-- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512
00:08:48.744    16:52:41	-- common/autotest_common.sh@561 -- # xtrace_disable
00:08:48.744    16:52:41	-- common/autotest_common.sh@10 -- # set +x
00:08:48.744    16:52:41	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:48.744   16:52:41	-- rpc/rpc.sh@15 -- # malloc=Malloc0
00:08:48.744    16:52:41	-- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs
00:08:48.744    16:52:41	-- common/autotest_common.sh@561 -- # xtrace_disable
00:08:48.744    16:52:41	-- common/autotest_common.sh@10 -- # set +x
00:08:48.744    16:52:41	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:48.744   16:52:41	-- rpc/rpc.sh@16 -- # bdevs='[
00:08:48.744  {
00:08:48.744  "name": "Malloc0",
00:08:48.744  "aliases": [
00:08:48.744  "94362c1d-8837-4831-98ab-9493d298798b"
00:08:48.744  ],
00:08:48.744  "product_name": "Malloc disk",
00:08:48.744  "block_size": 512,
00:08:48.744  "num_blocks": 16384,
00:08:48.744  "uuid": "94362c1d-8837-4831-98ab-9493d298798b",
00:08:48.744  "assigned_rate_limits": {
00:08:48.744  "rw_ios_per_sec": 0,
00:08:48.744  "rw_mbytes_per_sec": 0,
00:08:48.744  "r_mbytes_per_sec": 0,
00:08:48.744  "w_mbytes_per_sec": 0
00:08:48.744  },
00:08:48.744  "claimed": false,
00:08:48.744  "zoned": false,
00:08:48.744  "supported_io_types": {
00:08:48.744  "read": true,
00:08:48.744  "write": true,
00:08:48.744  "unmap": true,
00:08:48.744  "write_zeroes": true,
00:08:48.744  "flush": true,
00:08:48.744  "reset": true,
00:08:48.744  "compare": false,
00:08:48.744  "compare_and_write": false,
00:08:48.744  "abort": true,
00:08:48.744  "nvme_admin": false,
00:08:48.744  "nvme_io": false
00:08:48.744  },
00:08:48.744  "memory_domains": [
00:08:48.744  {
00:08:48.744  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:48.744  "dma_device_type": 2
00:08:48.744  }
00:08:48.744  ],
00:08:48.744  "driver_specific": {}
00:08:48.744  }
00:08:48.744  ]'
00:08:48.744    16:52:41	-- rpc/rpc.sh@17 -- # jq length
00:08:48.744   16:52:41	-- rpc/rpc.sh@17 -- # '[' 1 == 1 ']'
00:08:48.744   16:52:41	-- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0
00:08:48.744   16:52:41	-- common/autotest_common.sh@561 -- # xtrace_disable
00:08:48.744   16:52:41	-- common/autotest_common.sh@10 -- # set +x
00:08:48.744  [2024-11-19 16:52:41.573702] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0
00:08:48.744  [2024-11-19 16:52:41.574306] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:08:48.744  [2024-11-19 16:52:41.574484] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006080
00:08:48.744  [2024-11-19 16:52:41.574641] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:08:48.744  [2024-11-19 16:52:41.577689] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:08:48.744  [2024-11-19 16:52:41.577913] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0
00:08:48.744  Passthru0
00:08:48.744   16:52:41	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:48.744    16:52:41	-- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs
00:08:48.744    16:52:41	-- common/autotest_common.sh@561 -- # xtrace_disable
00:08:48.744    16:52:41	-- common/autotest_common.sh@10 -- # set +x
00:08:48.744    16:52:41	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:48.744   16:52:41	-- rpc/rpc.sh@20 -- # bdevs='[
00:08:48.744  {
00:08:48.744  "name": "Malloc0",
00:08:48.744  "aliases": [
00:08:48.744  "94362c1d-8837-4831-98ab-9493d298798b"
00:08:48.744  ],
00:08:48.744  "product_name": "Malloc disk",
00:08:48.744  "block_size": 512,
00:08:48.744  "num_blocks": 16384,
00:08:48.744  "uuid": "94362c1d-8837-4831-98ab-9493d298798b",
00:08:48.744  "assigned_rate_limits": {
00:08:48.744  "rw_ios_per_sec": 0,
00:08:48.744  "rw_mbytes_per_sec": 0,
00:08:48.744  "r_mbytes_per_sec": 0,
00:08:48.744  "w_mbytes_per_sec": 0
00:08:48.744  },
00:08:48.744  "claimed": true,
00:08:48.744  "claim_type": "exclusive_write",
00:08:48.744  "zoned": false,
00:08:48.744  "supported_io_types": {
00:08:48.744  "read": true,
00:08:48.744  "write": true,
00:08:48.744  "unmap": true,
00:08:48.744  "write_zeroes": true,
00:08:48.744  "flush": true,
00:08:48.744  "reset": true,
00:08:48.744  "compare": false,
00:08:48.744  "compare_and_write": false,
00:08:48.744  "abort": true,
00:08:48.744  "nvme_admin": false,
00:08:48.744  "nvme_io": false
00:08:48.744  },
00:08:48.744  "memory_domains": [
00:08:48.744  {
00:08:48.744  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:48.744  "dma_device_type": 2
00:08:48.744  }
00:08:48.744  ],
00:08:48.744  "driver_specific": {}
00:08:48.744  },
00:08:48.744  {
00:08:48.744  "name": "Passthru0",
00:08:48.744  "aliases": [
00:08:48.744  "d23e4d3b-1205-5ef2-a749-a13982d25c3a"
00:08:48.744  ],
00:08:48.744  "product_name": "passthru",
00:08:48.744  "block_size": 512,
00:08:48.744  "num_blocks": 16384,
00:08:48.744  "uuid": "d23e4d3b-1205-5ef2-a749-a13982d25c3a",
00:08:48.744  "assigned_rate_limits": {
00:08:48.744  "rw_ios_per_sec": 0,
00:08:48.744  "rw_mbytes_per_sec": 0,
00:08:48.744  "r_mbytes_per_sec": 0,
00:08:48.744  "w_mbytes_per_sec": 0
00:08:48.744  },
00:08:48.744  "claimed": false,
00:08:48.744  "zoned": false,
00:08:48.744  "supported_io_types": {
00:08:48.744  "read": true,
00:08:48.744  "write": true,
00:08:48.744  "unmap": true,
00:08:48.744  "write_zeroes": true,
00:08:48.744  "flush": true,
00:08:48.744  "reset": true,
00:08:48.744  "compare": false,
00:08:48.744  "compare_and_write": false,
00:08:48.744  "abort": true,
00:08:48.744  "nvme_admin": false,
00:08:48.744  "nvme_io": false
00:08:48.744  },
00:08:48.744  "memory_domains": [
00:08:48.744  {
00:08:48.744  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:48.744  "dma_device_type": 2
00:08:48.745  }
00:08:48.745  ],
00:08:48.745  "driver_specific": {
00:08:48.745  "passthru": {
00:08:48.745  "name": "Passthru0",
00:08:48.745  "base_bdev_name": "Malloc0"
00:08:48.745  }
00:08:48.745  }
00:08:48.745  }
00:08:48.745  ]'
00:08:48.745    16:52:41	-- rpc/rpc.sh@21 -- # jq length
00:08:49.003   16:52:41	-- rpc/rpc.sh@21 -- # '[' 2 == 2 ']'
00:08:49.003   16:52:41	-- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0
00:08:49.003   16:52:41	-- common/autotest_common.sh@561 -- # xtrace_disable
00:08:49.003   16:52:41	-- common/autotest_common.sh@10 -- # set +x
00:08:49.003   16:52:41	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:49.003   16:52:41	-- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0
00:08:49.003   16:52:41	-- common/autotest_common.sh@561 -- # xtrace_disable
00:08:49.003   16:52:41	-- common/autotest_common.sh@10 -- # set +x
00:08:49.003   16:52:41	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:49.003    16:52:41	-- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs
00:08:49.003    16:52:41	-- common/autotest_common.sh@561 -- # xtrace_disable
00:08:49.003    16:52:41	-- common/autotest_common.sh@10 -- # set +x
00:08:49.003    16:52:41	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:49.003   16:52:41	-- rpc/rpc.sh@25 -- # bdevs='[]'
00:08:49.003    16:52:41	-- rpc/rpc.sh@26 -- # jq length
00:08:49.003   16:52:41	-- rpc/rpc.sh@26 -- # '[' 0 == 0 ']'
00:08:49.003  
00:08:49.003  real	0m0.295s
00:08:49.003  user	0m0.180s
00:08:49.003  sys	0m0.049s
00:08:49.003   16:52:41	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:08:49.003   16:52:41	-- common/autotest_common.sh@10 -- # set +x
00:08:49.003  ************************************
00:08:49.003  END TEST rpc_integrity
00:08:49.003  ************************************
00:08:49.003   16:52:41	-- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins
00:08:49.003   16:52:41	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:08:49.003   16:52:41	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:08:49.003   16:52:41	-- common/autotest_common.sh@10 -- # set +x
00:08:49.003  ************************************
00:08:49.003  START TEST rpc_plugins
00:08:49.003  ************************************
00:08:49.003   16:52:41	-- common/autotest_common.sh@1114 -- # rpc_plugins
00:08:49.003    16:52:41	-- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc
00:08:49.003    16:52:41	-- common/autotest_common.sh@561 -- # xtrace_disable
00:08:49.003    16:52:41	-- common/autotest_common.sh@10 -- # set +x
00:08:49.003    16:52:41	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:49.003   16:52:41	-- rpc/rpc.sh@30 -- # malloc=Malloc1
00:08:49.003    16:52:41	-- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs
00:08:49.003    16:52:41	-- common/autotest_common.sh@561 -- # xtrace_disable
00:08:49.003    16:52:41	-- common/autotest_common.sh@10 -- # set +x
00:08:49.003    16:52:41	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:49.003   16:52:41	-- rpc/rpc.sh@31 -- # bdevs='[
00:08:49.003  {
00:08:49.003  "name": "Malloc1",
00:08:49.003  "aliases": [
00:08:49.003  "c373d033-b6e5-432a-8ce9-49211c7a1d60"
00:08:49.003  ],
00:08:49.003  "product_name": "Malloc disk",
00:08:49.003  "block_size": 4096,
00:08:49.003  "num_blocks": 256,
00:08:49.003  "uuid": "c373d033-b6e5-432a-8ce9-49211c7a1d60",
00:08:49.003  "assigned_rate_limits": {
00:08:49.003  "rw_ios_per_sec": 0,
00:08:49.003  "rw_mbytes_per_sec": 0,
00:08:49.003  "r_mbytes_per_sec": 0,
00:08:49.003  "w_mbytes_per_sec": 0
00:08:49.003  },
00:08:49.003  "claimed": false,
00:08:49.003  "zoned": false,
00:08:49.003  "supported_io_types": {
00:08:49.003  "read": true,
00:08:49.003  "write": true,
00:08:49.003  "unmap": true,
00:08:49.003  "write_zeroes": true,
00:08:49.003  "flush": true,
00:08:49.003  "reset": true,
00:08:49.003  "compare": false,
00:08:49.003  "compare_and_write": false,
00:08:49.003  "abort": true,
00:08:49.003  "nvme_admin": false,
00:08:49.003  "nvme_io": false
00:08:49.003  },
00:08:49.003  "memory_domains": [
00:08:49.003  {
00:08:49.003  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:49.003  "dma_device_type": 2
00:08:49.003  }
00:08:49.003  ],
00:08:49.003  "driver_specific": {}
00:08:49.003  }
00:08:49.003  ]'
00:08:49.003    16:52:41	-- rpc/rpc.sh@32 -- # jq length
00:08:49.262   16:52:41	-- rpc/rpc.sh@32 -- # '[' 1 == 1 ']'
00:08:49.262   16:52:41	-- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1
00:08:49.262   16:52:41	-- common/autotest_common.sh@561 -- # xtrace_disable
00:08:49.262   16:52:41	-- common/autotest_common.sh@10 -- # set +x
00:08:49.262   16:52:41	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:49.262    16:52:41	-- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs
00:08:49.262    16:52:41	-- common/autotest_common.sh@561 -- # xtrace_disable
00:08:49.262    16:52:41	-- common/autotest_common.sh@10 -- # set +x
00:08:49.262    16:52:41	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:49.262   16:52:41	-- rpc/rpc.sh@35 -- # bdevs='[]'
00:08:49.262    16:52:41	-- rpc/rpc.sh@36 -- # jq length
00:08:49.262   16:52:41	-- rpc/rpc.sh@36 -- # '[' 0 == 0 ']'
00:08:49.262  
00:08:49.262  real	0m0.142s
00:08:49.262  user	0m0.097s
00:08:49.262  sys	0m0.010s
00:08:49.262   16:52:41	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:08:49.262   16:52:41	-- common/autotest_common.sh@10 -- # set +x
00:08:49.262  ************************************
00:08:49.262  END TEST rpc_plugins
00:08:49.262  ************************************
00:08:49.262   16:52:41	-- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test
00:08:49.262   16:52:41	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:08:49.262   16:52:41	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:08:49.262   16:52:41	-- common/autotest_common.sh@10 -- # set +x
00:08:49.262  ************************************
00:08:49.262  START TEST rpc_trace_cmd_test
00:08:49.262  ************************************
00:08:49.262   16:52:42	-- common/autotest_common.sh@1114 -- # rpc_trace_cmd_test
00:08:49.262   16:52:42	-- rpc/rpc.sh@40 -- # local info
00:08:49.262    16:52:42	-- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info
00:08:49.262    16:52:42	-- common/autotest_common.sh@561 -- # xtrace_disable
00:08:49.262    16:52:42	-- common/autotest_common.sh@10 -- # set +x
00:08:49.262    16:52:42	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:49.262   16:52:42	-- rpc/rpc.sh@42 -- # info='{
00:08:49.262  "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid114813",
00:08:49.262  "tpoint_group_mask": "0x8",
00:08:49.262  "iscsi_conn": {
00:08:49.262  "mask": "0x2",
00:08:49.262  "tpoint_mask": "0x0"
00:08:49.262  },
00:08:49.262  "scsi": {
00:08:49.262  "mask": "0x4",
00:08:49.262  "tpoint_mask": "0x0"
00:08:49.262  },
00:08:49.262  "bdev": {
00:08:49.262  "mask": "0x8",
00:08:49.262  "tpoint_mask": "0xffffffffffffffff"
00:08:49.262  },
00:08:49.262  "nvmf_rdma": {
00:08:49.262  "mask": "0x10",
00:08:49.262  "tpoint_mask": "0x0"
00:08:49.262  },
00:08:49.262  "nvmf_tcp": {
00:08:49.262  "mask": "0x20",
00:08:49.262  "tpoint_mask": "0x0"
00:08:49.262  },
00:08:49.262  "ftl": {
00:08:49.262  "mask": "0x40",
00:08:49.262  "tpoint_mask": "0x0"
00:08:49.262  },
00:08:49.262  "blobfs": {
00:08:49.262  "mask": "0x80",
00:08:49.262  "tpoint_mask": "0x0"
00:08:49.262  },
00:08:49.262  "dsa": {
00:08:49.262  "mask": "0x200",
00:08:49.262  "tpoint_mask": "0x0"
00:08:49.262  },
00:08:49.262  "thread": {
00:08:49.262  "mask": "0x400",
00:08:49.262  "tpoint_mask": "0x0"
00:08:49.262  },
00:08:49.262  "nvme_pcie": {
00:08:49.262  "mask": "0x800",
00:08:49.262  "tpoint_mask": "0x0"
00:08:49.262  },
00:08:49.262  "iaa": {
00:08:49.262  "mask": "0x1000",
00:08:49.262  "tpoint_mask": "0x0"
00:08:49.262  },
00:08:49.262  "nvme_tcp": {
00:08:49.262  "mask": "0x2000",
00:08:49.262  "tpoint_mask": "0x0"
00:08:49.262  },
00:08:49.262  "bdev_nvme": {
00:08:49.262  "mask": "0x4000",
00:08:49.262  "tpoint_mask": "0x0"
00:08:49.262  }
00:08:49.262  }'
00:08:49.262    16:52:42	-- rpc/rpc.sh@43 -- # jq length
00:08:49.262   16:52:42	-- rpc/rpc.sh@43 -- # '[' 15 -gt 2 ']'
00:08:49.262    16:52:42	-- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")'
00:08:49.520   16:52:42	-- rpc/rpc.sh@44 -- # '[' true = true ']'
00:08:49.520    16:52:42	-- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")'
00:08:49.520   16:52:42	-- rpc/rpc.sh@45 -- # '[' true = true ']'
00:08:49.520    16:52:42	-- rpc/rpc.sh@46 -- # jq 'has("bdev")'
00:08:49.520   16:52:42	-- rpc/rpc.sh@46 -- # '[' true = true ']'
00:08:49.520    16:52:42	-- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask
00:08:49.520   16:52:42	-- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']'
00:08:49.520  
00:08:49.520  real	0m0.259s
00:08:49.520  user	0m0.222s
00:08:49.520  sys	0m0.030s
00:08:49.520   16:52:42	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:08:49.520   16:52:42	-- common/autotest_common.sh@10 -- # set +x
00:08:49.520  ************************************
00:08:49.520  END TEST rpc_trace_cmd_test
00:08:49.520  ************************************
00:08:49.520   16:52:42	-- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]]
00:08:49.520   16:52:42	-- rpc/rpc.sh@80 -- # rpc=rpc_cmd
00:08:49.520   16:52:42	-- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity
00:08:49.520   16:52:42	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:08:49.520   16:52:42	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:08:49.520   16:52:42	-- common/autotest_common.sh@10 -- # set +x
00:08:49.520  ************************************
00:08:49.520  START TEST rpc_daemon_integrity
00:08:49.520  ************************************
00:08:49.520   16:52:42	-- common/autotest_common.sh@1114 -- # rpc_integrity
00:08:49.520    16:52:42	-- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs
00:08:49.520    16:52:42	-- common/autotest_common.sh@561 -- # xtrace_disable
00:08:49.520    16:52:42	-- common/autotest_common.sh@10 -- # set +x
00:08:49.520    16:52:42	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:49.520   16:52:42	-- rpc/rpc.sh@12 -- # bdevs='[]'
00:08:49.520    16:52:42	-- rpc/rpc.sh@13 -- # jq length
00:08:49.778   16:52:42	-- rpc/rpc.sh@13 -- # '[' 0 == 0 ']'
00:08:49.778    16:52:42	-- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512
00:08:49.778    16:52:42	-- common/autotest_common.sh@561 -- # xtrace_disable
00:08:49.778    16:52:42	-- common/autotest_common.sh@10 -- # set +x
00:08:49.778    16:52:42	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:49.778   16:52:42	-- rpc/rpc.sh@15 -- # malloc=Malloc2
00:08:49.778    16:52:42	-- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs
00:08:49.778    16:52:42	-- common/autotest_common.sh@561 -- # xtrace_disable
00:08:49.778    16:52:42	-- common/autotest_common.sh@10 -- # set +x
00:08:49.778    16:52:42	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:49.778   16:52:42	-- rpc/rpc.sh@16 -- # bdevs='[
00:08:49.778  {
00:08:49.778  "name": "Malloc2",
00:08:49.778  "aliases": [
00:08:49.778  "a8b41762-a136-40ff-87b4-65795e673a3f"
00:08:49.778  ],
00:08:49.778  "product_name": "Malloc disk",
00:08:49.778  "block_size": 512,
00:08:49.778  "num_blocks": 16384,
00:08:49.778  "uuid": "a8b41762-a136-40ff-87b4-65795e673a3f",
00:08:49.778  "assigned_rate_limits": {
00:08:49.778  "rw_ios_per_sec": 0,
00:08:49.778  "rw_mbytes_per_sec": 0,
00:08:49.778  "r_mbytes_per_sec": 0,
00:08:49.778  "w_mbytes_per_sec": 0
00:08:49.778  },
00:08:49.778  "claimed": false,
00:08:49.778  "zoned": false,
00:08:49.778  "supported_io_types": {
00:08:49.778  "read": true,
00:08:49.778  "write": true,
00:08:49.778  "unmap": true,
00:08:49.778  "write_zeroes": true,
00:08:49.778  "flush": true,
00:08:49.778  "reset": true,
00:08:49.778  "compare": false,
00:08:49.778  "compare_and_write": false,
00:08:49.778  "abort": true,
00:08:49.778  "nvme_admin": false,
00:08:49.778  "nvme_io": false
00:08:49.778  },
00:08:49.778  "memory_domains": [
00:08:49.778  {
00:08:49.778  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:49.778  "dma_device_type": 2
00:08:49.778  }
00:08:49.778  ],
00:08:49.778  "driver_specific": {}
00:08:49.778  }
00:08:49.778  ]'
00:08:49.778    16:52:42	-- rpc/rpc.sh@17 -- # jq length
00:08:49.778   16:52:42	-- rpc/rpc.sh@17 -- # '[' 1 == 1 ']'
00:08:49.778   16:52:42	-- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0
00:08:49.778   16:52:42	-- common/autotest_common.sh@561 -- # xtrace_disable
00:08:49.778   16:52:42	-- common/autotest_common.sh@10 -- # set +x
00:08:49.778  [2024-11-19 16:52:42.500956] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2
00:08:49.778  [2024-11-19 16:52:42.501476] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:08:49.778  [2024-11-19 16:52:42.501788] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
00:08:49.778  [2024-11-19 16:52:42.502047] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:08:49.778  [2024-11-19 16:52:42.506931] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:08:49.778  [2024-11-19 16:52:42.507289] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0
00:08:49.778  Passthru0
00:08:49.778   16:52:42	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:49.778    16:52:42	-- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs
00:08:49.778    16:52:42	-- common/autotest_common.sh@561 -- # xtrace_disable
00:08:49.778    16:52:42	-- common/autotest_common.sh@10 -- # set +x
00:08:49.778    16:52:42	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:49.778   16:52:42	-- rpc/rpc.sh@20 -- # bdevs='[
00:08:49.778  {
00:08:49.778  "name": "Malloc2",
00:08:49.778  "aliases": [
00:08:49.778  "a8b41762-a136-40ff-87b4-65795e673a3f"
00:08:49.778  ],
00:08:49.778  "product_name": "Malloc disk",
00:08:49.778  "block_size": 512,
00:08:49.779  "num_blocks": 16384,
00:08:49.779  "uuid": "a8b41762-a136-40ff-87b4-65795e673a3f",
00:08:49.779  "assigned_rate_limits": {
00:08:49.779  "rw_ios_per_sec": 0,
00:08:49.779  "rw_mbytes_per_sec": 0,
00:08:49.779  "r_mbytes_per_sec": 0,
00:08:49.779  "w_mbytes_per_sec": 0
00:08:49.779  },
00:08:49.779  "claimed": true,
00:08:49.779  "claim_type": "exclusive_write",
00:08:49.779  "zoned": false,
00:08:49.779  "supported_io_types": {
00:08:49.779  "read": true,
00:08:49.779  "write": true,
00:08:49.779  "unmap": true,
00:08:49.779  "write_zeroes": true,
00:08:49.779  "flush": true,
00:08:49.779  "reset": true,
00:08:49.779  "compare": false,
00:08:49.779  "compare_and_write": false,
00:08:49.779  "abort": true,
00:08:49.779  "nvme_admin": false,
00:08:49.779  "nvme_io": false
00:08:49.779  },
00:08:49.779  "memory_domains": [
00:08:49.779  {
00:08:49.779  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:49.779  "dma_device_type": 2
00:08:49.779  }
00:08:49.779  ],
00:08:49.779  "driver_specific": {}
00:08:49.779  },
00:08:49.779  {
00:08:49.779  "name": "Passthru0",
00:08:49.779  "aliases": [
00:08:49.779  "093a4168-bf9c-5f2e-8147-a9f7db562cb1"
00:08:49.779  ],
00:08:49.779  "product_name": "passthru",
00:08:49.779  "block_size": 512,
00:08:49.779  "num_blocks": 16384,
00:08:49.779  "uuid": "093a4168-bf9c-5f2e-8147-a9f7db562cb1",
00:08:49.779  "assigned_rate_limits": {
00:08:49.779  "rw_ios_per_sec": 0,
00:08:49.779  "rw_mbytes_per_sec": 0,
00:08:49.779  "r_mbytes_per_sec": 0,
00:08:49.779  "w_mbytes_per_sec": 0
00:08:49.779  },
00:08:49.779  "claimed": false,
00:08:49.779  "zoned": false,
00:08:49.779  "supported_io_types": {
00:08:49.779  "read": true,
00:08:49.779  "write": true,
00:08:49.779  "unmap": true,
00:08:49.779  "write_zeroes": true,
00:08:49.779  "flush": true,
00:08:49.779  "reset": true,
00:08:49.779  "compare": false,
00:08:49.779  "compare_and_write": false,
00:08:49.779  "abort": true,
00:08:49.779  "nvme_admin": false,
00:08:49.779  "nvme_io": false
00:08:49.779  },
00:08:49.779  "memory_domains": [
00:08:49.779  {
00:08:49.779  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:49.779  "dma_device_type": 2
00:08:49.779  }
00:08:49.779  ],
00:08:49.779  "driver_specific": {
00:08:49.779  "passthru": {
00:08:49.779  "name": "Passthru0",
00:08:49.779  "base_bdev_name": "Malloc2"
00:08:49.779  }
00:08:49.779  }
00:08:49.779  }
00:08:49.779  ]'
00:08:49.779    16:52:42	-- rpc/rpc.sh@21 -- # jq length
00:08:49.779   16:52:42	-- rpc/rpc.sh@21 -- # '[' 2 == 2 ']'
00:08:49.779   16:52:42	-- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0
00:08:49.779   16:52:42	-- common/autotest_common.sh@561 -- # xtrace_disable
00:08:49.779   16:52:42	-- common/autotest_common.sh@10 -- # set +x
00:08:49.779   16:52:42	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:49.779   16:52:42	-- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2
00:08:49.779   16:52:42	-- common/autotest_common.sh@561 -- # xtrace_disable
00:08:49.779   16:52:42	-- common/autotest_common.sh@10 -- # set +x
00:08:49.779   16:52:42	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:49.779    16:52:42	-- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs
00:08:49.779    16:52:42	-- common/autotest_common.sh@561 -- # xtrace_disable
00:08:49.779    16:52:42	-- common/autotest_common.sh@10 -- # set +x
00:08:49.779    16:52:42	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:49.779   16:52:42	-- rpc/rpc.sh@25 -- # bdevs='[]'
00:08:49.779    16:52:42	-- rpc/rpc.sh@26 -- # jq length
00:08:50.037   16:52:42	-- rpc/rpc.sh@26 -- # '[' 0 == 0 ']'
00:08:50.037  
00:08:50.037  real	0m0.319s
00:08:50.037  user	0m0.203s
00:08:50.037  sys	0m0.048s
00:08:50.037   16:52:42	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:08:50.037   16:52:42	-- common/autotest_common.sh@10 -- # set +x
00:08:50.037  ************************************
00:08:50.037  END TEST rpc_daemon_integrity
00:08:50.037  ************************************
00:08:50.037   16:52:42	-- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT
00:08:50.037   16:52:42	-- rpc/rpc.sh@84 -- # killprocess 114813
00:08:50.037   16:52:42	-- common/autotest_common.sh@936 -- # '[' -z 114813 ']'
00:08:50.037   16:52:42	-- common/autotest_common.sh@940 -- # kill -0 114813
00:08:50.037    16:52:42	-- common/autotest_common.sh@941 -- # uname
00:08:50.037   16:52:42	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:08:50.037    16:52:42	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 114813
00:08:50.037   16:52:42	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:08:50.037   16:52:42	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:08:50.037   16:52:42	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 114813'
00:08:50.037  killing process with pid 114813
00:08:50.037   16:52:42	-- common/autotest_common.sh@955 -- # kill 114813
00:08:50.037   16:52:42	-- common/autotest_common.sh@960 -- # wait 114813
00:08:50.604  
00:08:50.604  real	0m3.128s
00:08:50.604  user	0m3.976s
00:08:50.604  sys	0m0.854s
00:08:50.604   16:52:43	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:08:50.604   16:52:43	-- common/autotest_common.sh@10 -- # set +x
00:08:50.604  ************************************
00:08:50.604  END TEST rpc
00:08:50.604  ************************************
00:08:50.604   16:52:43	-- spdk/autotest.sh@164 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh
00:08:50.604   16:52:43	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:08:50.604   16:52:43	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:08:50.604   16:52:43	-- common/autotest_common.sh@10 -- # set +x
00:08:50.604  ************************************
00:08:50.604  START TEST rpc_client
00:08:50.604  ************************************
00:08:50.604   16:52:43	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh
00:08:50.604  * Looking for test storage...
00:08:50.604  * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client
00:08:50.604    16:52:43	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:08:50.604     16:52:43	-- common/autotest_common.sh@1690 -- # lcov --version
00:08:50.604     16:52:43	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:08:50.604    16:52:43	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:08:50.604    16:52:43	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:08:50.604    16:52:43	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:08:50.604    16:52:43	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:08:50.604    16:52:43	-- scripts/common.sh@335 -- # IFS=.-:
00:08:50.604    16:52:43	-- scripts/common.sh@335 -- # read -ra ver1
00:08:50.604    16:52:43	-- scripts/common.sh@336 -- # IFS=.-:
00:08:50.604    16:52:43	-- scripts/common.sh@336 -- # read -ra ver2
00:08:50.604    16:52:43	-- scripts/common.sh@337 -- # local 'op=<'
00:08:50.604    16:52:43	-- scripts/common.sh@339 -- # ver1_l=2
00:08:50.604    16:52:43	-- scripts/common.sh@340 -- # ver2_l=1
00:08:50.604    16:52:43	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:08:50.604    16:52:43	-- scripts/common.sh@343 -- # case "$op" in
00:08:50.604    16:52:43	-- scripts/common.sh@344 -- # : 1
00:08:50.604    16:52:43	-- scripts/common.sh@363 -- # (( v = 0 ))
00:08:50.604    16:52:43	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:08:50.604     16:52:43	-- scripts/common.sh@364 -- # decimal 1
00:08:50.604     16:52:43	-- scripts/common.sh@352 -- # local d=1
00:08:50.604     16:52:43	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:08:50.604     16:52:43	-- scripts/common.sh@354 -- # echo 1
00:08:50.604    16:52:43	-- scripts/common.sh@364 -- # ver1[v]=1
00:08:50.604     16:52:43	-- scripts/common.sh@365 -- # decimal 2
00:08:50.604     16:52:43	-- scripts/common.sh@352 -- # local d=2
00:08:50.604     16:52:43	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:08:50.604     16:52:43	-- scripts/common.sh@354 -- # echo 2
00:08:50.604    16:52:43	-- scripts/common.sh@365 -- # ver2[v]=2
00:08:50.604    16:52:43	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:08:50.604    16:52:43	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:08:50.604    16:52:43	-- scripts/common.sh@367 -- # return 0
00:08:50.604    16:52:43	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:08:50.604    16:52:43	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:08:50.604  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:50.604  		--rc genhtml_branch_coverage=1
00:08:50.604  		--rc genhtml_function_coverage=1
00:08:50.604  		--rc genhtml_legend=1
00:08:50.604  		--rc geninfo_all_blocks=1
00:08:50.604  		--rc geninfo_unexecuted_blocks=1
00:08:50.604  		
00:08:50.604  		'
00:08:50.604    16:52:43	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:08:50.604  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:50.604  		--rc genhtml_branch_coverage=1
00:08:50.604  		--rc genhtml_function_coverage=1
00:08:50.604  		--rc genhtml_legend=1
00:08:50.604  		--rc geninfo_all_blocks=1
00:08:50.604  		--rc geninfo_unexecuted_blocks=1
00:08:50.604  		
00:08:50.604  		'
00:08:50.604    16:52:43	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:08:50.604  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:50.604  		--rc genhtml_branch_coverage=1
00:08:50.604  		--rc genhtml_function_coverage=1
00:08:50.604  		--rc genhtml_legend=1
00:08:50.604  		--rc geninfo_all_blocks=1
00:08:50.604  		--rc geninfo_unexecuted_blocks=1
00:08:50.604  		
00:08:50.604  		'
00:08:50.604    16:52:43	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:08:50.604  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:50.604  		--rc genhtml_branch_coverage=1
00:08:50.604  		--rc genhtml_function_coverage=1
00:08:50.604  		--rc genhtml_legend=1
00:08:50.604  		--rc geninfo_all_blocks=1
00:08:50.604  		--rc geninfo_unexecuted_blocks=1
00:08:50.604  		
00:08:50.604  		'
00:08:50.604   16:52:43	-- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test
00:08:50.862  OK
00:08:50.862   16:52:43	-- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT
00:08:50.862  
00:08:50.862  real	0m0.273s
00:08:50.862  user	0m0.182s
00:08:50.862  sys	0m0.120s
00:08:50.862   16:52:43	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:08:50.862   16:52:43	-- common/autotest_common.sh@10 -- # set +x
00:08:50.862  ************************************
00:08:50.862  END TEST rpc_client
00:08:50.862  ************************************
00:08:50.862   16:52:43	-- spdk/autotest.sh@165 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh
00:08:50.862   16:52:43	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:08:50.862   16:52:43	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:08:50.862   16:52:43	-- common/autotest_common.sh@10 -- # set +x
00:08:50.862  ************************************
00:08:50.862  START TEST json_config
00:08:50.862  ************************************
00:08:50.862   16:52:43	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh
00:08:50.862    16:52:43	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:08:50.862     16:52:43	-- common/autotest_common.sh@1690 -- # lcov --version
00:08:50.862     16:52:43	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:08:51.120    16:52:43	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:08:51.120    16:52:43	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:08:51.120    16:52:43	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:08:51.120    16:52:43	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:08:51.120    16:52:43	-- scripts/common.sh@335 -- # IFS=.-:
00:08:51.120    16:52:43	-- scripts/common.sh@335 -- # read -ra ver1
00:08:51.120    16:52:43	-- scripts/common.sh@336 -- # IFS=.-:
00:08:51.120    16:52:43	-- scripts/common.sh@336 -- # read -ra ver2
00:08:51.120    16:52:43	-- scripts/common.sh@337 -- # local 'op=<'
00:08:51.120    16:52:43	-- scripts/common.sh@339 -- # ver1_l=2
00:08:51.120    16:52:43	-- scripts/common.sh@340 -- # ver2_l=1
00:08:51.120    16:52:43	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:08:51.120    16:52:43	-- scripts/common.sh@343 -- # case "$op" in
00:08:51.120    16:52:43	-- scripts/common.sh@344 -- # : 1
00:08:51.120    16:52:43	-- scripts/common.sh@363 -- # (( v = 0 ))
00:08:51.120    16:52:43	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:08:51.120     16:52:43	-- scripts/common.sh@364 -- # decimal 1
00:08:51.120     16:52:43	-- scripts/common.sh@352 -- # local d=1
00:08:51.120     16:52:43	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:08:51.120     16:52:43	-- scripts/common.sh@354 -- # echo 1
00:08:51.120    16:52:43	-- scripts/common.sh@364 -- # ver1[v]=1
00:08:51.120     16:52:43	-- scripts/common.sh@365 -- # decimal 2
00:08:51.120     16:52:43	-- scripts/common.sh@352 -- # local d=2
00:08:51.120     16:52:43	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:08:51.120     16:52:43	-- scripts/common.sh@354 -- # echo 2
00:08:51.120    16:52:43	-- scripts/common.sh@365 -- # ver2[v]=2
00:08:51.120    16:52:43	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:08:51.120    16:52:43	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:08:51.121    16:52:43	-- scripts/common.sh@367 -- # return 0
00:08:51.121    16:52:43	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:08:51.121    16:52:43	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:08:51.121  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:51.121  		--rc genhtml_branch_coverage=1
00:08:51.121  		--rc genhtml_function_coverage=1
00:08:51.121  		--rc genhtml_legend=1
00:08:51.121  		--rc geninfo_all_blocks=1
00:08:51.121  		--rc geninfo_unexecuted_blocks=1
00:08:51.121  		
00:08:51.121  		'
00:08:51.121    16:52:43	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:08:51.121  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:51.121  		--rc genhtml_branch_coverage=1
00:08:51.121  		--rc genhtml_function_coverage=1
00:08:51.121  		--rc genhtml_legend=1
00:08:51.121  		--rc geninfo_all_blocks=1
00:08:51.121  		--rc geninfo_unexecuted_blocks=1
00:08:51.121  		
00:08:51.121  		'
00:08:51.121    16:52:43	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:08:51.121  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:51.121  		--rc genhtml_branch_coverage=1
00:08:51.121  		--rc genhtml_function_coverage=1
00:08:51.121  		--rc genhtml_legend=1
00:08:51.121  		--rc geninfo_all_blocks=1
00:08:51.121  		--rc geninfo_unexecuted_blocks=1
00:08:51.121  		
00:08:51.121  		'
00:08:51.121    16:52:43	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:08:51.121  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:51.121  		--rc genhtml_branch_coverage=1
00:08:51.121  		--rc genhtml_function_coverage=1
00:08:51.121  		--rc genhtml_legend=1
00:08:51.121  		--rc geninfo_all_blocks=1
00:08:51.121  		--rc geninfo_unexecuted_blocks=1
00:08:51.121  		
00:08:51.121  		'
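The trace above is scripts/common.sh deciding whether the installed lcov (1.15) is older than 2 before enabling the extra branch/function coverage options: cmp_versions splits both version strings on IFS=.-: into arrays and compares the components index by index. A minimal standalone sketch of the same idea (lt_version is an illustrative name, not the actual SPDK helper):

    lt_version() {
        # Split dotted versions into component arrays, as cmp_versions does.
        local IFS=.-:
        local -a v1 v2
        read -ra v1 <<< "$1"
        read -ra v2 <<< "$2"
        local i max=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for (( i = 0; i < max; i++ )); do
            # Missing components count as 0, so 1.15 vs 2 compares 1 < 2 first.
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1   # equal is not less-than
    }
    lt_version 1.15 2 && echo "old lcov: pass the --rc branch/function coverage options"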
00:08:51.121   16:52:43	-- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:08:51.121     16:52:43	-- nvmf/common.sh@7 -- # uname -s
00:08:51.121    16:52:43	-- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:08:51.121    16:52:43	-- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:08:51.121    16:52:43	-- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:08:51.121    16:52:43	-- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:08:51.121    16:52:43	-- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:08:51.121    16:52:43	-- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:08:51.121    16:52:43	-- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:08:51.121    16:52:43	-- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:08:51.121    16:52:43	-- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:08:51.121     16:52:43	-- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:08:51.121    16:52:43	-- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:47d8708b-214f-42fe-9313-56f5b7c8a020
00:08:51.121    16:52:43	-- nvmf/common.sh@18 -- # NVME_HOSTID=47d8708b-214f-42fe-9313-56f5b7c8a020
00:08:51.121    16:52:43	-- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:08:51.121    16:52:43	-- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:08:51.121    16:52:43	-- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback
00:08:51.121    16:52:43	-- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:08:51.121     16:52:43	-- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]]
00:08:51.121     16:52:43	-- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:08:51.121     16:52:43	-- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:08:51.121      16:52:43	-- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:08:51.121      16:52:43	-- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:08:51.121      16:52:43	-- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:08:51.121      16:52:43	-- paths/export.sh@5 -- # export PATH
00:08:51.121      16:52:43	-- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:08:51.121    16:52:43	-- nvmf/common.sh@46 -- # : 0
00:08:51.121    16:52:43	-- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID
00:08:51.121    16:52:43	-- nvmf/common.sh@48 -- # build_nvmf_app_args
00:08:51.121    16:52:43	-- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']'
00:08:51.121    16:52:43	-- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:08:51.121    16:52:43	-- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:08:51.121    16:52:43	-- nvmf/common.sh@32 -- # '[' -n '' ']'
00:08:51.121    16:52:43	-- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']'
00:08:51.121    16:52:43	-- nvmf/common.sh@50 -- # have_pci_nics=0
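Each time paths/export.sh is sourced it prepends the Go, protoc and golangci directories again, which is why the PATH echoed above carries several duplicate entries. Duplicates are harmless to lookup but easy to collapse; a sketch of one way to do it (dedupe_path is an illustrative helper, not part of the SPDK scripts):

    dedupe_path() {
        local entry out=
        local IFS=:
        for entry in $PATH; do           # split PATH on colons
            case ":$out:" in
                *":$entry:"*) ;;         # already kept, skip the duplicate
                *) out=${out:+$out:}$entry ;;
            esac
        done
        printf '%s\n' "$out"
    }
    PATH=$(dedupe_path)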
00:08:51.121   16:52:43	-- json_config/json_config.sh@10 -- # [[ 0 -eq 1 ]]
00:08:51.121   16:52:43	-- json_config/json_config.sh@14 -- # [[ 0 -ne 1 ]]
00:08:51.121   16:52:43	-- json_config/json_config.sh@14 -- # [[ 0 -eq 1 ]]
00:08:51.121   16:52:43	-- json_config/json_config.sh@25 -- # (( SPDK_TEST_BLOCKDEV + 	SPDK_TEST_ISCSI + 	SPDK_TEST_NVMF + 	SPDK_TEST_VHOST + 	SPDK_TEST_VHOST_INIT + 	SPDK_TEST_RBD == 0 ))
00:08:51.121   16:52:43	-- json_config/json_config.sh@30 -- # app_pid=(['target']='' ['initiator']='')
00:08:51.121   16:52:43	-- json_config/json_config.sh@30 -- # declare -A app_pid
00:08:51.121   16:52:43	-- json_config/json_config.sh@31 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock')
00:08:51.121   16:52:43	-- json_config/json_config.sh@31 -- # declare -A app_socket
00:08:51.121   16:52:43	-- json_config/json_config.sh@32 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024')
00:08:51.121   16:52:43	-- json_config/json_config.sh@32 -- # declare -A app_params
00:08:51.121   16:52:43	-- json_config/json_config.sh@33 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json')
00:08:51.121   16:52:43	-- json_config/json_config.sh@33 -- # declare -A configs_path
00:08:51.121   16:52:43	-- json_config/json_config.sh@43 -- # last_event_id=0
00:08:51.121   16:52:43	-- json_config/json_config.sh@418 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR
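json_config.sh keys everything off associative arrays so the same helpers can drive both a target and an initiator app: app_pid, app_socket, app_params and configs_path are declare -A maps indexed by the app name, and an ERR trap routes any failed command into on_error_exit. A condensed sketch of the pattern (the on_error_exit body here is illustrative; the real helper lives in json_config.sh):

    declare -A app_socket=([target]=/var/tmp/spdk_tgt.sock
                           [initiator]=/var/tmp/spdk_initiator.sock)
    declare -A app_params=([target]='-m 0x1 -s 1024'
                           [initiator]='-m 0x2 -g -u -s 1024')
    on_error_exit() { echo "ERROR: $1 failed at line $2" >&2; exit 1; }
    trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR
    echo "target RPC socket: ${app_socket[target]}"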
00:08:51.121  INFO: JSON configuration test init
00:08:51.121   16:52:43	-- json_config/json_config.sh@419 -- # echo 'INFO: JSON configuration test init'
00:08:51.121   16:52:43	-- json_config/json_config.sh@420 -- # json_config_test_init
00:08:51.121   16:52:43	-- json_config/json_config.sh@315 -- # timing_enter json_config_test_init
00:08:51.121   16:52:43	-- common/autotest_common.sh@722 -- # xtrace_disable
00:08:51.121   16:52:43	-- common/autotest_common.sh@10 -- # set +x
00:08:51.121   16:52:43	-- json_config/json_config.sh@316 -- # timing_enter json_config_setup_target
00:08:51.121   16:52:43	-- common/autotest_common.sh@722 -- # xtrace_disable
00:08:51.121   16:52:43	-- common/autotest_common.sh@10 -- # set +x
00:08:51.121   16:52:43	-- json_config/json_config.sh@318 -- # json_config_test_start_app target --wait-for-rpc
00:08:51.121   16:52:43	-- json_config/json_config.sh@98 -- # local app=target
00:08:51.121   16:52:43	-- json_config/json_config.sh@99 -- # shift
00:08:51.121   16:52:43	-- json_config/json_config.sh@101 -- # [[ -n 22 ]]
00:08:51.121   16:52:43	-- json_config/json_config.sh@102 -- # [[ -z '' ]]
00:08:51.121   16:52:43	-- json_config/json_config.sh@104 -- # local app_extra_params=
00:08:51.121   16:52:43	-- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]]
00:08:51.121   16:52:43	-- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]]
00:08:51.121   16:52:43	-- json_config/json_config.sh@111 -- # app_pid[$app]=115112
00:08:51.121  Waiting for target to run...
00:08:51.121   16:52:43	-- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...'
00:08:51.121   16:52:43	-- json_config/json_config.sh@114 -- # waitforlisten 115112 /var/tmp/spdk_tgt.sock
00:08:51.121   16:52:43	-- common/autotest_common.sh@829 -- # '[' -z 115112 ']'
00:08:51.121   16:52:43	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock
00:08:51.121   16:52:43	-- common/autotest_common.sh@834 -- # local max_retries=100
00:08:51.121   16:52:43	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...'
00:08:51.121  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...
00:08:51.121   16:52:43	-- common/autotest_common.sh@838 -- # xtrace_disable
00:08:51.121   16:52:43	-- common/autotest_common.sh@10 -- # set +x
00:08:51.121   16:52:43	-- json_config/json_config.sh@110 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc
00:08:51.121  [2024-11-19 16:52:43.915599] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:08:51.121  [2024-11-19 16:52:43.916097] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115112 ]
00:08:51.687  [2024-11-19 16:52:44.510598] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:51.687  [2024-11-19 16:52:44.542515] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:08:51.687  [2024-11-19 16:52:44.543101] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:08:52.253   16:52:44	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:08:52.253   16:52:44	-- common/autotest_common.sh@862 -- # return 0
00:08:52.253   16:52:44	-- json_config/json_config.sh@115 -- # echo ''
00:08:52.253  
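The target was launched in the background with --wait-for-rpc, and waitforlisten then polled until pid 115112 was both alive and answering on /var/tmp/spdk_tgt.sock. A sketch of that polling loop, assuming rpc.py's -s/-t options and the rpc_get_methods method behave as in the SPDK tree (wait_for_rpc_socket is an illustrative name; the path is relative to the repo root):

    wait_for_rpc_socket() {
        local pid=$1 sock=$2 i
        for (( i = 0; i < 100; i++ )); do            # max_retries=100 above
            kill -0 "$pid" 2>/dev/null || return 1   # app exited early
            scripts/rpc.py -s "$sock" -t 1 rpc_get_methods \
                &>/dev/null && return 0              # socket is answering
            sleep 0.5
        done
        return 1
    }
    wait_for_rpc_socket 115112 /var/tmp/spdk_tgt.sock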
00:08:52.253   16:52:44	-- json_config/json_config.sh@322 -- # create_accel_config
00:08:52.253   16:52:44	-- json_config/json_config.sh@146 -- # timing_enter create_accel_config
00:08:52.253   16:52:44	-- common/autotest_common.sh@722 -- # xtrace_disable
00:08:52.253   16:52:44	-- common/autotest_common.sh@10 -- # set +x
00:08:52.253   16:52:44	-- json_config/json_config.sh@148 -- # [[ 0 -eq 1 ]]
00:08:52.253   16:52:44	-- json_config/json_config.sh@154 -- # timing_exit create_accel_config
00:08:52.253   16:52:44	-- common/autotest_common.sh@728 -- # xtrace_disable
00:08:52.253   16:52:44	-- common/autotest_common.sh@10 -- # set +x
00:08:52.253   16:52:44	-- json_config/json_config.sh@326 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems
00:08:52.253   16:52:44	-- json_config/json_config.sh@327 -- # tgt_rpc load_config
00:08:52.253   16:52:44	-- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config
00:08:52.511   16:52:45	-- json_config/json_config.sh@329 -- # tgt_check_notification_types
00:08:52.511   16:52:45	-- json_config/json_config.sh@46 -- # timing_enter tgt_check_notification_types
00:08:52.511   16:52:45	-- common/autotest_common.sh@722 -- # xtrace_disable
00:08:52.511   16:52:45	-- common/autotest_common.sh@10 -- # set +x
00:08:52.511   16:52:45	-- json_config/json_config.sh@48 -- # local ret=0
00:08:52.511   16:52:45	-- json_config/json_config.sh@49 -- # enabled_types=('bdev_register' 'bdev_unregister')
00:08:52.511   16:52:45	-- json_config/json_config.sh@49 -- # local enabled_types
00:08:52.511    16:52:45	-- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types
00:08:52.511    16:52:45	-- json_config/json_config.sh@51 -- # jq -r '.[]'
00:08:52.511    16:52:45	-- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types
00:08:52.770   16:52:45	-- json_config/json_config.sh@51 -- # get_types=('bdev_register' 'bdev_unregister')
00:08:52.770   16:52:45	-- json_config/json_config.sh@51 -- # local get_types
00:08:52.770   16:52:45	-- json_config/json_config.sh@52 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]]
00:08:52.770   16:52:45	-- json_config/json_config.sh@57 -- # timing_exit tgt_check_notification_types
00:08:52.770   16:52:45	-- common/autotest_common.sh@728 -- # xtrace_disable
00:08:52.770   16:52:45	-- common/autotest_common.sh@10 -- # set +x
00:08:53.029   16:52:45	-- json_config/json_config.sh@58 -- # return 0
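tgt_check_notification_types simply asked the target which notification types are enabled and compared the answer against the expected pair. Reduced to its essentials (path abbreviated relative to the repo root; jq -r '.[]' flattens the JSON array to one type per line):

    enabled_types=(bdev_register bdev_unregister)
    get_types=($(scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types | jq -r '.[]'))
    [[ ${get_types[*]} == "${enabled_types[*]}" ]] \
        || { echo "ERROR: unexpected notification types: ${get_types[*]}" >&2; exit 1; }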
00:08:53.029   16:52:45	-- json_config/json_config.sh@331 -- # [[ 1 -eq 1 ]]
00:08:53.029   16:52:45	-- json_config/json_config.sh@332 -- # create_bdev_subsystem_config
00:08:53.029   16:52:45	-- json_config/json_config.sh@158 -- # timing_enter create_bdev_subsystem_config
00:08:53.029   16:52:45	-- common/autotest_common.sh@722 -- # xtrace_disable
00:08:53.029   16:52:45	-- common/autotest_common.sh@10 -- # set +x
00:08:53.029   16:52:45	-- json_config/json_config.sh@160 -- # expected_notifications=()
00:08:53.029   16:52:45	-- json_config/json_config.sh@160 -- # local expected_notifications
00:08:53.029   16:52:45	-- json_config/json_config.sh@164 -- # expected_notifications+=($(get_notifications))
00:08:53.029    16:52:45	-- json_config/json_config.sh@164 -- # get_notifications
00:08:53.029    16:52:45	-- json_config/json_config.sh@62 -- # local ev_type ev_ctx event_id
00:08:53.029    16:52:45	-- json_config/json_config.sh@64 -- # IFS=:
00:08:53.029    16:52:45	-- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id
00:08:53.029     16:52:45	-- json_config/json_config.sh@61 -- # tgt_rpc notify_get_notifications -i 0
00:08:53.029     16:52:45	-- json_config/json_config.sh@61 -- # jq -r '.[] | "\(.type):\(.ctx):\(.id)"'
00:08:53.029     16:52:45	-- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_notifications -i 0
00:08:53.287    16:52:45	-- json_config/json_config.sh@65 -- # echo bdev_register:Nvme0n1
00:08:53.287    16:52:45	-- json_config/json_config.sh@64 -- # IFS=:
00:08:53.287    16:52:45	-- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id
00:08:53.287   16:52:45	-- json_config/json_config.sh@166 -- # [[ 1 -eq 1 ]]
00:08:53.287   16:52:45	-- json_config/json_config.sh@167 -- # local lvol_store_base_bdev=Nvme0n1
00:08:53.287   16:52:45	-- json_config/json_config.sh@169 -- # tgt_rpc bdev_split_create Nvme0n1 2
00:08:53.287   16:52:45	-- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_split_create Nvme0n1 2
00:08:53.546  Nvme0n1p0 Nvme0n1p1
00:08:53.546   16:52:46	-- json_config/json_config.sh@170 -- # tgt_rpc bdev_split_create Malloc0 3
00:08:53.546   16:52:46	-- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_split_create Malloc0 3
00:08:53.804  [2024-11-19 16:52:46.444424] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0
00:08:53.804  [2024-11-19 16:52:46.445089] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0
00:08:53.804  
00:08:53.804   16:52:46	-- json_config/json_config.sh@171 -- # tgt_rpc bdev_malloc_create 8 4096 --name Malloc3
00:08:53.804   16:52:46	-- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 4096 --name Malloc3
00:08:53.804  Malloc3
00:08:53.804   16:52:46	-- json_config/json_config.sh@172 -- # tgt_rpc bdev_passthru_create -b Malloc3 -p PTBdevFromMalloc3
00:08:53.804   16:52:46	-- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_passthru_create -b Malloc3 -p PTBdevFromMalloc3
00:08:54.064  [2024-11-19 16:52:46.872573] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3
00:08:54.064  [2024-11-19 16:52:46.873006] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:08:54.064  [2024-11-19 16:52:46.873164] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80
00:08:54.064  [2024-11-19 16:52:46.873307] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:08:54.064  [2024-11-19 16:52:46.876485] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:08:54.064  [2024-11-19 16:52:46.876699] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: PTBdevFromMalloc3
00:08:54.064  PTBdevFromMalloc3
00:08:54.064   16:52:46	-- json_config/json_config.sh@174 -- # tgt_rpc bdev_null_create Null0 32 512
00:08:54.064   16:52:46	-- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_null_create Null0 32 512
00:08:54.323  Null0
00:08:54.323   16:52:47	-- json_config/json_config.sh@176 -- # tgt_rpc bdev_malloc_create 32 512 --name Malloc0
00:08:54.323   16:52:47	-- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 32 512 --name Malloc0
00:08:54.582  Malloc0
00:08:54.582   16:52:47	-- json_config/json_config.sh@177 -- # tgt_rpc bdev_malloc_create 16 4096 --name Malloc1
00:08:54.582   16:52:47	-- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 16 4096 --name Malloc1
00:08:54.841  Malloc1
00:08:54.841   16:52:47	-- json_config/json_config.sh@190 -- # expected_notifications+=(bdev_register:${lvol_store_base_bdev}p1 bdev_register:${lvol_store_base_bdev}p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1)
00:08:54.841   16:52:47	-- json_config/json_config.sh@193 -- # dd if=/dev/zero of=/sample_aio bs=1024 count=102400
00:08:55.409  102400+0 records in
00:08:55.409  102400+0 records out
00:08:55.409  104857600 bytes (105 MB, 100 MiB) copied, 0.350455 s, 299 MB/s
00:08:55.409   16:52:47	-- json_config/json_config.sh@194 -- # tgt_rpc bdev_aio_create /sample_aio aio_disk 1024
00:08:55.409   16:52:47	-- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_aio_create /sample_aio aio_disk 1024
00:08:55.409  aio_disk
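The aio_disk bdev above is backed by a plain file: dd zero-fills a 100 MiB file and bdev_aio_create registers it with a 1 KiB block size. The two commands as run in the trace (rpc.py path abbreviated):

    dd if=/dev/zero of=/sample_aio bs=1024 count=102400    # 100 MiB backing file
    scripts/rpc.py -s /var/tmp/spdk_tgt.sock \
        bdev_aio_create /sample_aio aio_disk 1024          # block size 1024 bytes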
00:08:55.409   16:52:48	-- json_config/json_config.sh@195 -- # expected_notifications+=(bdev_register:aio_disk)
00:08:55.409   16:52:48	-- json_config/json_config.sh@200 -- # tgt_rpc bdev_lvol_create_lvstore -c 1048576 Nvme0n1p0 lvs_test
00:08:55.409   16:52:48	-- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create_lvstore -c 1048576 Nvme0n1p0 lvs_test
00:08:55.667  ca3ed476-7ca8-4d35-8d21-29c8c07214db
00:08:55.667   16:52:48	-- json_config/json_config.sh@207 -- # expected_notifications+=("bdev_register:$(tgt_rpc bdev_lvol_create -l lvs_test lvol0 32)" "bdev_register:$(tgt_rpc bdev_lvol_create -l lvs_test -t lvol1 32)" "bdev_register:$(tgt_rpc bdev_lvol_snapshot lvs_test/lvol0 snapshot0)" "bdev_register:$(tgt_rpc bdev_lvol_clone lvs_test/snapshot0 clone0)")
00:08:55.667    16:52:48	-- json_config/json_config.sh@207 -- # tgt_rpc bdev_lvol_create -l lvs_test lvol0 32
00:08:55.667    16:52:48	-- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create -l lvs_test lvol0 32
00:08:55.926    16:52:48	-- json_config/json_config.sh@207 -- # tgt_rpc bdev_lvol_create -l lvs_test -t lvol1 32
00:08:55.926    16:52:48	-- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create -l lvs_test -t lvol1 32
00:08:56.184    16:52:48	-- json_config/json_config.sh@207 -- # tgt_rpc bdev_lvol_snapshot lvs_test/lvol0 snapshot0
00:08:56.184    16:52:48	-- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_snapshot lvs_test/lvol0 snapshot0
00:08:56.442    16:52:49	-- json_config/json_config.sh@207 -- # tgt_rpc bdev_lvol_clone lvs_test/snapshot0 clone0
00:08:56.442    16:52:49	-- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_clone lvs_test/snapshot0 clone0
00:08:56.702   16:52:49	-- json_config/json_config.sh@210 -- # [[ 0 -eq 1 ]]
00:08:56.702   16:52:49	-- json_config/json_config.sh@225 -- # [[ 0 -eq 1 ]]
00:08:56.702   16:52:49	-- json_config/json_config.sh@231 -- # tgt_check_notifications bdev_register:Nvme0n1 bdev_register:Nvme0n1p1 bdev_register:Nvme0n1p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1 bdev_register:aio_disk bdev_register:ee197116-ba79-4851-92bb-494b5095177b bdev_register:314d5c89-60a6-4251-8580-6b6b770900f0 bdev_register:01eddeb4-3a68-4a21-90b4-89317f03d4f1 bdev_register:609b88cb-b362-4641-bb4e-78eaf67720fa
00:08:56.702   16:52:49	-- json_config/json_config.sh@70 -- # local events_to_check
00:08:56.702   16:52:49	-- json_config/json_config.sh@71 -- # local recorded_events
00:08:56.702   16:52:49	-- json_config/json_config.sh@74 -- # events_to_check=($(printf '%s\n' "$@" | sort))
00:08:56.702    16:52:49	-- json_config/json_config.sh@74 -- # printf '%s\n' bdev_register:Nvme0n1 bdev_register:Nvme0n1p1 bdev_register:Nvme0n1p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1 bdev_register:aio_disk bdev_register:ee197116-ba79-4851-92bb-494b5095177b bdev_register:314d5c89-60a6-4251-8580-6b6b770900f0 bdev_register:01eddeb4-3a68-4a21-90b4-89317f03d4f1 bdev_register:609b88cb-b362-4641-bb4e-78eaf67720fa
00:08:56.702    16:52:49	-- json_config/json_config.sh@74 -- # sort
00:08:56.702   16:52:49	-- json_config/json_config.sh@75 -- # recorded_events=($(get_notifications | sort))
00:08:56.702    16:52:49	-- json_config/json_config.sh@75 -- # get_notifications
00:08:56.702    16:52:49	-- json_config/json_config.sh@75 -- # sort
00:08:56.702    16:52:49	-- json_config/json_config.sh@62 -- # local ev_type ev_ctx event_id
00:08:56.702    16:52:49	-- json_config/json_config.sh@64 -- # IFS=:
00:08:56.702    16:52:49	-- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id
00:08:56.702     16:52:49	-- json_config/json_config.sh@61 -- # tgt_rpc notify_get_notifications -i 0
00:08:56.702     16:52:49	-- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_notifications -i 0
00:08:56.702     16:52:49	-- json_config/json_config.sh@61 -- # jq -r '.[] | "\(.type):\(.ctx):\(.id)"'
00:08:56.986    16:52:49	-- json_config/json_config.sh@65 -- # echo bdev_register:Nvme0n1
00:08:56.986    16:52:49	-- json_config/json_config.sh@64 -- # IFS=:
00:08:56.986    16:52:49	-- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id
00:08:56.986    16:52:49	-- json_config/json_config.sh@65 -- # echo bdev_register:Nvme0n1p1
00:08:56.986    16:52:49	-- json_config/json_config.sh@64 -- # IFS=:
00:08:56.986    16:52:49	-- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id
00:08:56.986    16:52:49	-- json_config/json_config.sh@65 -- # echo bdev_register:Nvme0n1p0
00:08:56.986    16:52:49	-- json_config/json_config.sh@64 -- # IFS=:
00:08:56.986    16:52:49	-- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id
00:08:56.986    16:52:49	-- json_config/json_config.sh@65 -- # echo bdev_register:Malloc3
00:08:56.986    16:52:49	-- json_config/json_config.sh@64 -- # IFS=:
00:08:56.986    16:52:49	-- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id
00:08:56.986    16:52:49	-- json_config/json_config.sh@65 -- # echo bdev_register:PTBdevFromMalloc3
00:08:56.986    16:52:49	-- json_config/json_config.sh@64 -- # IFS=:
00:08:56.986    16:52:49	-- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id
00:08:56.986    16:52:49	-- json_config/json_config.sh@65 -- # echo bdev_register:Null0
00:08:56.986    16:52:49	-- json_config/json_config.sh@64 -- # IFS=:
00:08:56.986    16:52:49	-- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id
00:08:56.986    16:52:49	-- json_config/json_config.sh@65 -- # echo bdev_register:Malloc0
00:08:56.986    16:52:49	-- json_config/json_config.sh@64 -- # IFS=:
00:08:56.986    16:52:49	-- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id
00:08:56.986    16:52:49	-- json_config/json_config.sh@65 -- # echo bdev_register:Malloc0p2
00:08:56.986    16:52:49	-- json_config/json_config.sh@64 -- # IFS=:
00:08:56.986    16:52:49	-- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id
00:08:56.986    16:52:49	-- json_config/json_config.sh@65 -- # echo bdev_register:Malloc0p1
00:08:56.986    16:52:49	-- json_config/json_config.sh@64 -- # IFS=:
00:08:56.986    16:52:49	-- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id
00:08:56.986    16:52:49	-- json_config/json_config.sh@65 -- # echo bdev_register:Malloc0p0
00:08:56.986    16:52:49	-- json_config/json_config.sh@64 -- # IFS=:
00:08:56.986    16:52:49	-- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id
00:08:56.986    16:52:49	-- json_config/json_config.sh@65 -- # echo bdev_register:Malloc1
00:08:56.986    16:52:49	-- json_config/json_config.sh@64 -- # IFS=:
00:08:56.986    16:52:49	-- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id
00:08:56.986    16:52:49	-- json_config/json_config.sh@65 -- # echo bdev_register:aio_disk
00:08:56.986    16:52:49	-- json_config/json_config.sh@64 -- # IFS=:
00:08:56.986    16:52:49	-- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id
00:08:56.986    16:52:49	-- json_config/json_config.sh@65 -- # echo bdev_register:ee197116-ba79-4851-92bb-494b5095177b
00:08:56.986    16:52:49	-- json_config/json_config.sh@64 -- # IFS=:
00:08:56.986    16:52:49	-- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id
00:08:56.986    16:52:49	-- json_config/json_config.sh@65 -- # echo bdev_register:314d5c89-60a6-4251-8580-6b6b770900f0
00:08:56.986    16:52:49	-- json_config/json_config.sh@64 -- # IFS=:
00:08:56.986    16:52:49	-- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id
00:08:56.986    16:52:49	-- json_config/json_config.sh@65 -- # echo bdev_register:01eddeb4-3a68-4a21-90b4-89317f03d4f1
00:08:56.986    16:52:49	-- json_config/json_config.sh@64 -- # IFS=:
00:08:56.986    16:52:49	-- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id
00:08:56.986    16:52:49	-- json_config/json_config.sh@65 -- # echo bdev_register:609b88cb-b362-4641-bb4e-78eaf67720fa
00:08:56.986    16:52:49	-- json_config/json_config.sh@64 -- # IFS=:
00:08:56.986    16:52:49	-- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id
00:08:56.986   16:52:49	-- json_config/json_config.sh@77 -- # [[ bdev_register:01eddeb4-3a68-4a21-90b4-89317f03d4f1 bdev_register:314d5c89-60a6-4251-8580-6b6b770900f0 bdev_register:609b88cb-b362-4641-bb4e-78eaf67720fa bdev_register:Malloc0 bdev_register:Malloc0p0 bdev_register:Malloc0p1 bdev_register:Malloc0p2 bdev_register:Malloc1 bdev_register:Malloc3 bdev_register:Null0 bdev_register:Nvme0n1 bdev_register:Nvme0n1p0 bdev_register:Nvme0n1p1 bdev_register:PTBdevFromMalloc3 bdev_register:aio_disk bdev_register:ee197116-ba79-4851-92bb-494b5095177b != \b\d\e\v\_\r\e\g\i\s\t\e\r\:\0\1\e\d\d\e\b\4\-\3\a\6\8\-\4\a\2\1\-\9\0\b\4\-\8\9\3\1\7\f\0\3\d\4\f\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\3\1\4\d\5\c\8\9\-\6\0\a\6\-\4\2\5\1\-\8\5\8\0\-\6\b\6\b\7\7\0\9\0\0\f\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\6\0\9\b\8\8\c\b\-\b\3\6\2\-\4\6\4\1\-\b\b\4\e\-\7\8\e\a\f\6\7\7\2\0\f\a\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\2\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\3\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\u\l\l\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\p\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\p\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\P\T\B\d\e\v\F\r\o\m\M\a\l\l\o\c\3\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\a\i\o\_\d\i\s\k\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\e\e\1\9\7\1\1\6\-\b\a\7\9\-\4\8\5\1\-\9\2\b\b\-\4\9\4\b\5\0\9\5\1\7\7\b ]]
00:08:56.986   16:52:49	-- json_config/json_config.sh@89 -- # cat
00:08:56.986    16:52:49	-- json_config/json_config.sh@89 -- # printf ' %s\n' bdev_register:01eddeb4-3a68-4a21-90b4-89317f03d4f1 bdev_register:314d5c89-60a6-4251-8580-6b6b770900f0 bdev_register:609b88cb-b362-4641-bb4e-78eaf67720fa bdev_register:Malloc0 bdev_register:Malloc0p0 bdev_register:Malloc0p1 bdev_register:Malloc0p2 bdev_register:Malloc1 bdev_register:Malloc3 bdev_register:Null0 bdev_register:Nvme0n1 bdev_register:Nvme0n1p0 bdev_register:Nvme0n1p1 bdev_register:PTBdevFromMalloc3 bdev_register:aio_disk bdev_register:ee197116-ba79-4851-92bb-494b5095177b
00:08:56.986  Expected events matched:
00:08:56.986   bdev_register:01eddeb4-3a68-4a21-90b4-89317f03d4f1
00:08:56.986   bdev_register:314d5c89-60a6-4251-8580-6b6b770900f0
00:08:56.986   bdev_register:609b88cb-b362-4641-bb4e-78eaf67720fa
00:08:56.986   bdev_register:Malloc0
00:08:56.986   bdev_register:Malloc0p0
00:08:56.986   bdev_register:Malloc0p1
00:08:56.986   bdev_register:Malloc0p2
00:08:56.986   bdev_register:Malloc1
00:08:56.986   bdev_register:Malloc3
00:08:56.986   bdev_register:Null0
00:08:56.986   bdev_register:Nvme0n1
00:08:56.986   bdev_register:Nvme0n1p0
00:08:56.986   bdev_register:Nvme0n1p1
00:08:56.986   bdev_register:PTBdevFromMalloc3
00:08:56.986   bdev_register:aio_disk
00:08:56.986   bdev_register:ee197116-ba79-4851-92bb-494b5095177b
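tgt_check_notifications verifies that every bdev registration produced exactly one notification and nothing else did: both the expected list and the recorded notify_get_notifications output are sorted and compared as whole strings, so ordering differences do not matter. A sketch of the comparison step (expected and get_notifications stand in for the array and helper built earlier in json_config.sh):

    events_to_check=($(printf '%s\n' "${expected[@]}" | sort))
    recorded_events=($(get_notifications | sort))
    if [[ ${events_to_check[*]} != "${recorded_events[*]}" ]]; then
        # comm -3 prints lines unique to either (already-sorted) side.
        comm -3 <(printf '%s\n' "${events_to_check[@]}") \
                <(printf '%s\n' "${recorded_events[@]}") >&2
        exit 1
    fi
    echo 'Expected events matched:'
    printf ' %s\n' "${events_to_check[@]}"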
00:08:56.986   16:52:49	-- json_config/json_config.sh@233 -- # timing_exit create_bdev_subsystem_config
00:08:56.986   16:52:49	-- common/autotest_common.sh@728 -- # xtrace_disable
00:08:56.986   16:52:49	-- common/autotest_common.sh@10 -- # set +x
00:08:56.986   16:52:49	-- json_config/json_config.sh@335 -- # [[ 0 -eq 1 ]]
00:08:56.986   16:52:49	-- json_config/json_config.sh@339 -- # [[ 0 -eq 1 ]]
00:08:56.986   16:52:49	-- json_config/json_config.sh@343 -- # [[ 0 -eq 1 ]]
00:08:56.986   16:52:49	-- json_config/json_config.sh@346 -- # timing_exit json_config_setup_target
00:08:56.987   16:52:49	-- common/autotest_common.sh@728 -- # xtrace_disable
00:08:56.987   16:52:49	-- common/autotest_common.sh@10 -- # set +x
00:08:56.987   16:52:49	-- json_config/json_config.sh@348 -- # [[ 0 -eq 1 ]]
00:08:56.987   16:52:49	-- json_config/json_config.sh@353 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck
00:08:56.987   16:52:49	-- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck
00:08:57.256  MallocBdevForConfigChangeCheck
00:08:57.256   16:52:50	-- json_config/json_config.sh@355 -- # timing_exit json_config_test_init
00:08:57.256   16:52:50	-- common/autotest_common.sh@728 -- # xtrace_disable
00:08:57.256   16:52:50	-- common/autotest_common.sh@10 -- # set +x
00:08:57.256   16:52:50	-- json_config/json_config.sh@422 -- # tgt_rpc save_config
00:08:57.256   16:52:50	-- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config
00:08:57.822  INFO: shutting down applications...
00:08:57.822   16:52:50	-- json_config/json_config.sh@424 -- # echo 'INFO: shutting down applications...'
00:08:57.822   16:52:50	-- json_config/json_config.sh@425 -- # [[ 0 -eq 1 ]]
00:08:57.822   16:52:50	-- json_config/json_config.sh@431 -- # json_config_clear target
00:08:57.822   16:52:50	-- json_config/json_config.sh@385 -- # [[ -n 22 ]]
00:08:57.822   16:52:50	-- json_config/json_config.sh@386 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config
00:08:57.822  [2024-11-19 16:52:50.573394] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev Nvme0n1p0 being removed: closing lvstore lvs_test
00:08:58.081  Calling clear_vhost_scsi_subsystem
00:08:58.081  Calling clear_iscsi_subsystem
00:08:58.081  Calling clear_vhost_blk_subsystem
00:08:58.081  Calling clear_nbd_subsystem
00:08:58.081  Calling clear_nvmf_subsystem
00:08:58.081  Calling clear_bdev_subsystem
00:08:58.081  Calling clear_accel_subsystem
00:08:58.081  Calling clear_iobuf_subsystem
00:08:58.081  Calling clear_sock_subsystem
00:08:58.081  Calling clear_vmd_subsystem
00:08:58.081  Calling clear_scheduler_subsystem
00:08:58.081   16:52:50	-- json_config/json_config.sh@390 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py
00:08:58.081   16:52:50	-- json_config/json_config.sh@396 -- # count=100
00:08:58.081   16:52:50	-- json_config/json_config.sh@397 -- # '[' 100 -gt 0 ']'
00:08:58.081   16:52:50	-- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config
00:08:58.081   16:52:50	-- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters
00:08:58.081   16:52:50	-- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty
00:08:58.339   16:52:51	-- json_config/json_config.sh@398 -- # break
00:08:58.339   16:52:51	-- json_config/json_config.sh@403 -- # '[' 100 -eq 0 ']'
00:08:58.339   16:52:51	-- json_config/json_config.sh@432 -- # json_config_test_shutdown_app target
00:08:58.339   16:52:51	-- json_config/json_config.sh@120 -- # local app=target
00:08:58.339   16:52:51	-- json_config/json_config.sh@123 -- # [[ -n 22 ]]
00:08:58.339   16:52:51	-- json_config/json_config.sh@124 -- # [[ -n 115112 ]]
00:08:58.339   16:52:51	-- json_config/json_config.sh@127 -- # kill -SIGINT 115112
00:08:58.339   16:52:51	-- json_config/json_config.sh@129 -- # (( i = 0 ))
00:08:58.339   16:52:51	-- json_config/json_config.sh@129 -- # (( i < 30 ))
00:08:58.339   16:52:51	-- json_config/json_config.sh@130 -- # kill -0 115112
00:08:58.339   16:52:51	-- json_config/json_config.sh@134 -- # sleep 0.5
00:08:58.907   16:52:51	-- json_config/json_config.sh@129 -- # (( i++ ))
00:08:58.907   16:52:51	-- json_config/json_config.sh@129 -- # (( i < 30 ))
00:08:58.907   16:52:51	-- json_config/json_config.sh@130 -- # kill -0 115112
00:08:58.907   16:52:51	-- json_config/json_config.sh@131 -- # app_pid[$app]=
00:08:58.907   16:52:51	-- json_config/json_config.sh@132 -- # break
00:08:58.907   16:52:51	-- json_config/json_config.sh@137 -- # [[ -n '' ]]
00:08:58.907  SPDK target shutdown done
00:08:58.907   16:52:51	-- json_config/json_config.sh@142 -- # echo 'SPDK target shutdown done'
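Shutdown is a SIGINT followed by a bounded poll: kill -0 only tests whether the pid still exists, so the loop exits as soon as spdk_tgt is gone or gives up after 30 half-second tries. The pattern from the trace, condensed:

    kill -SIGINT "$pid"
    for (( i = 0; i < 30; i++ )); do
        kill -0 "$pid" 2>/dev/null || break   # process has exited
        sleep 0.5
    done
    echo 'SPDK target shutdown done'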
00:08:58.907  INFO: relaunching applications...
00:08:58.907   16:52:51	-- json_config/json_config.sh@434 -- # echo 'INFO: relaunching applications...'
00:08:58.907   16:52:51	-- json_config/json_config.sh@435 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json
00:08:58.907   16:52:51	-- json_config/json_config.sh@98 -- # local app=target
00:08:58.907   16:52:51	-- json_config/json_config.sh@99 -- # shift
00:08:58.907   16:52:51	-- json_config/json_config.sh@101 -- # [[ -n 22 ]]
00:08:58.907   16:52:51	-- json_config/json_config.sh@102 -- # [[ -z '' ]]
00:08:58.907   16:52:51	-- json_config/json_config.sh@104 -- # local app_extra_params=
00:08:58.907   16:52:51	-- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]]
00:08:58.907   16:52:51	-- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]]
00:08:58.907   16:52:51	-- json_config/json_config.sh@111 -- # app_pid[$app]=115357
00:08:58.907  Waiting for target to run...
00:08:58.907   16:52:51	-- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...'
00:08:58.907   16:52:51	-- json_config/json_config.sh@114 -- # waitforlisten 115357 /var/tmp/spdk_tgt.sock
00:08:58.907   16:52:51	-- common/autotest_common.sh@829 -- # '[' -z 115357 ']'
00:08:58.907   16:52:51	-- json_config/json_config.sh@110 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json
00:08:58.907   16:52:51	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock
00:08:58.907   16:52:51	-- common/autotest_common.sh@834 -- # local max_retries=100
00:08:58.907  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...
00:08:58.907   16:52:51	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...'
00:08:58.907   16:52:51	-- common/autotest_common.sh@838 -- # xtrace_disable
00:08:58.907   16:52:51	-- common/autotest_common.sh@10 -- # set +x
00:08:58.907  [2024-11-19 16:52:51.663728] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:08:58.907  [2024-11-19 16:52:51.664402] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115357 ]
00:08:59.474  [2024-11-19 16:52:52.042052] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:59.474  [2024-11-19 16:52:52.074479] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:08:59.474  [2024-11-19 16:52:52.074981] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:08:59.474  [2024-11-19 16:52:52.220414] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Nvme0n1
00:08:59.474  [2024-11-19 16:52:52.220739] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Nvme0n1
00:08:59.474  [2024-11-19 16:52:52.228361] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0
00:08:59.474  [2024-11-19 16:52:52.228581] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0
00:08:59.474  [2024-11-19 16:52:52.236424] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3
00:08:59.474  [2024-11-19 16:52:52.236625] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3
00:08:59.474  [2024-11-19 16:52:52.236754] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:08:59.474  [2024-11-19 16:52:52.318556] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3
00:08:59.474  [2024-11-19 16:52:52.318966] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:08:59.474  [2024-11-19 16:52:52.319117] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480
00:08:59.474  [2024-11-19 16:52:52.319243] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:08:59.474  [2024-11-19 16:52:52.319960] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:08:59.474  [2024-11-19 16:52:52.320135] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: PTBdevFromMalloc3
00:08:59.732   16:52:52	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:08:59.732   16:52:52	-- common/autotest_common.sh@862 -- # return 0
00:08:59.732  
00:08:59.732   16:52:52	-- json_config/json_config.sh@115 -- # echo ''
00:08:59.732   16:52:52	-- json_config/json_config.sh@436 -- # [[ 0 -eq 1 ]]
00:08:59.732  INFO: Checking if target configuration is the same...
00:08:59.732   16:52:52	-- json_config/json_config.sh@440 -- # echo 'INFO: Checking if target configuration is the same...'
00:08:59.732    16:52:52	-- json_config/json_config.sh@441 -- # tgt_rpc save_config
00:08:59.732    16:52:52	-- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config
00:08:59.732   16:52:52	-- json_config/json_config.sh@441 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json
00:08:59.732  + '[' 2 -ne 2 ']'
00:08:59.732  +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh
00:08:59.732  ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../..
00:08:59.989  + rootdir=/home/vagrant/spdk_repo/spdk
00:08:59.989  +++ basename /dev/fd/62
00:08:59.989  ++ mktemp /tmp/62.XXX
00:08:59.989  + tmp_file_1=/tmp/62.28M
00:08:59.989  +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json
00:08:59.989  ++ mktemp /tmp/spdk_tgt_config.json.XXX
00:08:59.989  + tmp_file_2=/tmp/spdk_tgt_config.json.v2X
00:08:59.989  + ret=0
00:08:59.989  + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort
00:09:00.247  + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort
00:09:00.247  + diff -u /tmp/62.28M /tmp/spdk_tgt_config.json.v2X
00:09:00.247  INFO: JSON config files are the same
00:09:00.247  + echo 'INFO: JSON config files are the same'
00:09:00.247  + rm /tmp/62.28M /tmp/spdk_tgt_config.json.v2X
00:09:00.247  + exit 0
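json_diff.sh makes the comparison order-insensitive: both the live save_config dump (fed in through /dev/fd/62) and the on-disk spdk_tgt_config.json are passed through config_filter.py -method sort before diff -u, and exit 0 means they match. The same check can be approximated with stock jq, with the caveat that jq -S sorts object keys but not arrays, whereas the test's filter normalizes more deeply:

    scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config | jq -S . > /tmp/live.json
    jq -S . spdk_tgt_config.json > /tmp/disk.json
    if diff -u /tmp/live.json /tmp/disk.json; then
        echo 'INFO: JSON config files are the same'
    fi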
00:09:00.247   16:52:53	-- json_config/json_config.sh@442 -- # [[ 0 -eq 1 ]]
00:09:00.247  INFO: changing configuration and checking if this can be detected...
00:09:00.247   16:52:53	-- json_config/json_config.sh@447 -- # echo 'INFO: changing configuration and checking if this can be detected...'
00:09:00.247   16:52:53	-- json_config/json_config.sh@449 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck
00:09:00.247   16:52:53	-- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck
00:09:00.505   16:52:53	-- json_config/json_config.sh@450 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json
00:09:00.505    16:52:53	-- json_config/json_config.sh@450 -- # tgt_rpc save_config
00:09:00.505    16:52:53	-- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config
00:09:00.505  + '[' 2 -ne 2 ']'
00:09:00.505  +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh
00:09:00.505  ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../..
00:09:00.505  + rootdir=/home/vagrant/spdk_repo/spdk
00:09:00.505  +++ basename /dev/fd/62
00:09:00.505  ++ mktemp /tmp/62.XXX
00:09:00.505  + tmp_file_1=/tmp/62.PWS
00:09:00.505  +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json
00:09:00.505  ++ mktemp /tmp/spdk_tgt_config.json.XXX
00:09:00.505  + tmp_file_2=/tmp/spdk_tgt_config.json.1Ae
00:09:00.505  + ret=0
00:09:00.505  + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort
00:09:00.762  + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort
00:09:01.021  + diff -u /tmp/62.PWS /tmp/spdk_tgt_config.json.1Ae
00:09:01.021  + ret=1
00:09:01.021  + echo '=== Start of file: /tmp/62.PWS ==='
00:09:01.021  + cat /tmp/62.PWS
00:09:01.021  + echo '=== End of file: /tmp/62.PWS ==='
00:09:01.021  + echo ''
00:09:01.021  + echo '=== Start of file: /tmp/spdk_tgt_config.json.1Ae ==='
00:09:01.021  + cat /tmp/spdk_tgt_config.json.1Ae
00:09:01.021  + echo '=== End of file: /tmp/spdk_tgt_config.json.1Ae ==='
00:09:01.021  + echo ''
00:09:01.021  + rm /tmp/62.PWS /tmp/spdk_tgt_config.json.1Ae
00:09:01.021  + exit 1
00:09:01.021  INFO: configuration change detected.
00:09:01.021   16:52:53	-- json_config/json_config.sh@454 -- # echo 'INFO: configuration change detected.'
00:09:01.021   16:52:53	-- json_config/json_config.sh@457 -- # json_config_test_fini
00:09:01.021   16:52:53	-- json_config/json_config.sh@359 -- # timing_enter json_config_test_fini
00:09:01.021   16:52:53	-- common/autotest_common.sh@722 -- # xtrace_disable
00:09:01.021   16:52:53	-- common/autotest_common.sh@10 -- # set +x
00:09:01.021   16:52:53	-- json_config/json_config.sh@360 -- # local ret=0
00:09:01.021   16:52:53	-- json_config/json_config.sh@362 -- # [[ -n '' ]]
00:09:01.021   16:52:53	-- json_config/json_config.sh@370 -- # [[ -n 115357 ]]
00:09:01.021   16:52:53	-- json_config/json_config.sh@373 -- # cleanup_bdev_subsystem_config
00:09:01.021   16:52:53	-- json_config/json_config.sh@237 -- # timing_enter cleanup_bdev_subsystem_config
00:09:01.021   16:52:53	-- common/autotest_common.sh@722 -- # xtrace_disable
00:09:01.021   16:52:53	-- common/autotest_common.sh@10 -- # set +x
00:09:01.021   16:52:53	-- json_config/json_config.sh@239 -- # [[ 1 -eq 1 ]]
00:09:01.021   16:52:53	-- json_config/json_config.sh@240 -- # tgt_rpc bdev_lvol_delete lvs_test/clone0
00:09:01.021   16:52:53	-- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/clone0
00:09:01.280   16:52:53	-- json_config/json_config.sh@241 -- # tgt_rpc bdev_lvol_delete lvs_test/lvol0
00:09:01.280   16:52:53	-- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/lvol0
00:09:01.538   16:52:54	-- json_config/json_config.sh@242 -- # tgt_rpc bdev_lvol_delete lvs_test/snapshot0
00:09:01.538   16:52:54	-- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/snapshot0
00:09:01.796   16:52:54	-- json_config/json_config.sh@243 -- # tgt_rpc bdev_lvol_delete_lvstore -l lvs_test
00:09:01.796   16:52:54	-- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete_lvstore -l lvs_test
00:09:02.054    16:52:54	-- json_config/json_config.sh@246 -- # uname -s
00:09:02.054   16:52:54	-- json_config/json_config.sh@246 -- # [[ Linux = Linux ]]
00:09:02.054   16:52:54	-- json_config/json_config.sh@247 -- # rm -f /sample_aio
00:09:02.054   16:52:54	-- json_config/json_config.sh@250 -- # [[ 0 -eq 1 ]]
00:09:02.054   16:52:54	-- json_config/json_config.sh@254 -- # timing_exit cleanup_bdev_subsystem_config
00:09:02.054   16:52:54	-- common/autotest_common.sh@728 -- # xtrace_disable
00:09:02.054   16:52:54	-- common/autotest_common.sh@10 -- # set +x
00:09:02.054   16:52:54	-- json_config/json_config.sh@376 -- # killprocess 115357
00:09:02.054   16:52:54	-- common/autotest_common.sh@936 -- # '[' -z 115357 ']'
00:09:02.054   16:52:54	-- common/autotest_common.sh@940 -- # kill -0 115357
00:09:02.054    16:52:54	-- common/autotest_common.sh@941 -- # uname
00:09:02.054   16:52:54	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:09:02.054    16:52:54	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 115357
00:09:02.054   16:52:54	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:09:02.054  killing process with pid 115357
00:09:02.054   16:52:54	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:09:02.054   16:52:54	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 115357'
00:09:02.054   16:52:54	-- common/autotest_common.sh@955 -- # kill 115357
00:09:02.054   16:52:54	-- common/autotest_common.sh@960 -- # wait 115357
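killprocess is deliberately more careful than a bare kill: it resolves the pid's command name with ps (special-casing processes launched through sudo, which is what the reactor_0 = sudo test above is about), then kills and waits so the exit status is reaped before the test moves on. A reduced sketch (killprocess_sketch is an illustrative name; wait only works here because spdk_tgt is a child of the test shell):

    killprocess_sketch() {
        local pid=$1 name
        name=$(ps --no-headers -o comm= "$pid") || return 1
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null   # reap the child; ignore "not a child" noise
    }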
00:09:02.313   16:52:55	-- json_config/json_config.sh@379 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json
00:09:02.313   16:52:55	-- json_config/json_config.sh@380 -- # timing_exit json_config_test_fini
00:09:02.313   16:52:55	-- common/autotest_common.sh@728 -- # xtrace_disable
00:09:02.313   16:52:55	-- common/autotest_common.sh@10 -- # set +x
00:09:02.313   16:52:55	-- json_config/json_config.sh@381 -- # return 0
00:09:02.313   16:52:55	-- json_config/json_config.sh@459 -- # echo 'INFO: Success'
00:09:02.313  INFO: Success
00:09:02.313  
00:09:02.313  real	0m11.552s
00:09:02.313  user	0m17.010s
00:09:02.313  sys	0m3.012s
00:09:02.313   16:52:55	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:09:02.313   16:52:55	-- common/autotest_common.sh@10 -- # set +x
00:09:02.313  ************************************
00:09:02.313  END TEST json_config
00:09:02.313  ************************************
00:09:02.572   16:52:55	-- spdk/autotest.sh@166 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh
00:09:02.572   16:52:55	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:09:02.572   16:52:55	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:09:02.572   16:52:55	-- common/autotest_common.sh@10 -- # set +x
00:09:02.572  ************************************
00:09:02.572  START TEST json_config_extra_key
00:09:02.572  ************************************
00:09:02.572   16:52:55	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh
00:09:02.572    16:52:55	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:09:02.572     16:52:55	-- common/autotest_common.sh@1690 -- # lcov --version
00:09:02.572     16:52:55	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:09:02.572    16:52:55	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:09:02.572    16:52:55	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:09:02.572    16:52:55	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:09:02.572    16:52:55	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:09:02.572    16:52:55	-- scripts/common.sh@335 -- # IFS=.-:
00:09:02.572    16:52:55	-- scripts/common.sh@335 -- # read -ra ver1
00:09:02.572    16:52:55	-- scripts/common.sh@336 -- # IFS=.-:
00:09:02.572    16:52:55	-- scripts/common.sh@336 -- # read -ra ver2
00:09:02.572    16:52:55	-- scripts/common.sh@337 -- # local 'op=<'
00:09:02.572    16:52:55	-- scripts/common.sh@339 -- # ver1_l=2
00:09:02.572    16:52:55	-- scripts/common.sh@340 -- # ver2_l=1
00:09:02.572    16:52:55	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:09:02.572    16:52:55	-- scripts/common.sh@343 -- # case "$op" in
00:09:02.572    16:52:55	-- scripts/common.sh@344 -- # : 1
00:09:02.572    16:52:55	-- scripts/common.sh@363 -- # (( v = 0 ))
00:09:02.572    16:52:55	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:09:02.572     16:52:55	-- scripts/common.sh@364 -- # decimal 1
00:09:02.572     16:52:55	-- scripts/common.sh@352 -- # local d=1
00:09:02.572     16:52:55	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:09:02.572     16:52:55	-- scripts/common.sh@354 -- # echo 1
00:09:02.572    16:52:55	-- scripts/common.sh@364 -- # ver1[v]=1
00:09:02.572     16:52:55	-- scripts/common.sh@365 -- # decimal 2
00:09:02.572     16:52:55	-- scripts/common.sh@352 -- # local d=2
00:09:02.572     16:52:55	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:09:02.572     16:52:55	-- scripts/common.sh@354 -- # echo 2
00:09:02.572    16:52:55	-- scripts/common.sh@365 -- # ver2[v]=2
00:09:02.572    16:52:55	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:09:02.572    16:52:55	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:09:02.572    16:52:55	-- scripts/common.sh@367 -- # return 0
00:09:02.572    16:52:55	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:09:02.572    16:52:55	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:09:02.572  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:02.572  		--rc genhtml_branch_coverage=1
00:09:02.572  		--rc genhtml_function_coverage=1
00:09:02.572  		--rc genhtml_legend=1
00:09:02.572  		--rc geninfo_all_blocks=1
00:09:02.572  		--rc geninfo_unexecuted_blocks=1
00:09:02.572  		
00:09:02.572  		'
00:09:02.572    16:52:55	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:09:02.572  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:02.572  		--rc genhtml_branch_coverage=1
00:09:02.572  		--rc genhtml_function_coverage=1
00:09:02.572  		--rc genhtml_legend=1
00:09:02.572  		--rc geninfo_all_blocks=1
00:09:02.572  		--rc geninfo_unexecuted_blocks=1
00:09:02.572  		
00:09:02.572  		'
00:09:02.572    16:52:55	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:09:02.572  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:02.572  		--rc genhtml_branch_coverage=1
00:09:02.572  		--rc genhtml_function_coverage=1
00:09:02.572  		--rc genhtml_legend=1
00:09:02.572  		--rc geninfo_all_blocks=1
00:09:02.572  		--rc geninfo_unexecuted_blocks=1
00:09:02.572  		
00:09:02.572  		'
00:09:02.572    16:52:55	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:09:02.572  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:02.572  		--rc genhtml_branch_coverage=1
00:09:02.572  		--rc genhtml_function_coverage=1
00:09:02.572  		--rc genhtml_legend=1
00:09:02.572  		--rc geninfo_all_blocks=1
00:09:02.572  		--rc geninfo_unexecuted_blocks=1
00:09:02.572  		
00:09:02.572  		'
00:09:02.572   16:52:55	-- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:09:02.572     16:52:55	-- nvmf/common.sh@7 -- # uname -s
00:09:02.572    16:52:55	-- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:09:02.572    16:52:55	-- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:09:02.572    16:52:55	-- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:09:02.572    16:52:55	-- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:09:02.572    16:52:55	-- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:09:02.572    16:52:55	-- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:09:02.572    16:52:55	-- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:09:02.572    16:52:55	-- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:09:02.572    16:52:55	-- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:09:02.572     16:52:55	-- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:09:02.572    16:52:55	-- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:aa84b4c6-c906-4b9d-b837-cd21ddd93b43
00:09:02.572    16:52:55	-- nvmf/common.sh@18 -- # NVME_HOSTID=aa84b4c6-c906-4b9d-b837-cd21ddd93b43
00:09:02.572    16:52:55	-- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:09:02.572    16:52:55	-- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:09:02.572    16:52:55	-- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback
00:09:02.572    16:52:55	-- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:09:02.572     16:52:55	-- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]]
00:09:02.572     16:52:55	-- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:09:02.572     16:52:55	-- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:09:02.572      16:52:55	-- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:09:02.572      16:52:55	-- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:09:02.573      16:52:55	-- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:09:02.573      16:52:55	-- paths/export.sh@5 -- # export PATH
00:09:02.573      16:52:55	-- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:09:02.573    16:52:55	-- nvmf/common.sh@46 -- # : 0
00:09:02.573    16:52:55	-- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID
00:09:02.573    16:52:55	-- nvmf/common.sh@48 -- # build_nvmf_app_args
00:09:02.573    16:52:55	-- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']'
00:09:02.573    16:52:55	-- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:09:02.573    16:52:55	-- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:09:02.573    16:52:55	-- nvmf/common.sh@32 -- # '[' -n '' ']'
00:09:02.573    16:52:55	-- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']'
00:09:02.573    16:52:55	-- nvmf/common.sh@50 -- # have_pci_nics=0
00:09:02.573   16:52:55	-- json_config/json_config_extra_key.sh@16 -- # app_pid=(['target']='')
00:09:02.573   16:52:55	-- json_config/json_config_extra_key.sh@16 -- # declare -A app_pid
00:09:02.573   16:52:55	-- json_config/json_config_extra_key.sh@17 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock')
00:09:02.573   16:52:55	-- json_config/json_config_extra_key.sh@17 -- # declare -A app_socket
00:09:02.573   16:52:55	-- json_config/json_config_extra_key.sh@18 -- # app_params=(['target']='-m 0x1 -s 1024')
00:09:02.573   16:52:55	-- json_config/json_config_extra_key.sh@18 -- # declare -A app_params
00:09:02.573   16:52:55	-- json_config/json_config_extra_key.sh@19 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json')
00:09:02.573   16:52:55	-- json_config/json_config_extra_key.sh@19 -- # declare -A configs_path
00:09:02.573   16:52:55	-- json_config/json_config_extra_key.sh@74 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR
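Note: the four declare -A lines above are the test's per-app bookkeeping: pid, RPC socket, CLI parameters, and config path are each keyed by a logical app name, here only 'target'. A stripped-down sketch of that pattern; spdk_tgt_stub is a placeholder command, not part of the traced script:

    #!/usr/bin/env bash
    # Per-app bookkeeping keyed by logical name, as in json_config_extra_key.sh.
    declare -A app_pid=(['target']='')
    declare -A app_socket=(['target']='/var/tmp/spdk_tgt.sock')
    declare -A app_params=(['target']='-m 0x1 -s 1024')
    app=target
    spdk_tgt_stub ${app_params[$app]} -r "${app_socket[$app]}" &  # placeholder binary
    app_pid[$app]=$!                                              # record pid under the app's key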
00:09:02.573  INFO: launching applications...
00:09:02.573   16:52:55	-- json_config/json_config_extra_key.sh@76 -- # echo 'INFO: launching applications...'
00:09:02.573   16:52:55	-- json_config/json_config_extra_key.sh@77 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json
00:09:02.573   16:52:55	-- json_config/json_config_extra_key.sh@24 -- # local app=target
00:09:02.573   16:52:55	-- json_config/json_config_extra_key.sh@25 -- # shift
00:09:02.573   16:52:55	-- json_config/json_config_extra_key.sh@27 -- # [[ -n 22 ]]
00:09:02.573   16:52:55	-- json_config/json_config_extra_key.sh@28 -- # [[ -z '' ]]
00:09:02.573   16:52:55	-- json_config/json_config_extra_key.sh@31 -- # app_pid[$app]=115530
00:09:02.573   16:52:55	-- json_config/json_config_extra_key.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json
00:09:02.573  Waiting for target to run...
00:09:02.573   16:52:55	-- json_config/json_config_extra_key.sh@33 -- # echo 'Waiting for target to run...'
00:09:02.573   16:52:55	-- json_config/json_config_extra_key.sh@34 -- # waitforlisten 115530 /var/tmp/spdk_tgt.sock
00:09:02.573   16:52:55	-- common/autotest_common.sh@829 -- # '[' -z 115530 ']'
00:09:02.573   16:52:55	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock
00:09:02.573   16:52:55	-- common/autotest_common.sh@834 -- # local max_retries=100
00:09:02.573  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...
00:09:02.573   16:52:55	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...'
00:09:02.573   16:52:55	-- common/autotest_common.sh@838 -- # xtrace_disable
00:09:02.573   16:52:55	-- common/autotest_common.sh@10 -- # set +x
00:09:02.831  [2024-11-19 16:52:55.461225] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:09:02.831  [2024-11-19 16:52:55.461440] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115530 ]
00:09:03.090  [2024-11-19 16:52:55.838516] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:03.090  [2024-11-19 16:52:55.872289] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:09:03.090  [2024-11-19 16:52:55.872708] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
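Note: between the launch at json_config_extra_key.sh@30 above and the return 0 below, waitforlisten blocks until the new process is alive and listening on /var/tmp/spdk_tgt.sock, bounded by the max_retries=100 set in the trace. SPDK's actual helper lives in autotest_common.sh; the loop below is only a simplified stand-in for that idea, not its real implementation:

    #!/usr/bin/env bash
    # Simplified waitforlisten-style poll: wait for a live pid plus its
    # UNIX-domain RPC socket, giving up after max_retries attempts.
    waitforsocket() {
        local pid=$1 rpc_addr=$2 max_retries=100 i
        for ((i = 0; i < max_retries; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1  # process exited early
            [[ -S $rpc_addr ]] && return 0          # socket is up
            sleep 0.1
        done
        return 1
    }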
00:09:03.657   16:52:56	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:09:03.657  
00:09:03.657   16:52:56	-- common/autotest_common.sh@862 -- # return 0
00:09:03.657   16:52:56	-- json_config/json_config_extra_key.sh@35 -- # echo ''
00:09:03.657  INFO: shutting down applications...
00:09:03.657   16:52:56	-- json_config/json_config_extra_key.sh@79 -- # echo 'INFO: shutting down applications...'
00:09:03.657   16:52:56	-- json_config/json_config_extra_key.sh@80 -- # json_config_test_shutdown_app target
00:09:03.657   16:52:56	-- json_config/json_config_extra_key.sh@40 -- # local app=target
00:09:03.657   16:52:56	-- json_config/json_config_extra_key.sh@43 -- # [[ -n 22 ]]
00:09:03.657   16:52:56	-- json_config/json_config_extra_key.sh@44 -- # [[ -n 115530 ]]
00:09:03.657   16:52:56	-- json_config/json_config_extra_key.sh@47 -- # kill -SIGINT 115530
00:09:03.657   16:52:56	-- json_config/json_config_extra_key.sh@49 -- # (( i = 0 ))
00:09:03.658   16:52:56	-- json_config/json_config_extra_key.sh@49 -- # (( i < 30 ))
00:09:03.658   16:52:56	-- json_config/json_config_extra_key.sh@50 -- # kill -0 115530
00:09:03.658   16:52:56	-- json_config/json_config_extra_key.sh@54 -- # sleep 0.5
00:09:04.225   16:52:56	-- json_config/json_config_extra_key.sh@49 -- # (( i++ ))
00:09:04.225   16:52:56	-- json_config/json_config_extra_key.sh@49 -- # (( i < 30 ))
00:09:04.225   16:52:56	-- json_config/json_config_extra_key.sh@50 -- # kill -0 115530
00:09:04.225   16:52:56	-- json_config/json_config_extra_key.sh@51 -- # app_pid[$app]=
00:09:04.225   16:52:56	-- json_config/json_config_extra_key.sh@52 -- # break
00:09:04.225   16:52:56	-- json_config/json_config_extra_key.sh@57 -- # [[ -n '' ]]
00:09:04.225  SPDK target shutdown done
00:09:04.225   16:52:56	-- json_config/json_config_extra_key.sh@62 -- # echo 'SPDK target shutdown done'
00:09:04.225  Success
00:09:04.225   16:52:56	-- json_config/json_config_extra_key.sh@82 -- # echo Success
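Note: the shutdown sequence is fully visible in the trace above (json_config_extra_key.sh@47-62): send SIGINT to the recorded pid, then poll kill -0 up to 30 times with a 0.5 s sleep until the process is gone, clearing app_pid and breaking out on success. The same loop, extracted into a standalone function:

    #!/usr/bin/env bash
    # Shutdown loop as traced in json_config_extra_key.sh@47-62.
    shutdown_app() {
        local pid=$1 i
        kill -SIGINT "$pid"
        for ((i = 0; i < 30; i++)); do
            if ! kill -0 "$pid" 2>/dev/null; then
                echo 'SPDK target shutdown done'
                return 0
            fi
            sleep 0.5
        done
        return 1  # still running after ~15 s
    }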
00:09:04.225  
00:09:04.225  real	0m1.679s
00:09:04.225  user	0m1.517s
00:09:04.225  sys	0m0.476s
00:09:04.225   16:52:56	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:09:04.225   16:52:56	-- common/autotest_common.sh@10 -- # set +x
00:09:04.225  ************************************
00:09:04.225  END TEST json_config_extra_key
00:09:04.225  ************************************
00:09:04.225   16:52:56	-- spdk/autotest.sh@167 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh
00:09:04.225   16:52:56	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:09:04.225   16:52:56	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:09:04.225   16:52:56	-- common/autotest_common.sh@10 -- # set +x
00:09:04.225  ************************************
00:09:04.225  START TEST alias_rpc
00:09:04.225  ************************************
00:09:04.225   16:52:56	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh
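Note: autotest.sh@167 above dispatches each suite through run_test, which is what prints the START TEST / END TEST banners and the real/user/sys timing blocks seen throughout this log. A mini-harness in the same spirit; this is an illustration, not SPDK's actual run_test:

    #!/usr/bin/env bash
    # Minimal run_test-style wrapper: banner, time the suite, banner again.
    run_test() {
        local name=$1; shift
        echo '************************************'
        echo "START TEST $name"
        echo '************************************'
        time "$@"
        echo '************************************'
        echo "END TEST $name"
        echo '************************************'
    }
    # run_test alias_rpc /path/to/alias_rpc.sh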
00:09:04.225  * Looking for test storage...
00:09:04.225  * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc
00:09:04.225    16:52:57	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:09:04.225     16:52:57	-- common/autotest_common.sh@1690 -- # lcov --version
00:09:04.225     16:52:57	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:09:04.483    16:52:57	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:09:04.483    16:52:57	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:09:04.483    16:52:57	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:09:04.483    16:52:57	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:09:04.483    16:52:57	-- scripts/common.sh@335 -- # IFS=.-:
00:09:04.483    16:52:57	-- scripts/common.sh@335 -- # read -ra ver1
00:09:04.483    16:52:57	-- scripts/common.sh@336 -- # IFS=.-:
00:09:04.483    16:52:57	-- scripts/common.sh@336 -- # read -ra ver2
00:09:04.483    16:52:57	-- scripts/common.sh@337 -- # local 'op=<'
00:09:04.483    16:52:57	-- scripts/common.sh@339 -- # ver1_l=2
00:09:04.483    16:52:57	-- scripts/common.sh@340 -- # ver2_l=1
00:09:04.483    16:52:57	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:09:04.483    16:52:57	-- scripts/common.sh@343 -- # case "$op" in
00:09:04.483    16:52:57	-- scripts/common.sh@344 -- # : 1
00:09:04.483    16:52:57	-- scripts/common.sh@363 -- # (( v = 0 ))
00:09:04.483    16:52:57	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:09:04.483     16:52:57	-- scripts/common.sh@364 -- # decimal 1
00:09:04.483     16:52:57	-- scripts/common.sh@352 -- # local d=1
00:09:04.483     16:52:57	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:09:04.483     16:52:57	-- scripts/common.sh@354 -- # echo 1
00:09:04.483    16:52:57	-- scripts/common.sh@364 -- # ver1[v]=1
00:09:04.483     16:52:57	-- scripts/common.sh@365 -- # decimal 2
00:09:04.483     16:52:57	-- scripts/common.sh@352 -- # local d=2
00:09:04.483     16:52:57	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:09:04.483     16:52:57	-- scripts/common.sh@354 -- # echo 2
00:09:04.483    16:52:57	-- scripts/common.sh@365 -- # ver2[v]=2
00:09:04.483    16:52:57	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:09:04.483    16:52:57	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:09:04.483    16:52:57	-- scripts/common.sh@367 -- # return 0
00:09:04.483    16:52:57	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:09:04.483    16:52:57	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:09:04.483  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:04.483  		--rc genhtml_branch_coverage=1
00:09:04.483  		--rc genhtml_function_coverage=1
00:09:04.483  		--rc genhtml_legend=1
00:09:04.483  		--rc geninfo_all_blocks=1
00:09:04.483  		--rc geninfo_unexecuted_blocks=1
00:09:04.483  		
00:09:04.483  		'
00:09:04.483    16:52:57	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:09:04.483  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:04.483  		--rc genhtml_branch_coverage=1
00:09:04.483  		--rc genhtml_function_coverage=1
00:09:04.483  		--rc genhtml_legend=1
00:09:04.483  		--rc geninfo_all_blocks=1
00:09:04.483  		--rc geninfo_unexecuted_blocks=1
00:09:04.483  		
00:09:04.483  		'
00:09:04.483    16:52:57	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:09:04.483  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:04.483  		--rc genhtml_branch_coverage=1
00:09:04.483  		--rc genhtml_function_coverage=1
00:09:04.483  		--rc genhtml_legend=1
00:09:04.483  		--rc geninfo_all_blocks=1
00:09:04.483  		--rc geninfo_unexecuted_blocks=1
00:09:04.483  		
00:09:04.483  		'
00:09:04.483    16:52:57	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:09:04.483  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:04.483  		--rc genhtml_branch_coverage=1
00:09:04.483  		--rc genhtml_function_coverage=1
00:09:04.483  		--rc genhtml_legend=1
00:09:04.483  		--rc geninfo_all_blocks=1
00:09:04.483  		--rc geninfo_unexecuted_blocks=1
00:09:04.483  		
00:09:04.483  		'
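Note: the scripts/common.sh trace above (@332-367) is a component-wise version compare: both strings are split on '.', '-', or ':' via IFS, then numeric fields are compared left to right; here lt 1.15 2 resolves on the first field (1 < 2), so the option set for newer lcov releases is exported. A condensed sketch that assumes purely numeric fields (the traced version also validates digits through its decimal helper):

    #!/usr/bin/env bash
    # Component-wise "less than" version compare, condensed from the trace.
    lt() {
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for ((v = 0; v < max; v++)); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1  # greater: not less-than
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0  # strictly less
        done
        return 1  # equal
    }
    lt 1.15 2 && echo '1.15 < 2'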
00:09:04.483   16:52:57	-- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR
00:09:04.483   16:52:57	-- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=115617
00:09:04.483   16:52:57	-- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 115617
00:09:04.483   16:52:57	-- common/autotest_common.sh@829 -- # '[' -z 115617 ']'
00:09:04.483   16:52:57	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:04.483   16:52:57	-- common/autotest_common.sh@834 -- # local max_retries=100
00:09:04.483   16:52:57	-- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:09:04.483   16:52:57	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:09:04.483  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:09:04.483   16:52:57	-- common/autotest_common.sh@838 -- # xtrace_disable
00:09:04.483   16:52:57	-- common/autotest_common.sh@10 -- # set +x
00:09:04.483  [2024-11-19 16:52:57.218189] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:09:04.483  [2024-11-19 16:52:57.218390] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115617 ]
00:09:04.742  [2024-11-19 16:52:57.360020] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:04.742  [2024-11-19 16:52:57.434831] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:09:04.742  [2024-11-19 16:52:57.435606] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:09:05.308   16:52:58	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:09:05.308   16:52:58	-- common/autotest_common.sh@862 -- # return 0
00:09:05.308   16:52:58	-- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i
00:09:05.567   16:52:58	-- alias_rpc/alias_rpc.sh@19 -- # killprocess 115617
00:09:05.567   16:52:58	-- common/autotest_common.sh@936 -- # '[' -z 115617 ']'
00:09:05.567   16:52:58	-- common/autotest_common.sh@940 -- # kill -0 115617
00:09:05.567    16:52:58	-- common/autotest_common.sh@941 -- # uname
00:09:05.567   16:52:58	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:09:05.567    16:52:58	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 115617
00:09:05.567   16:52:58	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:09:05.567   16:52:58	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:09:05.567   16:52:58	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 115617'
00:09:05.567  killing process with pid 115617
00:09:05.567   16:52:58	-- common/autotest_common.sh@955 -- # kill 115617
00:09:05.567   16:52:58	-- common/autotest_common.sh@960 -- # wait 115617
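Note: killprocess, traced above at autotest_common.sh@936-960, guards its teardown: require a non-empty pid, confirm the process is alive with kill -0, check the process name via uname and ps (the sudo branch is not taken here because the name is reactor_0), then kill and wait. Extracted below, with the sudo special case elided:

    #!/usr/bin/env bash
    # killprocess-style teardown following the traced guard sequence.
    killprocess() {
        local pid=$1
        [[ -n $pid ]] || return 1
        kill -0 "$pid" || return 1                   # must still be running
        if [[ $(uname) == Linux ]]; then
            local name
            name=$(ps --no-headers -o comm= "$pid")  # e.g. reactor_0
            [[ $name == sudo ]] && return 1          # sudo handling elided in this sketch
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"
    }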
00:09:06.134  
00:09:06.134  real	0m1.824s
00:09:06.134  user	0m1.783s
00:09:06.134  sys	0m0.593s
00:09:06.134   16:52:58	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:09:06.134  ************************************
00:09:06.134  END TEST alias_rpc
00:09:06.134  ************************************
00:09:06.134   16:52:58	-- common/autotest_common.sh@10 -- # set +x
00:09:06.134   16:52:58	-- spdk/autotest.sh@169 -- # [[ 0 -eq 0 ]]
00:09:06.134   16:52:58	-- spdk/autotest.sh@170 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh
00:09:06.134   16:52:58	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:09:06.134   16:52:58	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:09:06.134   16:52:58	-- common/autotest_common.sh@10 -- # set +x
00:09:06.134  ************************************
00:09:06.134  START TEST spdkcli_tcp
00:09:06.134  ************************************
00:09:06.134   16:52:58	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh
00:09:06.134  * Looking for test storage...
00:09:06.134  * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli
00:09:06.134    16:52:58	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:09:06.134     16:52:58	-- common/autotest_common.sh@1690 -- # lcov --version
00:09:06.134     16:52:58	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:09:06.134    16:52:58	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:09:06.134    16:52:58	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:09:06.134    16:52:58	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:09:06.134    16:52:58	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:09:06.134    16:52:58	-- scripts/common.sh@335 -- # IFS=.-:
00:09:06.134    16:52:58	-- scripts/common.sh@335 -- # read -ra ver1
00:09:06.134    16:52:58	-- scripts/common.sh@336 -- # IFS=.-:
00:09:06.134    16:52:58	-- scripts/common.sh@336 -- # read -ra ver2
00:09:06.134    16:52:58	-- scripts/common.sh@337 -- # local 'op=<'
00:09:06.134    16:52:58	-- scripts/common.sh@339 -- # ver1_l=2
00:09:06.134    16:52:58	-- scripts/common.sh@340 -- # ver2_l=1
00:09:06.134    16:52:58	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:09:06.134    16:52:58	-- scripts/common.sh@343 -- # case "$op" in
00:09:06.134    16:52:58	-- scripts/common.sh@344 -- # : 1
00:09:06.134    16:52:58	-- scripts/common.sh@363 -- # (( v = 0 ))
00:09:06.134    16:52:58	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:09:06.134     16:52:58	-- scripts/common.sh@364 -- # decimal 1
00:09:06.134     16:52:58	-- scripts/common.sh@352 -- # local d=1
00:09:06.134     16:52:58	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:09:06.134     16:52:58	-- scripts/common.sh@354 -- # echo 1
00:09:06.134    16:52:58	-- scripts/common.sh@364 -- # ver1[v]=1
00:09:06.394     16:52:58	-- scripts/common.sh@365 -- # decimal 2
00:09:06.394     16:52:58	-- scripts/common.sh@352 -- # local d=2
00:09:06.394     16:52:58	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:09:06.394     16:52:58	-- scripts/common.sh@354 -- # echo 2
00:09:06.394    16:52:58	-- scripts/common.sh@365 -- # ver2[v]=2
00:09:06.394    16:52:58	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:09:06.394    16:52:58	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:09:06.394    16:52:58	-- scripts/common.sh@367 -- # return 0
00:09:06.394    16:52:58	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:09:06.394    16:52:58	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:09:06.394  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:06.394  		--rc genhtml_branch_coverage=1
00:09:06.394  		--rc genhtml_function_coverage=1
00:09:06.394  		--rc genhtml_legend=1
00:09:06.394  		--rc geninfo_all_blocks=1
00:09:06.394  		--rc geninfo_unexecuted_blocks=1
00:09:06.394  		
00:09:06.394  		'
00:09:06.394    16:52:58	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:09:06.394  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:06.394  		--rc genhtml_branch_coverage=1
00:09:06.394  		--rc genhtml_function_coverage=1
00:09:06.394  		--rc genhtml_legend=1
00:09:06.394  		--rc geninfo_all_blocks=1
00:09:06.394  		--rc geninfo_unexecuted_blocks=1
00:09:06.394  		
00:09:06.394  		'
00:09:06.394    16:52:58	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:09:06.394  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:06.394  		--rc genhtml_branch_coverage=1
00:09:06.394  		--rc genhtml_function_coverage=1
00:09:06.394  		--rc genhtml_legend=1
00:09:06.394  		--rc geninfo_all_blocks=1
00:09:06.394  		--rc geninfo_unexecuted_blocks=1
00:09:06.394  		
00:09:06.394  		'
00:09:06.394    16:52:58	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:09:06.394  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:06.394  		--rc genhtml_branch_coverage=1
00:09:06.394  		--rc genhtml_function_coverage=1
00:09:06.394  		--rc genhtml_legend=1
00:09:06.394  		--rc geninfo_all_blocks=1
00:09:06.394  		--rc geninfo_unexecuted_blocks=1
00:09:06.394  		
00:09:06.394  		'
00:09:06.394   16:52:58	-- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh
00:09:06.394    16:52:58	-- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py
00:09:06.394    16:52:58	-- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py
00:09:06.394   16:52:58	-- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1
00:09:06.394   16:52:58	-- spdkcli/tcp.sh@19 -- # PORT=9998
00:09:06.394   16:52:58	-- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT
00:09:06.394   16:52:58	-- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp
00:09:06.394   16:52:58	-- common/autotest_common.sh@722 -- # xtrace_disable
00:09:06.394   16:52:58	-- common/autotest_common.sh@10 -- # set +x
00:09:06.394   16:52:59	-- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=115714
00:09:06.394   16:52:59	-- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0
00:09:06.394   16:52:59	-- spdkcli/tcp.sh@27 -- # waitforlisten 115714
00:09:06.394   16:52:59	-- common/autotest_common.sh@829 -- # '[' -z 115714 ']'
00:09:06.394   16:52:59	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:06.394   16:52:59	-- common/autotest_common.sh@834 -- # local max_retries=100
00:09:06.394   16:52:59	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:09:06.394  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:09:06.394   16:52:59	-- common/autotest_common.sh@838 -- # xtrace_disable
00:09:06.394   16:52:59	-- common/autotest_common.sh@10 -- # set +x
00:09:06.394  [2024-11-19 16:52:59.080134] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:09:06.394  [2024-11-19 16:52:59.080439] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115714 ]
00:09:06.394  [2024-11-19 16:52:59.240837] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2
00:09:06.652  [2024-11-19 16:52:59.295751] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:09:06.652  [2024-11-19 16:52:59.296378] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:09:06.652  [2024-11-19 16:52:59.296381] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
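Note: this target was started with -m 0x3 -p 0, where -m is a hex bitmask of CPU cores (and -p, as far as I know, selects the main core). Bits 0 and 1 are set, which matches the "Total cores available: 2" notice and the two reactors on cores 0 and 1 above. Decoding such a mask is simple arithmetic:

    #!/usr/bin/env bash
    # Decode a core mask like -m 0x3 into the cores it selects.
    mask=0x3
    for ((c = 0; c < 64; c++)); do
        (( (mask >> c) & 1 )) && echo "core $c"   # prints core 0 and core 1
    done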
00:09:07.217   16:52:59	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:09:07.217   16:52:59	-- common/autotest_common.sh@862 -- # return 0
00:09:07.217   16:52:59	-- spdkcli/tcp.sh@31 -- # socat_pid=115727
00:09:07.217   16:52:59	-- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods
00:09:07.217   16:52:59	-- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock
00:09:07.477  [
00:09:07.477    "spdk_get_version",
00:09:07.477    "rpc_get_methods",
00:09:07.477    "trace_get_info",
00:09:07.477    "trace_get_tpoint_group_mask",
00:09:07.477    "trace_disable_tpoint_group",
00:09:07.477    "trace_enable_tpoint_group",
00:09:07.477    "trace_clear_tpoint_mask",
00:09:07.477    "trace_set_tpoint_mask",
00:09:07.477    "framework_get_pci_devices",
00:09:07.477    "framework_get_config",
00:09:07.477    "framework_get_subsystems",
00:09:07.477    "iobuf_get_stats",
00:09:07.477    "iobuf_set_options",
00:09:07.477    "sock_set_default_impl",
00:09:07.477    "sock_impl_set_options",
00:09:07.477    "sock_impl_get_options",
00:09:07.477    "vmd_rescan",
00:09:07.477    "vmd_remove_device",
00:09:07.477    "vmd_enable",
00:09:07.477    "accel_get_stats",
00:09:07.477    "accel_set_options",
00:09:07.477    "accel_set_driver",
00:09:07.477    "accel_crypto_key_destroy",
00:09:07.477    "accel_crypto_keys_get",
00:09:07.477    "accel_crypto_key_create",
00:09:07.477    "accel_assign_opc",
00:09:07.477    "accel_get_module_info",
00:09:07.477    "accel_get_opc_assignments",
00:09:07.477    "notify_get_notifications",
00:09:07.477    "notify_get_types",
00:09:07.477    "bdev_get_histogram",
00:09:07.477    "bdev_enable_histogram",
00:09:07.477    "bdev_set_qos_limit",
00:09:07.477    "bdev_set_qd_sampling_period",
00:09:07.477    "bdev_get_bdevs",
00:09:07.477    "bdev_reset_iostat",
00:09:07.477    "bdev_get_iostat",
00:09:07.477    "bdev_examine",
00:09:07.477    "bdev_wait_for_examine",
00:09:07.477    "bdev_set_options",
00:09:07.477    "scsi_get_devices",
00:09:07.477    "thread_set_cpumask",
00:09:07.477    "framework_get_scheduler",
00:09:07.477    "framework_set_scheduler",
00:09:07.477    "framework_get_reactors",
00:09:07.477    "thread_get_io_channels",
00:09:07.477    "thread_get_pollers",
00:09:07.477    "thread_get_stats",
00:09:07.477    "framework_monitor_context_switch",
00:09:07.477    "spdk_kill_instance",
00:09:07.477    "log_enable_timestamps",
00:09:07.477    "log_get_flags",
00:09:07.477    "log_clear_flag",
00:09:07.477    "log_set_flag",
00:09:07.477    "log_get_level",
00:09:07.477    "log_set_level",
00:09:07.477    "log_get_print_level",
00:09:07.477    "log_set_print_level",
00:09:07.477    "framework_enable_cpumask_locks",
00:09:07.477    "framework_disable_cpumask_locks",
00:09:07.477    "framework_wait_init",
00:09:07.477    "framework_start_init",
00:09:07.477    "virtio_blk_create_transport",
00:09:07.477    "virtio_blk_get_transports",
00:09:07.477    "vhost_controller_set_coalescing",
00:09:07.477    "vhost_get_controllers",
00:09:07.477    "vhost_delete_controller",
00:09:07.477    "vhost_create_blk_controller",
00:09:07.477    "vhost_scsi_controller_remove_target",
00:09:07.477    "vhost_scsi_controller_add_target",
00:09:07.477    "vhost_start_scsi_controller",
00:09:07.477    "vhost_create_scsi_controller",
00:09:07.478    "nbd_get_disks",
00:09:07.478    "nbd_stop_disk",
00:09:07.478    "nbd_start_disk",
00:09:07.478    "env_dpdk_get_mem_stats",
00:09:07.478    "nvmf_subsystem_get_listeners",
00:09:07.478    "nvmf_subsystem_get_qpairs",
00:09:07.478    "nvmf_subsystem_get_controllers",
00:09:07.478    "nvmf_get_stats",
00:09:07.478    "nvmf_get_transports",
00:09:07.478    "nvmf_create_transport",
00:09:07.478    "nvmf_get_targets",
00:09:07.478    "nvmf_delete_target",
00:09:07.478    "nvmf_create_target",
00:09:07.478    "nvmf_subsystem_allow_any_host",
00:09:07.478    "nvmf_subsystem_remove_host",
00:09:07.478    "nvmf_subsystem_add_host",
00:09:07.478    "nvmf_subsystem_remove_ns",
00:09:07.478    "nvmf_subsystem_add_ns",
00:09:07.478    "nvmf_subsystem_listener_set_ana_state",
00:09:07.478    "nvmf_discovery_get_referrals",
00:09:07.478    "nvmf_discovery_remove_referral",
00:09:07.478    "nvmf_discovery_add_referral",
00:09:07.478    "nvmf_subsystem_remove_listener",
00:09:07.478    "nvmf_subsystem_add_listener",
00:09:07.478    "nvmf_delete_subsystem",
00:09:07.478    "nvmf_create_subsystem",
00:09:07.478    "nvmf_get_subsystems",
00:09:07.478    "nvmf_set_crdt",
00:09:07.478    "nvmf_set_config",
00:09:07.478    "nvmf_set_max_subsystems",
00:09:07.478    "iscsi_set_options",
00:09:07.478    "iscsi_get_auth_groups",
00:09:07.478    "iscsi_auth_group_remove_secret",
00:09:07.478    "iscsi_auth_group_add_secret",
00:09:07.478    "iscsi_delete_auth_group",
00:09:07.478    "iscsi_create_auth_group",
00:09:07.478    "iscsi_set_discovery_auth",
00:09:07.478    "iscsi_get_options",
00:09:07.478    "iscsi_target_node_request_logout",
00:09:07.478    "iscsi_target_node_set_redirect",
00:09:07.478    "iscsi_target_node_set_auth",
00:09:07.478    "iscsi_target_node_add_lun",
00:09:07.478    "iscsi_get_connections",
00:09:07.478    "iscsi_portal_group_set_auth",
00:09:07.478    "iscsi_start_portal_group",
00:09:07.478    "iscsi_delete_portal_group",
00:09:07.478    "iscsi_create_portal_group",
00:09:07.478    "iscsi_get_portal_groups",
00:09:07.478    "iscsi_delete_target_node",
00:09:07.478    "iscsi_target_node_remove_pg_ig_maps",
00:09:07.478    "iscsi_target_node_add_pg_ig_maps",
00:09:07.478    "iscsi_create_target_node",
00:09:07.478    "iscsi_get_target_nodes",
00:09:07.478    "iscsi_delete_initiator_group",
00:09:07.478    "iscsi_initiator_group_remove_initiators",
00:09:07.478    "iscsi_initiator_group_add_initiators",
00:09:07.478    "iscsi_create_initiator_group",
00:09:07.478    "iscsi_get_initiator_groups",
00:09:07.478    "iaa_scan_accel_module",
00:09:07.478    "dsa_scan_accel_module",
00:09:07.478    "ioat_scan_accel_module",
00:09:07.478    "accel_error_inject_error",
00:09:07.478    "bdev_iscsi_delete",
00:09:07.478    "bdev_iscsi_create",
00:09:07.478    "bdev_iscsi_set_options",
00:09:07.478    "bdev_virtio_attach_controller",
00:09:07.478    "bdev_virtio_scsi_get_devices",
00:09:07.478    "bdev_virtio_detach_controller",
00:09:07.478    "bdev_virtio_blk_set_hotplug",
00:09:07.478    "bdev_ftl_set_property",
00:09:07.478    "bdev_ftl_get_properties",
00:09:07.478    "bdev_ftl_get_stats",
00:09:07.478    "bdev_ftl_unmap",
00:09:07.478    "bdev_ftl_unload",
00:09:07.478    "bdev_ftl_delete",
00:09:07.478    "bdev_ftl_load",
00:09:07.478    "bdev_ftl_create",
00:09:07.478    "bdev_aio_delete",
00:09:07.478    "bdev_aio_rescan",
00:09:07.478    "bdev_aio_create",
00:09:07.478    "blobfs_create",
00:09:07.478    "blobfs_detect",
00:09:07.478    "blobfs_set_cache_size",
00:09:07.478    "bdev_zone_block_delete",
00:09:07.478    "bdev_zone_block_create",
00:09:07.478    "bdev_delay_delete",
00:09:07.478    "bdev_delay_create",
00:09:07.478    "bdev_delay_update_latency",
00:09:07.478    "bdev_split_delete",
00:09:07.478    "bdev_split_create",
00:09:07.478    "bdev_error_inject_error",
00:09:07.478    "bdev_error_delete",
00:09:07.478    "bdev_error_create",
00:09:07.478    "bdev_raid_set_options",
00:09:07.478    "bdev_raid_remove_base_bdev",
00:09:07.478    "bdev_raid_add_base_bdev",
00:09:07.478    "bdev_raid_delete",
00:09:07.478    "bdev_raid_create",
00:09:07.478    "bdev_raid_get_bdevs",
00:09:07.478    "bdev_lvol_grow_lvstore",
00:09:07.478    "bdev_lvol_get_lvols",
00:09:07.478    "bdev_lvol_get_lvstores",
00:09:07.478    "bdev_lvol_delete",
00:09:07.478    "bdev_lvol_set_read_only",
00:09:07.478    "bdev_lvol_resize",
00:09:07.478    "bdev_lvol_decouple_parent",
00:09:07.478    "bdev_lvol_inflate",
00:09:07.478    "bdev_lvol_rename",
00:09:07.478    "bdev_lvol_clone_bdev",
00:09:07.478    "bdev_lvol_clone",
00:09:07.478    "bdev_lvol_snapshot",
00:09:07.478    "bdev_lvol_create",
00:09:07.478    "bdev_lvol_delete_lvstore",
00:09:07.478    "bdev_lvol_rename_lvstore",
00:09:07.478    "bdev_lvol_create_lvstore",
00:09:07.478    "bdev_passthru_delete",
00:09:07.478    "bdev_passthru_create",
00:09:07.478    "bdev_nvme_cuse_unregister",
00:09:07.478    "bdev_nvme_cuse_register",
00:09:07.478    "bdev_opal_new_user",
00:09:07.478    "bdev_opal_set_lock_state",
00:09:07.478    "bdev_opal_delete",
00:09:07.478    "bdev_opal_get_info",
00:09:07.478    "bdev_opal_create",
00:09:07.478    "bdev_nvme_opal_revert",
00:09:07.478    "bdev_nvme_opal_init",
00:09:07.478    "bdev_nvme_send_cmd",
00:09:07.478    "bdev_nvme_get_path_iostat",
00:09:07.478    "bdev_nvme_get_mdns_discovery_info",
00:09:07.478    "bdev_nvme_stop_mdns_discovery",
00:09:07.478    "bdev_nvme_start_mdns_discovery",
00:09:07.478    "bdev_nvme_set_multipath_policy",
00:09:07.478    "bdev_nvme_set_preferred_path",
00:09:07.478    "bdev_nvme_get_io_paths",
00:09:07.478    "bdev_nvme_remove_error_injection",
00:09:07.478    "bdev_nvme_add_error_injection",
00:09:07.478    "bdev_nvme_get_discovery_info",
00:09:07.478    "bdev_nvme_stop_discovery",
00:09:07.478    "bdev_nvme_start_discovery",
00:09:07.478    "bdev_nvme_get_controller_health_info",
00:09:07.478    "bdev_nvme_disable_controller",
00:09:07.478    "bdev_nvme_enable_controller",
00:09:07.478    "bdev_nvme_reset_controller",
00:09:07.478    "bdev_nvme_get_transport_statistics",
00:09:07.478    "bdev_nvme_apply_firmware",
00:09:07.478    "bdev_nvme_detach_controller",
00:09:07.478    "bdev_nvme_get_controllers",
00:09:07.478    "bdev_nvme_attach_controller",
00:09:07.478    "bdev_nvme_set_hotplug",
00:09:07.478    "bdev_nvme_set_options",
00:09:07.478    "bdev_null_resize",
00:09:07.478    "bdev_null_delete",
00:09:07.478    "bdev_null_create",
00:09:07.478    "bdev_malloc_delete",
00:09:07.478    "bdev_malloc_create"
00:09:07.478  ]
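Note: the JSON array above is the rpc_get_methods reply, fetched over the TCP bridge set up at tcp.sh@30: socat listens on 127.0.0.1:9998 and forwards the connection to the target's UNIX-domain socket, so rpc.py can talk to an IP/port instead of a socket path. Both commands below are taken verbatim from the trace; note that a plain TCP-LISTEN serves a single connection (adding ,fork would serve several):

    #!/usr/bin/env bash
    # Bridge TCP :9998 to the target's UNIX-domain RPC socket (from the trace):
    socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
    socat_pid=$!
    # Query the method list over TCP; -r retries the connection, -t is a timeout:
    scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods
    kill "$socat_pid" 2>/dev/null || true   # socat may already have exited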
00:09:07.478   16:53:00	-- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp
00:09:07.478   16:53:00	-- common/autotest_common.sh@728 -- # xtrace_disable
00:09:07.478   16:53:00	-- common/autotest_common.sh@10 -- # set +x
00:09:07.478   16:53:00	-- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT
00:09:07.478   16:53:00	-- spdkcli/tcp.sh@38 -- # killprocess 115714
00:09:07.478   16:53:00	-- common/autotest_common.sh@936 -- # '[' -z 115714 ']'
00:09:07.478   16:53:00	-- common/autotest_common.sh@940 -- # kill -0 115714
00:09:07.478    16:53:00	-- common/autotest_common.sh@941 -- # uname
00:09:07.478   16:53:00	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:09:07.478    16:53:00	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 115714
00:09:07.478   16:53:00	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:09:07.478  killing process with pid 115714
00:09:07.478   16:53:00	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:09:07.478   16:53:00	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 115714'
00:09:07.478   16:53:00	-- common/autotest_common.sh@955 -- # kill 115714
00:09:07.478   16:53:00	-- common/autotest_common.sh@960 -- # wait 115714
00:09:08.044  ************************************
00:09:08.044  END TEST spdkcli_tcp
00:09:08.044  ************************************
00:09:08.044  
00:09:08.044  real	0m1.904s
00:09:08.044  user	0m3.314s
00:09:08.044  sys	0m0.571s
00:09:08.044   16:53:00	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:09:08.044   16:53:00	-- common/autotest_common.sh@10 -- # set +x
00:09:08.044   16:53:00	-- spdk/autotest.sh@173 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh
00:09:08.044   16:53:00	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:09:08.044   16:53:00	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:09:08.044   16:53:00	-- common/autotest_common.sh@10 -- # set +x
00:09:08.044  ************************************
00:09:08.044  START TEST dpdk_mem_utility
00:09:08.044  ************************************
00:09:08.044   16:53:00	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh
00:09:08.044  * Looking for test storage...
00:09:08.044  * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility
00:09:08.044    16:53:00	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:09:08.044     16:53:00	-- common/autotest_common.sh@1690 -- # lcov --version
00:09:08.044     16:53:00	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:09:08.303    16:53:00	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:09:08.303    16:53:00	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:09:08.303    16:53:00	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:09:08.303    16:53:00	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:09:08.303    16:53:00	-- scripts/common.sh@335 -- # IFS=.-:
00:09:08.303    16:53:00	-- scripts/common.sh@335 -- # read -ra ver1
00:09:08.303    16:53:00	-- scripts/common.sh@336 -- # IFS=.-:
00:09:08.303    16:53:00	-- scripts/common.sh@336 -- # read -ra ver2
00:09:08.303    16:53:00	-- scripts/common.sh@337 -- # local 'op=<'
00:09:08.303    16:53:00	-- scripts/common.sh@339 -- # ver1_l=2
00:09:08.303    16:53:00	-- scripts/common.sh@340 -- # ver2_l=1
00:09:08.303    16:53:00	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:09:08.303    16:53:00	-- scripts/common.sh@343 -- # case "$op" in
00:09:08.303    16:53:00	-- scripts/common.sh@344 -- # : 1
00:09:08.303    16:53:00	-- scripts/common.sh@363 -- # (( v = 0 ))
00:09:08.303    16:53:00	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:09:08.303     16:53:00	-- scripts/common.sh@364 -- # decimal 1
00:09:08.303     16:53:00	-- scripts/common.sh@352 -- # local d=1
00:09:08.303     16:53:00	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:09:08.303     16:53:00	-- scripts/common.sh@354 -- # echo 1
00:09:08.303    16:53:00	-- scripts/common.sh@364 -- # ver1[v]=1
00:09:08.303     16:53:00	-- scripts/common.sh@365 -- # decimal 2
00:09:08.303     16:53:00	-- scripts/common.sh@352 -- # local d=2
00:09:08.303     16:53:00	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:09:08.303     16:53:00	-- scripts/common.sh@354 -- # echo 2
00:09:08.303    16:53:00	-- scripts/common.sh@365 -- # ver2[v]=2
00:09:08.303    16:53:00	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:09:08.304    16:53:00	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:09:08.304    16:53:00	-- scripts/common.sh@367 -- # return 0
00:09:08.304    16:53:00	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:09:08.304    16:53:00	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:09:08.304  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:08.304  		--rc genhtml_branch_coverage=1
00:09:08.304  		--rc genhtml_function_coverage=1
00:09:08.304  		--rc genhtml_legend=1
00:09:08.304  		--rc geninfo_all_blocks=1
00:09:08.304  		--rc geninfo_unexecuted_blocks=1
00:09:08.304  		
00:09:08.304  		'
00:09:08.304    16:53:00	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:09:08.304  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:08.304  		--rc genhtml_branch_coverage=1
00:09:08.304  		--rc genhtml_function_coverage=1
00:09:08.304  		--rc genhtml_legend=1
00:09:08.304  		--rc geninfo_all_blocks=1
00:09:08.304  		--rc geninfo_unexecuted_blocks=1
00:09:08.304  		
00:09:08.304  		'
00:09:08.304    16:53:00	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:09:08.304  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:08.304  		--rc genhtml_branch_coverage=1
00:09:08.304  		--rc genhtml_function_coverage=1
00:09:08.304  		--rc genhtml_legend=1
00:09:08.304  		--rc geninfo_all_blocks=1
00:09:08.304  		--rc geninfo_unexecuted_blocks=1
00:09:08.304  		
00:09:08.304  		'
00:09:08.304    16:53:00	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:09:08.304  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:08.304  		--rc genhtml_branch_coverage=1
00:09:08.304  		--rc genhtml_function_coverage=1
00:09:08.304  		--rc genhtml_legend=1
00:09:08.304  		--rc geninfo_all_blocks=1
00:09:08.304  		--rc geninfo_unexecuted_blocks=1
00:09:08.304  		
00:09:08.304  		'
00:09:08.304   16:53:00	-- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py
00:09:08.304   16:53:00	-- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=115806
00:09:08.304   16:53:00	-- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 115806
00:09:08.304   16:53:00	-- common/autotest_common.sh@829 -- # '[' -z 115806 ']'
00:09:08.304   16:53:00	-- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:09:08.304   16:53:00	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:08.304   16:53:00	-- common/autotest_common.sh@834 -- # local max_retries=100
00:09:08.304  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:09:08.304   16:53:00	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:09:08.304   16:53:00	-- common/autotest_common.sh@838 -- # xtrace_disable
00:09:08.304   16:53:00	-- common/autotest_common.sh@10 -- # set +x
00:09:08.304  [2024-11-19 16:53:01.048322] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:09:08.304  [2024-11-19 16:53:01.048637] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115806 ]
00:09:08.562  [2024-11-19 16:53:01.201257] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:08.563  [2024-11-19 16:53:01.257004] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:09:08.563  [2024-11-19 16:53:01.257654] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:09:09.566   16:53:01	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:09:09.566   16:53:01	-- common/autotest_common.sh@862 -- # return 0
00:09:09.566   16:53:01	-- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT
00:09:09.566   16:53:01	-- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats
00:09:09.566   16:53:01	-- common/autotest_common.sh@561 -- # xtrace_disable
00:09:09.566   16:53:01	-- common/autotest_common.sh@10 -- # set +x
00:09:09.566  {
00:09:09.566  "filename": "/tmp/spdk_mem_dump.txt"
00:09:09.566  }
00:09:09.566   16:53:01	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:09.566   16:53:01	-- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py
00:09:09.566  DPDK memory size 814.000000 MiB in 1 heap(s)
00:09:09.566  1 heaps totaling size 814.000000 MiB
00:09:09.566    size:  814.000000 MiB heap id: 0
00:09:09.566  end heaps----------
00:09:09.566  8 mempools totaling size 598.116089 MiB
00:09:09.566    size:  212.674988 MiB name: PDU_immediate_data_Pool
00:09:09.566    size:  158.602051 MiB name: PDU_data_out_Pool
00:09:09.566    size:   84.521057 MiB name: bdev_io_115806
00:09:09.566    size:   51.011292 MiB name: evtpool_115806
00:09:09.566    size:   50.003479 MiB name: msgpool_115806
00:09:09.566    size:   21.763794 MiB name: PDU_Pool
00:09:09.566    size:   19.513306 MiB name: SCSI_TASK_Pool
00:09:09.566    size:    0.026123 MiB name: Session_Pool
00:09:09.566  end mempools-------
00:09:09.566  6 memzones totaling size 4.142822 MiB
00:09:09.566    size:    1.000366 MiB name: RG_ring_0_115806
00:09:09.566    size:    1.000366 MiB name: RG_ring_1_115806
00:09:09.566    size:    1.000366 MiB name: RG_ring_4_115806
00:09:09.566    size:    1.000366 MiB name: RG_ring_5_115806
00:09:09.566    size:    0.125366 MiB name: RG_ring_2_115806
00:09:09.566    size:    0.015991 MiB name: RG_ring_3_115806
00:09:09.566  end memzones-------
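Note: the heap/mempool/memzone summary above is the two-step workflow traced in this suite: the env_dpdk_get_mem_stats RPC (its reply, {"filename": "/tmp/spdk_mem_dump.txt"}, appears further up) asks the running target to write a memory dump, and dpdk_mem_info.py then summarizes that file. The second invocation just below passes -m 0, which, judging from the output that follows, selects the detailed per-element listing for heap id 0:

    #!/usr/bin/env bash
    # Dump-and-summarize workflow as traced in test_dpdk_mem_info.sh.
    scripts/rpc.py env_dpdk_get_mem_stats   # target writes /tmp/spdk_mem_dump.txt
    scripts/dpdk_mem_info.py                # heap/mempool/memzone summary
    scripts/dpdk_mem_info.py -m 0           # detailed element list (inferred meaning of -m 0)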
00:09:09.566   16:53:02	-- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0
00:09:09.566  heap id: 0 total size: 814.000000 MiB number of busy elements: 222 number of free elements: 15
00:09:09.566    list of free elements. size: 12.486206 MiB
00:09:09.566      element at address: 0x200000400000 with size:    1.999512 MiB
00:09:09.566      element at address: 0x200018e00000 with size:    0.999878 MiB
00:09:09.566      element at address: 0x200019000000 with size:    0.999878 MiB
00:09:09.566      element at address: 0x200003e00000 with size:    0.996277 MiB
00:09:09.566      element at address: 0x200031c00000 with size:    0.994446 MiB
00:09:09.566      element at address: 0x200013800000 with size:    0.978699 MiB
00:09:09.566      element at address: 0x200007000000 with size:    0.959839 MiB
00:09:09.566      element at address: 0x200019200000 with size:    0.936584 MiB
00:09:09.566      element at address: 0x200000200000 with size:    0.837219 MiB
00:09:09.566      element at address: 0x20001aa00000 with size:    0.567871 MiB
00:09:09.566      element at address: 0x20000b200000 with size:    0.489807 MiB
00:09:09.566      element at address: 0x200000800000 with size:    0.486511 MiB
00:09:09.566      element at address: 0x200019400000 with size:    0.485657 MiB
00:09:09.567      element at address: 0x200027e00000 with size:    0.402527 MiB
00:09:09.567      element at address: 0x200003a00000 with size:    0.351501 MiB
00:09:09.567    list of standard malloc elements. size: 199.251221 MiB
00:09:09.567      element at address: 0x20000b3fff80 with size:  132.000122 MiB
00:09:09.567      element at address: 0x2000071fff80 with size:   64.000122 MiB
00:09:09.567      element at address: 0x200018efff80 with size:    1.000122 MiB
00:09:09.567      element at address: 0x2000190fff80 with size:    1.000122 MiB
00:09:09.567      element at address: 0x2000192fff80 with size:    1.000122 MiB
00:09:09.567      element at address: 0x2000003d9f00 with size:    0.140747 MiB
00:09:09.567      element at address: 0x2000192eff00 with size:    0.062622 MiB
00:09:09.567      element at address: 0x2000003fdf80 with size:    0.007935 MiB
00:09:09.567      element at address: 0x2000192efdc0 with size:    0.000305 MiB
00:09:09.567      element at address: 0x2000002d6540 with size:    0.000183 MiB
00:09:09.567      element at address: 0x2000002d6600 with size:    0.000183 MiB
00:09:09.567      element at address: 0x2000002d66c0 with size:    0.000183 MiB
00:09:09.567      element at address: 0x2000002d6780 with size:    0.000183 MiB
00:09:09.567      element at address: 0x2000002d6840 with size:    0.000183 MiB
00:09:09.567      element at address: 0x2000002d6900 with size:    0.000183 MiB
00:09:09.567      element at address: 0x2000002d69c0 with size:    0.000183 MiB
00:09:09.567      element at address: 0x2000002d6a80 with size:    0.000183 MiB
00:09:09.567      element at address: 0x2000002d6b40 with size:    0.000183 MiB
00:09:09.567      element at address: 0x2000002d6c00 with size:    0.000183 MiB
00:09:09.567      element at address: 0x2000002d6cc0 with size:    0.000183 MiB
00:09:09.567      element at address: 0x2000002d6d80 with size:    0.000183 MiB
00:09:09.567      element at address: 0x2000002d6e40 with size:    0.000183 MiB
00:09:09.567      element at address: 0x2000002d6f00 with size:    0.000183 MiB
00:09:09.567      element at address: 0x2000002d6fc0 with size:    0.000183 MiB
00:09:09.567      element at address: 0x2000002d71c0 with size:    0.000183 MiB
00:09:09.567      element at address: 0x2000002d7280 with size:    0.000183 MiB
00:09:09.567      element at address: 0x2000002d7340 with size:    0.000183 MiB
00:09:09.567      element at address: 0x2000002d7400 with size:    0.000183 MiB
00:09:09.567      element at address: 0x2000002d74c0 with size:    0.000183 MiB
00:09:09.567      element at address: 0x2000002d7580 with size:    0.000183 MiB
00:09:09.567      element at address: 0x2000002d7640 with size:    0.000183 MiB
00:09:09.567      element at address: 0x2000002d7700 with size:    0.000183 MiB
00:09:09.567      element at address: 0x2000002d77c0 with size:    0.000183 MiB
00:09:09.567      element at address: 0x2000002d7880 with size:    0.000183 MiB
00:09:09.567      element at address: 0x2000002d7940 with size:    0.000183 MiB
00:09:09.567      element at address: 0x2000002d7a00 with size:    0.000183 MiB
00:09:09.567      element at address: 0x2000002d7ac0 with size:    0.000183 MiB
00:09:09.567      element at address: 0x2000002d7b80 with size:    0.000183 MiB
00:09:09.567      element at address: 0x2000002d7c40 with size:    0.000183 MiB
00:09:09.567      element at address: 0x2000003d9e40 with size:    0.000183 MiB
00:09:09.567      element at address: 0x20000087c8c0 with size:    0.000183 MiB
00:09:09.567      element at address: 0x20000087c980 with size:    0.000183 MiB
00:09:09.567      element at address: 0x20000087ca40 with size:    0.000183 MiB
00:09:09.567      element at address: 0x20000087cb00 with size:    0.000183 MiB
00:09:09.567      element at address: 0x20000087cbc0 with size:    0.000183 MiB
00:09:09.567      element at address: 0x20000087cc80 with size:    0.000183 MiB
00:09:09.567      element at address: 0x20000087cd40 with size:    0.000183 MiB
00:09:09.567      element at address: 0x20000087ce00 with size:    0.000183 MiB
00:09:09.567      element at address: 0x20000087cec0 with size:    0.000183 MiB
00:09:09.567      element at address: 0x2000008fd180 with size:    0.000183 MiB
00:09:09.567      element at address: 0x200003a59fc0 with size:    0.000183 MiB
00:09:09.567      element at address: 0x200003a5a080 with size:    0.000183 MiB
00:09:09.567      element at address: 0x200003a5a140 with size:    0.000183 MiB
00:09:09.567      element at address: 0x200003a5a200 with size:    0.000183 MiB
00:09:09.567      element at address: 0x200003a5a2c0 with size:    0.000183 MiB
00:09:09.567      element at address: 0x200003a5a380 with size:    0.000183 MiB
00:09:09.567      element at address: 0x200003a5a440 with size:    0.000183 MiB
00:09:09.567      element at address: 0x200003a5a500 with size:    0.000183 MiB
00:09:09.567      element at address: 0x200003a5a5c0 with size:    0.000183 MiB
00:09:09.567      element at address: 0x200003a5a680 with size:    0.000183 MiB
00:09:09.567      element at address: 0x200003a5a740 with size:    0.000183 MiB
00:09:09.567      element at address: 0x200003a5a800 with size:    0.000183 MiB
00:09:09.567      element at address: 0x200003a5a8c0 with size:    0.000183 MiB
00:09:09.567      element at address: 0x200003a5a980 with size:    0.000183 MiB
00:09:09.567      element at address: 0x200003a5aa40 with size:    0.000183 MiB
00:09:09.567      element at address: 0x200003a5ab00 with size:    0.000183 MiB
00:09:09.567      element at address: 0x200003a5abc0 with size:    0.000183 MiB
00:09:09.567      element at address: 0x200003a5ac80 with size:    0.000183 MiB
00:09:09.567      element at address: 0x200003a5ad40 with size:    0.000183 MiB
00:09:09.567      element at address: 0x200003a5ae00 with size:    0.000183 MiB
00:09:09.567      element at address: 0x200003a5aec0 with size:    0.000183 MiB
00:09:09.567      element at address: 0x200003a5af80 with size:    0.000183 MiB
00:09:09.567      element at address: 0x200003a5b040 with size:    0.000183 MiB
00:09:09.567      element at address: 0x200003adb300 with size:    0.000183 MiB
00:09:09.567      element at address: 0x200003adb500 with size:    0.000183 MiB
00:09:09.567      element at address: 0x200003adf7c0 with size:    0.000183 MiB
00:09:09.567      element at address: 0x200003affa80 with size:    0.000183 MiB
00:09:09.567      element at address: 0x200003affb40 with size:    0.000183 MiB
00:09:09.567      element at address: 0x200003eff0c0 with size:    0.000183 MiB
00:09:09.567      element at address: 0x2000070fdd80 with size:    0.000183 MiB
00:09:09.567      element at address: 0x20000b27d640 with size:    0.000183 MiB
00:09:09.567      element at address: 0x20000b27d700 with size:    0.000183 MiB
00:09:09.567      element at address: 0x20000b27d7c0 with size:    0.000183 MiB
00:09:09.567      element at address: 0x20000b27d880 with size:    0.000183 MiB
00:09:09.567      element at address: 0x20000b27d940 with size:    0.000183 MiB
00:09:09.567      element at address: 0x20000b27da00 with size:    0.000183 MiB
00:09:09.567      element at address: 0x20000b27dac0 with size:    0.000183 MiB
00:09:09.567      element at address: 0x20000b2fdd80 with size:    0.000183 MiB
00:09:09.567      element at address: 0x2000138fa8c0 with size:    0.000183 MiB
00:09:09.567      element at address: 0x2000192efc40 with size:    0.000183 MiB
00:09:09.567      element at address: 0x2000192efd00 with size:    0.000183 MiB
00:09:09.567      element at address: 0x2000194bc740 with size:    0.000183 MiB
00:09:09.567      element at address: 0x20001aa91600 with size:    0.000183 MiB
00:09:09.567      element at address: 0x20001aa916c0 with size:    0.000183 MiB
00:09:09.567      element at address: 0x20001aa91780 with size:    0.000183 MiB
00:09:09.567      element at address: 0x20001aa91840 with size:    0.000183 MiB
00:09:09.567      element at address: 0x20001aa91900 with size:    0.000183 MiB
00:09:09.567      element at address: 0x20001aa919c0 with size:    0.000183 MiB
00:09:09.567      element at address: 0x20001aa91a80 with size:    0.000183 MiB
00:09:09.567      element at address: 0x20001aa91b40 with size:    0.000183 MiB
00:09:09.567      element at address: 0x20001aa91c00 with size:    0.000183 MiB
00:09:09.567      element at address: 0x20001aa91cc0 with size:    0.000183 MiB
00:09:09.567      element at address: 0x20001aa91d80 with size:    0.000183 MiB
00:09:09.567      element at address: 0x20001aa91e40 with size:    0.000183 MiB
00:09:09.567      element at address: 0x20001aa91f00 with size:    0.000183 MiB
00:09:09.567      element at address: 0x20001aa91fc0 with size:    0.000183 MiB
00:09:09.567      element at address: 0x20001aa92080 with size:    0.000183 MiB
00:09:09.567      element at address: 0x20001aa92140 with size:    0.000183 MiB
00:09:09.567      element at address: 0x20001aa92200 with size:    0.000183 MiB
00:09:09.567      element at address: 0x20001aa922c0 with size:    0.000183 MiB
00:09:09.567      element at address: 0x20001aa92380 with size:    0.000183 MiB
00:09:09.567      element at address: 0x20001aa92440 with size:    0.000183 MiB
00:09:09.567      element at address: 0x20001aa92500 with size:    0.000183 MiB
00:09:09.567      element at address: 0x20001aa925c0 with size:    0.000183 MiB
00:09:09.567      element at address: 0x20001aa92680 with size:    0.000183 MiB
00:09:09.567      element at address: 0x20001aa92740 with size:    0.000183 MiB
00:09:09.567      element at address: 0x20001aa92800 with size:    0.000183 MiB
00:09:09.567      element at address: 0x20001aa928c0 with size:    0.000183 MiB
00:09:09.567      element at address: 0x20001aa92980 with size:    0.000183 MiB
00:09:09.567      element at address: 0x20001aa92a40 with size:    0.000183 MiB
00:09:09.567      element at address: 0x20001aa92b00 with size:    0.000183 MiB
00:09:09.567      element at address: 0x20001aa92bc0 with size:    0.000183 MiB
00:09:09.567      element at address: 0x20001aa92c80 with size:    0.000183 MiB
00:09:09.567      element at address: 0x20001aa92d40 with size:    0.000183 MiB
00:09:09.567      element at address: 0x20001aa92e00 with size:    0.000183 MiB
00:09:09.567      element at address: 0x20001aa92ec0 with size:    0.000183 MiB
00:09:09.567      element at address: 0x20001aa92f80 with size:    0.000183 MiB
00:09:09.567      element at address: 0x20001aa93040 with size:    0.000183 MiB
00:09:09.567      element at address: 0x20001aa93100 with size:    0.000183 MiB
00:09:09.567      element at address: 0x20001aa931c0 with size:    0.000183 MiB
00:09:09.567      element at address: 0x20001aa93280 with size:    0.000183 MiB
00:09:09.567      element at address: 0x20001aa93340 with size:    0.000183 MiB
00:09:09.567      element at address: 0x20001aa93400 with size:    0.000183 MiB
00:09:09.567      element at address: 0x20001aa934c0 with size:    0.000183 MiB
00:09:09.567      element at address: 0x20001aa93580 with size:    0.000183 MiB
00:09:09.567      element at address: 0x20001aa93640 with size:    0.000183 MiB
00:09:09.567      element at address: 0x20001aa93700 with size:    0.000183 MiB
00:09:09.567      element at address: 0x20001aa937c0 with size:    0.000183 MiB
00:09:09.567      element at address: 0x20001aa93880 with size:    0.000183 MiB
00:09:09.567      element at address: 0x20001aa93940 with size:    0.000183 MiB
00:09:09.567      element at address: 0x20001aa93a00 with size:    0.000183 MiB
00:09:09.567      element at address: 0x20001aa93ac0 with size:    0.000183 MiB
00:09:09.567      element at address: 0x20001aa93b80 with size:    0.000183 MiB
00:09:09.567      element at address: 0x20001aa93c40 with size:    0.000183 MiB
00:09:09.567      element at address: 0x20001aa93d00 with size:    0.000183 MiB
00:09:09.567      element at address: 0x20001aa93dc0 with size:    0.000183 MiB
00:09:09.567      element at address: 0x20001aa93e80 with size:    0.000183 MiB
00:09:09.567      element at address: 0x20001aa93f40 with size:    0.000183 MiB
00:09:09.567      element at address: 0x20001aa94000 with size:    0.000183 MiB
00:09:09.567      element at address: 0x20001aa940c0 with size:    0.000183 MiB
00:09:09.568      element at address: 0x20001aa94180 with size:    0.000183 MiB
00:09:09.568      element at address: 0x20001aa94240 with size:    0.000183 MiB
00:09:09.568      element at address: 0x20001aa94300 with size:    0.000183 MiB
00:09:09.568      element at address: 0x20001aa943c0 with size:    0.000183 MiB
00:09:09.568      element at address: 0x20001aa94480 with size:    0.000183 MiB
00:09:09.568      element at address: 0x20001aa94540 with size:    0.000183 MiB
00:09:09.568      element at address: 0x20001aa94600 with size:    0.000183 MiB
00:09:09.568      element at address: 0x20001aa946c0 with size:    0.000183 MiB
00:09:09.568      element at address: 0x20001aa94780 with size:    0.000183 MiB
00:09:09.568      element at address: 0x20001aa94840 with size:    0.000183 MiB
00:09:09.568      element at address: 0x20001aa94900 with size:    0.000183 MiB
00:09:09.568      element at address: 0x20001aa949c0 with size:    0.000183 MiB
00:09:09.568      element at address: 0x20001aa94a80 with size:    0.000183 MiB
00:09:09.568      element at address: 0x20001aa94b40 with size:    0.000183 MiB
00:09:09.568      element at address: 0x20001aa94c00 with size:    0.000183 MiB
00:09:09.568      element at address: 0x20001aa94cc0 with size:    0.000183 MiB
00:09:09.568      element at address: 0x20001aa94d80 with size:    0.000183 MiB
00:09:09.568      element at address: 0x20001aa94e40 with size:    0.000183 MiB
00:09:09.568      element at address: 0x20001aa94f00 with size:    0.000183 MiB
00:09:09.568      element at address: 0x20001aa94fc0 with size:    0.000183 MiB
00:09:09.568      element at address: 0x20001aa95080 with size:    0.000183 MiB
00:09:09.568      element at address: 0x20001aa95140 with size:    0.000183 MiB
00:09:09.568      element at address: 0x20001aa95200 with size:    0.000183 MiB
00:09:09.568      element at address: 0x20001aa952c0 with size:    0.000183 MiB
00:09:09.568      element at address: 0x20001aa95380 with size:    0.000183 MiB
00:09:09.568      element at address: 0x20001aa95440 with size:    0.000183 MiB
00:09:09.568      element at address: 0x200027e670c0 with size:    0.000183 MiB
00:09:09.568      element at address: 0x200027e67180 with size:    0.000183 MiB
00:09:09.568      element at address: 0x200027e6dd80 with size:    0.000183 MiB
00:09:09.568      element at address: 0x200027e6df80 with size:    0.000183 MiB
00:09:09.568      element at address: 0x200027e6e040 with size:    0.000183 MiB
00:09:09.568      element at address: 0x200027e6e100 with size:    0.000183 MiB
00:09:09.568      element at address: 0x200027e6e1c0 with size:    0.000183 MiB
00:09:09.568      element at address: 0x200027e6e280 with size:    0.000183 MiB
00:09:09.568      element at address: 0x200027e6e340 with size:    0.000183 MiB
00:09:09.568      element at address: 0x200027e6e400 with size:    0.000183 MiB
00:09:09.568      element at address: 0x200027e6e4c0 with size:    0.000183 MiB
00:09:09.568      element at address: 0x200027e6e580 with size:    0.000183 MiB
00:09:09.568      element at address: 0x200027e6e640 with size:    0.000183 MiB
00:09:09.568      element at address: 0x200027e6e700 with size:    0.000183 MiB
00:09:09.568      element at address: 0x200027e6e7c0 with size:    0.000183 MiB
00:09:09.568      element at address: 0x200027e6e880 with size:    0.000183 MiB
00:09:09.568      element at address: 0x200027e6e940 with size:    0.000183 MiB
00:09:09.568      element at address: 0x200027e6ea00 with size:    0.000183 MiB
00:09:09.568      element at address: 0x200027e6eac0 with size:    0.000183 MiB
00:09:09.568      element at address: 0x200027e6eb80 with size:    0.000183 MiB
00:09:09.568      element at address: 0x200027e6ec40 with size:    0.000183 MiB
00:09:09.568      element at address: 0x200027e6ed00 with size:    0.000183 MiB
00:09:09.568      element at address: 0x200027e6edc0 with size:    0.000183 MiB
00:09:09.568      element at address: 0x200027e6ee80 with size:    0.000183 MiB
00:09:09.568      element at address: 0x200027e6ef40 with size:    0.000183 MiB
00:09:09.568      element at address: 0x200027e6f000 with size:    0.000183 MiB
00:09:09.568      element at address: 0x200027e6f0c0 with size:    0.000183 MiB
00:09:09.568      element at address: 0x200027e6f180 with size:    0.000183 MiB
00:09:09.568      element at address: 0x200027e6f240 with size:    0.000183 MiB
00:09:09.568      element at address: 0x200027e6f300 with size:    0.000183 MiB
00:09:09.568      element at address: 0x200027e6f3c0 with size:    0.000183 MiB
00:09:09.568      element at address: 0x200027e6f480 with size:    0.000183 MiB
00:09:09.568      element at address: 0x200027e6f540 with size:    0.000183 MiB
00:09:09.568      element at address: 0x200027e6f600 with size:    0.000183 MiB
00:09:09.568      element at address: 0x200027e6f6c0 with size:    0.000183 MiB
00:09:09.568      element at address: 0x200027e6f780 with size:    0.000183 MiB
00:09:09.568      element at address: 0x200027e6f840 with size:    0.000183 MiB
00:09:09.568      element at address: 0x200027e6f900 with size:    0.000183 MiB
00:09:09.568      element at address: 0x200027e6f9c0 with size:    0.000183 MiB
00:09:09.568      element at address: 0x200027e6fa80 with size:    0.000183 MiB
00:09:09.568      element at address: 0x200027e6fb40 with size:    0.000183 MiB
00:09:09.568      element at address: 0x200027e6fc00 with size:    0.000183 MiB
00:09:09.568      element at address: 0x200027e6fcc0 with size:    0.000183 MiB
00:09:09.568      element at address: 0x200027e6fd80 with size:    0.000183 MiB
00:09:09.568      element at address: 0x200027e6fe40 with size:    0.000183 MiB
00:09:09.568      element at address: 0x200027e6ff00 with size:    0.000183 MiB
00:09:09.568    list of memzone associated elements. size: 602.262573 MiB
00:09:09.568      element at address: 0x20001aa95500 with size:  211.416748 MiB
00:09:09.568        associated memzone info: size:  211.416626 MiB name: MP_PDU_immediate_data_Pool_0
00:09:09.568      element at address: 0x200027e6ffc0 with size:  157.562561 MiB
00:09:09.568        associated memzone info: size:  157.562439 MiB name: MP_PDU_data_out_Pool_0
00:09:09.568      element at address: 0x2000139fab80 with size:   84.020630 MiB
00:09:09.568        associated memzone info: size:   84.020508 MiB name: MP_bdev_io_115806_0
00:09:09.568      element at address: 0x2000009ff380 with size:   48.003052 MiB
00:09:09.568        associated memzone info: size:   48.002930 MiB name: MP_evtpool_115806_0
00:09:09.568      element at address: 0x200003fff380 with size:   48.003052 MiB
00:09:09.568        associated memzone info: size:   48.002930 MiB name: MP_msgpool_115806_0
00:09:09.568      element at address: 0x2000195be940 with size:   20.255554 MiB
00:09:09.568        associated memzone info: size:   20.255432 MiB name: MP_PDU_Pool_0
00:09:09.568      element at address: 0x200031dfeb40 with size:   18.005066 MiB
00:09:09.568        associated memzone info: size:   18.004944 MiB name: MP_SCSI_TASK_Pool_0
00:09:09.568      element at address: 0x2000005ffe00 with size:    2.000488 MiB
00:09:09.568        associated memzone info: size:    2.000366 MiB name: RG_MP_evtpool_115806
00:09:09.568      element at address: 0x200003bffe00 with size:    2.000488 MiB
00:09:09.568        associated memzone info: size:    2.000366 MiB name: RG_MP_msgpool_115806
00:09:09.568      element at address: 0x2000002d7d00 with size:    1.008118 MiB
00:09:09.568        associated memzone info: size:    1.007996 MiB name: MP_evtpool_115806
00:09:09.568      element at address: 0x20000b2fde40 with size:    1.008118 MiB
00:09:09.568        associated memzone info: size:    1.007996 MiB name: MP_PDU_Pool
00:09:09.568      element at address: 0x2000194bc800 with size:    1.008118 MiB
00:09:09.568        associated memzone info: size:    1.007996 MiB name: MP_PDU_immediate_data_Pool
00:09:09.568      element at address: 0x2000070fde40 with size:    1.008118 MiB
00:09:09.568        associated memzone info: size:    1.007996 MiB name: MP_PDU_data_out_Pool
00:09:09.568      element at address: 0x2000008fd240 with size:    1.008118 MiB
00:09:09.568        associated memzone info: size:    1.007996 MiB name: MP_SCSI_TASK_Pool
00:09:09.568      element at address: 0x200003eff180 with size:    1.000488 MiB
00:09:09.568        associated memzone info: size:    1.000366 MiB name: RG_ring_0_115806
00:09:09.568      element at address: 0x200003affc00 with size:    1.000488 MiB
00:09:09.568        associated memzone info: size:    1.000366 MiB name: RG_ring_1_115806
00:09:09.568      element at address: 0x2000138fa980 with size:    1.000488 MiB
00:09:09.568        associated memzone info: size:    1.000366 MiB name: RG_ring_4_115806
00:09:09.568      element at address: 0x200031cfe940 with size:    1.000488 MiB
00:09:09.568        associated memzone info: size:    1.000366 MiB name: RG_ring_5_115806
00:09:09.568      element at address: 0x200003a5b100 with size:    0.500488 MiB
00:09:09.568        associated memzone info: size:    0.500366 MiB name: RG_MP_bdev_io_115806
00:09:09.568      element at address: 0x20000b27db80 with size:    0.500488 MiB
00:09:09.568        associated memzone info: size:    0.500366 MiB name: RG_MP_PDU_Pool
00:09:09.568      element at address: 0x20000087cf80 with size:    0.500488 MiB
00:09:09.568        associated memzone info: size:    0.500366 MiB name: RG_MP_SCSI_TASK_Pool
00:09:09.568      element at address: 0x20001947c540 with size:    0.250488 MiB
00:09:09.568        associated memzone info: size:    0.250366 MiB name: RG_MP_PDU_immediate_data_Pool
00:09:09.568      element at address: 0x200003adf880 with size:    0.125488 MiB
00:09:09.568        associated memzone info: size:    0.125366 MiB name: RG_ring_2_115806
00:09:09.568      element at address: 0x2000070f5b80 with size:    0.031738 MiB
00:09:09.568        associated memzone info: size:    0.031616 MiB name: RG_MP_PDU_data_out_Pool
00:09:09.568      element at address: 0x200027e67240 with size:    0.023743 MiB
00:09:09.568        associated memzone info: size:    0.023621 MiB name: MP_Session_Pool_0
00:09:09.568      element at address: 0x200003adb5c0 with size:    0.016113 MiB
00:09:09.568        associated memzone info: size:    0.015991 MiB name: RG_ring_3_115806
00:09:09.568      element at address: 0x200027e6d380 with size:    0.002441 MiB
00:09:09.568        associated memzone info: size:    0.002319 MiB name: RG_MP_Session_Pool
00:09:09.568      element at address: 0x2000002d7080 with size:    0.000305 MiB
00:09:09.568        associated memzone info: size:    0.000183 MiB name: MP_msgpool_115806
00:09:09.568      element at address: 0x200003adb3c0 with size:    0.000305 MiB
00:09:09.568        associated memzone info: size:    0.000183 MiB name: MP_bdev_io_115806
00:09:09.568      element at address: 0x200027e6de40 with size:    0.000305 MiB
00:09:09.568        associated memzone info: size:    0.000183 MiB name: MP_Session_Pool
00:09:09.568   16:53:02	-- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT
00:09:09.568   16:53:02	-- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 115806
00:09:09.568   16:53:02	-- common/autotest_common.sh@936 -- # '[' -z 115806 ']'
00:09:09.568   16:53:02	-- common/autotest_common.sh@940 -- # kill -0 115806
00:09:09.568    16:53:02	-- common/autotest_common.sh@941 -- # uname
00:09:09.568   16:53:02	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:09:09.568    16:53:02	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 115806
00:09:09.568   16:53:02	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:09:09.568   16:53:02	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:09:09.568   16:53:02	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 115806'
00:09:09.568  killing process with pid 115806
00:09:09.568   16:53:02	-- common/autotest_common.sh@955 -- # kill 115806
00:09:09.569   16:53:02	-- common/autotest_common.sh@960 -- # wait 115806
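The trace above is autotest_common.sh's killprocess helper: it rejects an empty pid, probes the process with kill -0, checks the process name via ps so it never signals an unrelated sudo process, then sends SIGTERM and waits for the pid to be reaped. A minimal sketch of that pattern, simplified from the trace (the in-tree helper also branches on uname and handles sudo-launched processes differently):

    # Sketch of the killprocess pattern traced above (simplified).
    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1             # refuse an empty pid
        kill -0 "$pid" 2> /dev/null || return 0   # already gone
        # Only proceed when the comm name looks like one of ours, never sudo.
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")
        [ "$process_name" = sudo ] && return 1
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"   # works here because the test shell launched the app itself
    }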
00:09:09.826  
00:09:09.826  real	0m1.762s
00:09:09.826  user	0m1.797s
00:09:09.826  sys	0m0.519s
00:09:09.826   16:53:02	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:09:09.826  ************************************
00:09:09.826  END TEST dpdk_mem_utility
00:09:09.826  ************************************
00:09:09.826   16:53:02	-- common/autotest_common.sh@10 -- # set +x
00:09:09.826   16:53:02	-- spdk/autotest.sh@174 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh
00:09:09.826   16:53:02	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:09:09.826   16:53:02	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:09:09.826   16:53:02	-- common/autotest_common.sh@10 -- # set +x
00:09:09.826  ************************************
00:09:09.826  START TEST event
00:09:09.826  ************************************
00:09:09.826   16:53:02	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh
00:09:10.084  * Looking for test storage...
00:09:10.084  * Found test storage at /home/vagrant/spdk_repo/spdk/test/event
00:09:10.084    16:53:02	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:09:10.084     16:53:02	-- common/autotest_common.sh@1690 -- # lcov --version
00:09:10.084     16:53:02	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:09:10.084    16:53:02	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:09:10.084    16:53:02	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:09:10.084    16:53:02	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:09:10.084    16:53:02	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:09:10.084    16:53:02	-- scripts/common.sh@335 -- # IFS=.-:
00:09:10.084    16:53:02	-- scripts/common.sh@335 -- # read -ra ver1
00:09:10.084    16:53:02	-- scripts/common.sh@336 -- # IFS=.-:
00:09:10.084    16:53:02	-- scripts/common.sh@336 -- # read -ra ver2
00:09:10.084    16:53:02	-- scripts/common.sh@337 -- # local 'op=<'
00:09:10.084    16:53:02	-- scripts/common.sh@339 -- # ver1_l=2
00:09:10.084    16:53:02	-- scripts/common.sh@340 -- # ver2_l=1
00:09:10.084    16:53:02	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:09:10.084    16:53:02	-- scripts/common.sh@343 -- # case "$op" in
00:09:10.084    16:53:02	-- scripts/common.sh@344 -- # : 1
00:09:10.084    16:53:02	-- scripts/common.sh@363 -- # (( v = 0 ))
00:09:10.084    16:53:02	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:09:10.084     16:53:02	-- scripts/common.sh@364 -- # decimal 1
00:09:10.084     16:53:02	-- scripts/common.sh@352 -- # local d=1
00:09:10.084     16:53:02	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:09:10.084     16:53:02	-- scripts/common.sh@354 -- # echo 1
00:09:10.084    16:53:02	-- scripts/common.sh@364 -- # ver1[v]=1
00:09:10.084     16:53:02	-- scripts/common.sh@365 -- # decimal 2
00:09:10.084     16:53:02	-- scripts/common.sh@352 -- # local d=2
00:09:10.084     16:53:02	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:09:10.084     16:53:02	-- scripts/common.sh@354 -- # echo 2
00:09:10.084    16:53:02	-- scripts/common.sh@365 -- # ver2[v]=2
00:09:10.084    16:53:02	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:09:10.084    16:53:02	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:09:10.084    16:53:02	-- scripts/common.sh@367 -- # return 0
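The xtrace block above walks through scripts/common.sh's version comparison: each version string is split on '.', '-' and ':' into an array, the arrays are compared field by field numerically, and "lt 1.15 2" succeeds because 1 < 2 in the first field. A condensed, standalone sketch of the same algorithm (numeric fields only; the in-tree cmp_versions also validates each field and supports other operators):

    # Condensed sketch of the lt/cmp_versions logic traced above.
    lt() {
        local ver1 ver2 v
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        for (( v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++ )); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # strictly older
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # strictly newer
        done
        return 1   # equal, so not strictly less-than
    }

    lt 1.15 2 && echo "lcov 1.15 predates 2"   # matches the trace's outcome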
00:09:10.084    16:53:02	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:09:10.084    16:53:02	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:09:10.084  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:10.084  		--rc genhtml_branch_coverage=1
00:09:10.084  		--rc genhtml_function_coverage=1
00:09:10.084  		--rc genhtml_legend=1
00:09:10.084  		--rc geninfo_all_blocks=1
00:09:10.084  		--rc geninfo_unexecuted_blocks=1
00:09:10.084  		
00:09:10.084  		'
00:09:10.084    16:53:02	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:09:10.084  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:10.084  		--rc genhtml_branch_coverage=1
00:09:10.084  		--rc genhtml_function_coverage=1
00:09:10.084  		--rc genhtml_legend=1
00:09:10.084  		--rc geninfo_all_blocks=1
00:09:10.084  		--rc geninfo_unexecuted_blocks=1
00:09:10.084  		
00:09:10.084  		'
00:09:10.084    16:53:02	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:09:10.084  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:10.084  		--rc genhtml_branch_coverage=1
00:09:10.084  		--rc genhtml_function_coverage=1
00:09:10.084  		--rc genhtml_legend=1
00:09:10.084  		--rc geninfo_all_blocks=1
00:09:10.084  		--rc geninfo_unexecuted_blocks=1
00:09:10.084  		
00:09:10.084  		'
00:09:10.084    16:53:02	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:09:10.084  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:10.084  		--rc genhtml_branch_coverage=1
00:09:10.084  		--rc genhtml_function_coverage=1
00:09:10.084  		--rc genhtml_legend=1
00:09:10.084  		--rc geninfo_all_blocks=1
00:09:10.084  		--rc geninfo_unexecuted_blocks=1
00:09:10.084  		
00:09:10.084  		'
00:09:10.084   16:53:02	-- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh
00:09:10.084    16:53:02	-- bdev/nbd_common.sh@6 -- # set -e
00:09:10.084   16:53:02	-- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:09:10.084   16:53:02	-- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']'
00:09:10.084   16:53:02	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:09:10.084   16:53:02	-- common/autotest_common.sh@10 -- # set +x
00:09:10.084  ************************************
00:09:10.084  START TEST event_perf
00:09:10.084  ************************************
00:09:10.084   16:53:02	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:09:10.084  Running I/O for 1 second...[2024-11-19 16:53:02.845597] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:09:10.084  [2024-11-19 16:53:02.846240] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115902 ]
00:09:10.341  [2024-11-19 16:53:03.010062] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4
00:09:10.341  [2024-11-19 16:53:03.065757] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:09:10.341  [2024-11-19 16:53:03.065963] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:09:10.341  [2024-11-19 16:53:03.065905] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:09:10.341  [2024-11-19 16:53:03.065971] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:09:11.714  Running I/O for 1 second...
00:09:11.714  lcore  0:    96794
00:09:11.714  lcore  1:    96797
00:09:11.714  lcore  2:    96800
00:09:11.714  lcore  3:    96792
00:09:11.714  done.
00:09:11.714  
00:09:11.714  real	0m1.384s
00:09:11.714  user	0m4.148s
00:09:11.714  sys	0m0.120s
00:09:11.714   16:53:04	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:09:11.714  ************************************
00:09:11.714  END TEST event_perf
00:09:11.714  ************************************
00:09:11.714   16:53:04	-- common/autotest_common.sh@10 -- # set +x
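The START TEST / END TEST banners and the real/user/sys timings throughout this log come from the run_test wrapper, which names a test, times its body, and propagates the exit code. A simplified sketch of that wrapper (the in-tree version also validates its arguments and records timings for the report):

    # Sketch of the run_test wrapper behind the banners above (simplified).
    run_test() {
        local test_name=$1; shift
        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        time "$@"          # runs the test body and prints real/user/sys
        local rc=$?
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
        return $rc
    }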
00:09:11.714   16:53:04	-- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1
00:09:11.714   16:53:04	-- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']'
00:09:11.714   16:53:04	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:09:11.714   16:53:04	-- common/autotest_common.sh@10 -- # set +x
00:09:11.714  ************************************
00:09:11.714  START TEST event_reactor
00:09:11.714  ************************************
00:09:11.714   16:53:04	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1
00:09:11.714  [2024-11-19 16:53:04.307319] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:09:11.714  [2024-11-19 16:53:04.308284] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115950 ]
00:09:11.714  [2024-11-19 16:53:04.470175] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:11.714  [2024-11-19 16:53:04.565748] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:09:13.087  test_start
00:09:13.087  oneshot
00:09:13.087  tick 100
00:09:13.087  tick 100
00:09:13.087  tick 250
00:09:13.087  tick 100
00:09:13.087  tick 100
00:09:13.087  tick 100
00:09:13.087  tick 250
00:09:13.087  tick 500
00:09:13.087  tick 100
00:09:13.087  tick 100
00:09:13.087  tick 250
00:09:13.087  tick 100
00:09:13.087  tick 100
00:09:13.087  test_end
00:09:13.087  
00:09:13.087  real	0m1.482s
00:09:13.087  user	0m1.242s
00:09:13.087  sys	0m0.136s
00:09:13.087   16:53:05	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:09:13.087   16:53:05	-- common/autotest_common.sh@10 -- # set +x
00:09:13.087  ************************************
00:09:13.087  END TEST event_reactor
00:09:13.087  ************************************
00:09:13.087   16:53:05	-- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1
00:09:13.087   16:53:05	-- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']'
00:09:13.087   16:53:05	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:09:13.087   16:53:05	-- common/autotest_common.sh@10 -- # set +x
00:09:13.087  ************************************
00:09:13.087  START TEST event_reactor_perf
00:09:13.087  ************************************
00:09:13.087   16:53:05	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1
00:09:13.087  [2024-11-19 16:53:05.848858] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:09:13.087  [2024-11-19 16:53:05.849153] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115995 ]
00:09:13.345  [2024-11-19 16:53:06.004788] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:13.345  [2024-11-19 16:53:06.077447] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:09:14.722  test_start
00:09:14.722  test_end
00:09:14.722  Performance:   381814 events per second
00:09:14.722  
00:09:14.722  real	0m1.384s
00:09:14.722  user	0m1.164s
00:09:14.722  sys	0m0.119s
00:09:14.722   16:53:07	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:09:14.722   16:53:07	-- common/autotest_common.sh@10 -- # set +x
00:09:14.722  ************************************
00:09:14.722  END TEST event_reactor_perf
00:09:14.722  ************************************
00:09:14.722    16:53:07	-- event/event.sh@49 -- # uname -s
00:09:14.722   16:53:07	-- event/event.sh@49 -- # '[' Linux = Linux ']'
00:09:14.722   16:53:07	-- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh
00:09:14.722   16:53:07	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:09:14.722   16:53:07	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:09:14.722   16:53:07	-- common/autotest_common.sh@10 -- # set +x
00:09:14.722  ************************************
00:09:14.722  START TEST event_scheduler
00:09:14.722  ************************************
00:09:14.722   16:53:07	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh
00:09:14.722  * Looking for test storage...
00:09:14.722  * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler
00:09:14.722    16:53:07	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:09:14.722     16:53:07	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:09:14.722     16:53:07	-- common/autotest_common.sh@1690 -- # lcov --version
00:09:14.722    16:53:07	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:09:14.722    16:53:07	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:09:14.722    16:53:07	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:09:14.722    16:53:07	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:09:14.722    16:53:07	-- scripts/common.sh@335 -- # IFS=.-:
00:09:14.722    16:53:07	-- scripts/common.sh@335 -- # read -ra ver1
00:09:14.722    16:53:07	-- scripts/common.sh@336 -- # IFS=.-:
00:09:14.722    16:53:07	-- scripts/common.sh@336 -- # read -ra ver2
00:09:14.722    16:53:07	-- scripts/common.sh@337 -- # local 'op=<'
00:09:14.722    16:53:07	-- scripts/common.sh@339 -- # ver1_l=2
00:09:14.722    16:53:07	-- scripts/common.sh@340 -- # ver2_l=1
00:09:14.722    16:53:07	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:09:14.722    16:53:07	-- scripts/common.sh@343 -- # case "$op" in
00:09:14.722    16:53:07	-- scripts/common.sh@344 -- # : 1
00:09:14.722    16:53:07	-- scripts/common.sh@363 -- # (( v = 0 ))
00:09:14.722    16:53:07	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:09:14.722     16:53:07	-- scripts/common.sh@364 -- # decimal 1
00:09:14.722     16:53:07	-- scripts/common.sh@352 -- # local d=1
00:09:14.722     16:53:07	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:09:14.722     16:53:07	-- scripts/common.sh@354 -- # echo 1
00:09:14.722    16:53:07	-- scripts/common.sh@364 -- # ver1[v]=1
00:09:14.722     16:53:07	-- scripts/common.sh@365 -- # decimal 2
00:09:14.722     16:53:07	-- scripts/common.sh@352 -- # local d=2
00:09:14.722     16:53:07	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:09:14.722     16:53:07	-- scripts/common.sh@354 -- # echo 2
00:09:14.722    16:53:07	-- scripts/common.sh@365 -- # ver2[v]=2
00:09:14.722    16:53:07	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:09:14.722    16:53:07	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:09:14.722    16:53:07	-- scripts/common.sh@367 -- # return 0
00:09:14.722    16:53:07	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:09:14.722    16:53:07	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:09:14.722  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:14.722  		--rc genhtml_branch_coverage=1
00:09:14.722  		--rc genhtml_function_coverage=1
00:09:14.722  		--rc genhtml_legend=1
00:09:14.722  		--rc geninfo_all_blocks=1
00:09:14.722  		--rc geninfo_unexecuted_blocks=1
00:09:14.722  		
00:09:14.722  		'
00:09:14.722    16:53:07	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:09:14.722  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:14.722  		--rc genhtml_branch_coverage=1
00:09:14.722  		--rc genhtml_function_coverage=1
00:09:14.722  		--rc genhtml_legend=1
00:09:14.722  		--rc geninfo_all_blocks=1
00:09:14.722  		--rc geninfo_unexecuted_blocks=1
00:09:14.722  		
00:09:14.722  		'
00:09:14.722    16:53:07	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:09:14.722  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:14.722  		--rc genhtml_branch_coverage=1
00:09:14.722  		--rc genhtml_function_coverage=1
00:09:14.722  		--rc genhtml_legend=1
00:09:14.722  		--rc geninfo_all_blocks=1
00:09:14.722  		--rc geninfo_unexecuted_blocks=1
00:09:14.722  		
00:09:14.722  		'
00:09:14.722    16:53:07	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:09:14.722  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:14.722  		--rc genhtml_branch_coverage=1
00:09:14.722  		--rc genhtml_function_coverage=1
00:09:14.722  		--rc genhtml_legend=1
00:09:14.722  		--rc geninfo_all_blocks=1
00:09:14.722  		--rc geninfo_unexecuted_blocks=1
00:09:14.722  		
00:09:14.722  		'
00:09:14.722   16:53:07	-- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd
00:09:14.722   16:53:07	-- scheduler/scheduler.sh@35 -- # scheduler_pid=116074
00:09:14.722   16:53:07	-- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT
00:09:14.722   16:53:07	-- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f
00:09:14.722   16:53:07	-- scheduler/scheduler.sh@37 -- # waitforlisten 116074
00:09:14.722   16:53:07	-- common/autotest_common.sh@829 -- # '[' -z 116074 ']'
00:09:14.722   16:53:07	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:14.722   16:53:07	-- common/autotest_common.sh@834 -- # local max_retries=100
00:09:14.722  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:09:14.722   16:53:07	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:09:14.722   16:53:07	-- common/autotest_common.sh@838 -- # xtrace_disable
00:09:14.722   16:53:07	-- common/autotest_common.sh@10 -- # set +x
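waitforlisten, invoked above, blocks the test until the freshly started app answers on its UNIX-domain RPC socket. A minimal sketch of the pattern, assuming polling via the rpc_get_methods RPC (the in-tree helper also honors a configurable retry budget, as the max_retries=100 in the trace suggests):

    # Sketch of the waitforlisten pattern traced above (simplified).
    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for (( i = 0; i < 100; i++ )); do
            kill -0 "$pid" || return 1   # the app died while starting
            if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_addr" \
                    rpc_get_methods &> /dev/null; then
                return 0                 # socket is up and answering RPCs
            fi
            sleep 0.5
        done
        return 1
    }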
00:09:14.722  [2024-11-19 16:53:07.501666] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:09:14.722  [2024-11-19 16:53:07.502275] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116074 ]
00:09:14.982  [2024-11-19 16:53:07.703663] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4
00:09:14.982  [2024-11-19 16:53:07.764738] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:09:14.982  [2024-11-19 16:53:07.764803] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:09:14.982  [2024-11-19 16:53:07.764925] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:09:14.982  [2024-11-19 16:53:07.764939] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:09:15.918   16:53:08	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:09:15.918   16:53:08	-- common/autotest_common.sh@862 -- # return 0
00:09:15.919   16:53:08	-- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic
00:09:15.919   16:53:08	-- common/autotest_common.sh@561 -- # xtrace_disable
00:09:15.919   16:53:08	-- common/autotest_common.sh@10 -- # set +x
00:09:15.919  POWER: Env isn't set yet!
00:09:15.919  POWER: Attempting to initialise ACPI cpufreq power management...
00:09:15.919  POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor
00:09:15.919  POWER: Cannot set governor of lcore 0 to userspace
00:09:15.919  POWER: Attempting to initialise PSTAT power management...
00:09:15.919  POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor
00:09:15.919  POWER: Cannot set governor of lcore 0 to performance
00:09:15.919  POWER: Attempting to initialise CPPC power management...
00:09:15.919  POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor
00:09:15.919  POWER: Cannot set governor of lcore 0 to userspace
00:09:15.919  POWER: Attempting to initialise VM power management...
00:09:15.919  GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory
00:09:15.919  POWER: Unable to set Power Management Environment for lcore 0
00:09:15.919  [2024-11-19 16:53:08.484454] dpdk_governor.c:  88:_init_core: *ERROR*: Failed to initialize on core0
00:09:15.919  [2024-11-19 16:53:08.484543] dpdk_governor.c: 118:_init: *ERROR*: Failed to initialize on core0
00:09:15.919  [2024-11-19 16:53:08.484653] scheduler_dynamic.c: 238:init: *NOTICE*: Unable to initialize dpdk governor
00:09:15.919  [2024-11-19 16:53:08.484845] scheduler_dynamic.c: 387:set_opts: *NOTICE*: Setting scheduler load limit to 20
00:09:15.919  [2024-11-19 16:53:08.484908] scheduler_dynamic.c: 389:set_opts: *NOTICE*: Setting scheduler core limit to 80
00:09:15.919  [2024-11-19 16:53:08.484940] scheduler_dynamic.c: 391:set_opts: *NOTICE*: Setting scheduler core busy to 95
00:09:15.919   16:53:08	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:15.919   16:53:08	-- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init
00:09:15.919   16:53:08	-- common/autotest_common.sh@561 -- # xtrace_disable
00:09:15.919   16:53:08	-- common/autotest_common.sh@10 -- # set +x
00:09:15.919  [2024-11-19 16:53:08.555881] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started.
00:09:15.919   16:53:08	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:15.919   16:53:08	-- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread
00:09:15.919   16:53:08	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:09:15.919   16:53:08	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:09:15.919   16:53:08	-- common/autotest_common.sh@10 -- # set +x
00:09:15.919  ************************************
00:09:15.919  START TEST scheduler_create_thread
00:09:15.919  ************************************
00:09:15.919   16:53:08	-- common/autotest_common.sh@1114 -- # scheduler_create_thread
00:09:15.919   16:53:08	-- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
00:09:15.919   16:53:08	-- common/autotest_common.sh@561 -- # xtrace_disable
00:09:15.919   16:53:08	-- common/autotest_common.sh@10 -- # set +x
00:09:15.919  2
00:09:15.919   16:53:08	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:15.919   16:53:08	-- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100
00:09:15.919   16:53:08	-- common/autotest_common.sh@561 -- # xtrace_disable
00:09:15.919   16:53:08	-- common/autotest_common.sh@10 -- # set +x
00:09:15.919  3
00:09:15.919   16:53:08	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:15.919   16:53:08	-- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100
00:09:15.919   16:53:08	-- common/autotest_common.sh@561 -- # xtrace_disable
00:09:15.919   16:53:08	-- common/autotest_common.sh@10 -- # set +x
00:09:15.919  4
00:09:15.919   16:53:08	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:15.919   16:53:08	-- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100
00:09:15.919   16:53:08	-- common/autotest_common.sh@561 -- # xtrace_disable
00:09:15.919   16:53:08	-- common/autotest_common.sh@10 -- # set +x
00:09:15.919  5
00:09:15.919   16:53:08	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:15.919   16:53:08	-- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0
00:09:15.919   16:53:08	-- common/autotest_common.sh@561 -- # xtrace_disable
00:09:15.919   16:53:08	-- common/autotest_common.sh@10 -- # set +x
00:09:15.919  6
00:09:15.919   16:53:08	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:15.919   16:53:08	-- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0
00:09:15.919   16:53:08	-- common/autotest_common.sh@561 -- # xtrace_disable
00:09:15.919   16:53:08	-- common/autotest_common.sh@10 -- # set +x
00:09:15.919  7
00:09:15.919   16:53:08	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:15.919   16:53:08	-- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0
00:09:15.919   16:53:08	-- common/autotest_common.sh@561 -- # xtrace_disable
00:09:15.919   16:53:08	-- common/autotest_common.sh@10 -- # set +x
00:09:15.919  8
00:09:15.919   16:53:08	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:15.919   16:53:08	-- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0
00:09:15.919   16:53:08	-- common/autotest_common.sh@561 -- # xtrace_disable
00:09:15.919   16:53:08	-- common/autotest_common.sh@10 -- # set +x
00:09:15.919  9
00:09:15.919   16:53:08	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:15.919   16:53:08	-- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30
00:09:15.919   16:53:08	-- common/autotest_common.sh@561 -- # xtrace_disable
00:09:15.919   16:53:08	-- common/autotest_common.sh@10 -- # set +x
00:09:15.919  10
00:09:15.919   16:53:08	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:15.919    16:53:08	-- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0
00:09:15.919    16:53:08	-- common/autotest_common.sh@561 -- # xtrace_disable
00:09:15.919    16:53:08	-- common/autotest_common.sh@10 -- # set +x
00:09:15.919    16:53:08	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:15.919   16:53:08	-- scheduler/scheduler.sh@22 -- # thread_id=11
00:09:15.919   16:53:08	-- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50
00:09:15.919   16:53:08	-- common/autotest_common.sh@561 -- # xtrace_disable
00:09:15.919   16:53:08	-- common/autotest_common.sh@10 -- # set +x
00:09:15.919   16:53:08	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:15.919    16:53:08	-- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100
00:09:15.919    16:53:08	-- common/autotest_common.sh@561 -- # xtrace_disable
00:09:15.919    16:53:08	-- common/autotest_common.sh@10 -- # set +x
00:09:17.310    16:53:10	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:17.310   16:53:10	-- scheduler/scheduler.sh@25 -- # thread_id=12
00:09:17.310   16:53:10	-- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12
00:09:17.310   16:53:10	-- common/autotest_common.sh@561 -- # xtrace_disable
00:09:17.310   16:53:10	-- common/autotest_common.sh@10 -- # set +x
00:09:18.690   16:53:11	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:18.690  
00:09:18.690  real	0m2.637s
00:09:18.690  user	0m0.016s
00:09:18.690  sys	0m0.009s
00:09:18.690   16:53:11	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:09:18.690  ************************************
00:09:18.690  END TEST scheduler_create_thread
00:09:18.690  ************************************
00:09:18.690   16:53:11	-- common/autotest_common.sh@10 -- # set +x
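The scheduler_create_thread test above drives the scheduler app entirely through its plugin RPCs: four active threads pinned to cores 0-3 (masks 0x1-0x8, activity 100), four idle pinned threads (activity 0), an unpinned one-third-active thread, a half_active thread whose id is captured and raised to 50% with scheduler_thread_set_active, and finally a throwaway thread that is created and deleted. A condensed sketch of that sequence (the real scheduler.sh spells out each call one by one, as traced; rpc_cmd is assumed to forward to scripts/rpc.py with the plugin loaded):

    # Condensed sketch of the thread-creation sequence traced above.
    rpc="rpc_cmd --plugin scheduler_plugin"
    for core in 0 1 2 3; do
        mask=$(printf '0x%x' $((1 << core)))
        $rpc scheduler_thread_create -n active_pinned -m "$mask" -a 100
        $rpc scheduler_thread_create -n idle_pinned   -m "$mask" -a 0
    done
    $rpc scheduler_thread_create -n one_third_active -a 30
    thread_id=$($rpc scheduler_thread_create -n half_active -a 0)
    $rpc scheduler_thread_set_active "$thread_id" 50
    deleted_id=$($rpc scheduler_thread_create -n deleted -a 100)
    $rpc scheduler_thread_delete "$deleted_id"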
00:09:18.690   16:53:11	-- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT
00:09:18.690   16:53:11	-- scheduler/scheduler.sh@46 -- # killprocess 116074
00:09:18.690   16:53:11	-- common/autotest_common.sh@936 -- # '[' -z 116074 ']'
00:09:18.690   16:53:11	-- common/autotest_common.sh@940 -- # kill -0 116074
00:09:18.690    16:53:11	-- common/autotest_common.sh@941 -- # uname
00:09:18.690   16:53:11	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:09:18.690    16:53:11	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 116074
00:09:18.690   16:53:11	-- common/autotest_common.sh@942 -- # process_name=reactor_2
00:09:18.690  killing process with pid 116074
00:09:18.690   16:53:11	-- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']'
00:09:18.690   16:53:11	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 116074'
00:09:18.690   16:53:11	-- common/autotest_common.sh@955 -- # kill 116074
00:09:18.690   16:53:11	-- common/autotest_common.sh@960 -- # wait 116074
00:09:18.949  [2024-11-19 16:53:11.690350] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped.
00:09:19.517  ************************************
00:09:19.517  END TEST event_scheduler
00:09:19.517  ************************************
00:09:19.517  
00:09:19.517  real	0m4.848s
00:09:19.517  user	0m8.865s
00:09:19.517  sys	0m0.470s
00:09:19.517   16:53:12	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:09:19.517   16:53:12	-- common/autotest_common.sh@10 -- # set +x
00:09:19.517   16:53:12	-- event/event.sh@51 -- # modprobe -n nbd
00:09:19.517   16:53:12	-- event/event.sh@52 -- # run_test app_repeat app_repeat_test
00:09:19.517   16:53:12	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:09:19.517   16:53:12	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:09:19.517   16:53:12	-- common/autotest_common.sh@10 -- # set +x
00:09:19.517  ************************************
00:09:19.517  START TEST app_repeat
00:09:19.517  ************************************
00:09:19.517   16:53:12	-- common/autotest_common.sh@1114 -- # app_repeat_test
00:09:19.517   16:53:12	-- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:09:19.517   16:53:12	-- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:09:19.517   16:53:12	-- event/event.sh@13 -- # local nbd_list
00:09:19.517   16:53:12	-- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1')
00:09:19.517   16:53:12	-- event/event.sh@14 -- # local bdev_list
00:09:19.517   16:53:12	-- event/event.sh@15 -- # local repeat_times=4
00:09:19.517   16:53:12	-- event/event.sh@17 -- # modprobe nbd
00:09:19.517   16:53:12	-- event/event.sh@19 -- # repeat_pid=116190
00:09:19.517  Process app_repeat pid: 116190
00:09:19.517   16:53:12	-- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT
00:09:19.517   16:53:12	-- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4
00:09:19.517   16:53:12	-- event/event.sh@21 -- # echo 'Process app_repeat pid: 116190'
00:09:19.517   16:53:12	-- event/event.sh@23 -- # for i in {0..2}
00:09:19.517  spdk_app_start Round 0
00:09:19.517   16:53:12	-- event/event.sh@24 -- # echo 'spdk_app_start Round 0'
00:09:19.517   16:53:12	-- event/event.sh@25 -- # waitforlisten 116190 /var/tmp/spdk-nbd.sock
00:09:19.517   16:53:12	-- common/autotest_common.sh@829 -- # '[' -z 116190 ']'
00:09:19.517   16:53:12	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:09:19.517  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:09:19.517   16:53:12	-- common/autotest_common.sh@834 -- # local max_retries=100
00:09:19.517   16:53:12	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:09:19.517   16:53:12	-- common/autotest_common.sh@838 -- # xtrace_disable
00:09:19.517   16:53:12	-- common/autotest_common.sh@10 -- # set +x
00:09:19.517  [2024-11-19 16:53:12.235836] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:09:19.517  [2024-11-19 16:53:12.236101] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116190 ]
00:09:19.777  [2024-11-19 16:53:12.390519] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2
00:09:19.777  [2024-11-19 16:53:12.459575] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:09:19.777  [2024-11-19 16:53:12.459591] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:09:20.713   16:53:13	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:09:20.713   16:53:13	-- common/autotest_common.sh@862 -- # return 0
00:09:20.713   16:53:13	-- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:09:20.713  Malloc0
00:09:20.713   16:53:13	-- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:09:20.972  Malloc1
00:09:20.972   16:53:13	-- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:09:20.972   16:53:13	-- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:09:20.972   16:53:13	-- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:09:20.972   16:53:13	-- bdev/nbd_common.sh@91 -- # local bdev_list
00:09:20.972   16:53:13	-- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:09:20.972   16:53:13	-- bdev/nbd_common.sh@92 -- # local nbd_list
00:09:20.972   16:53:13	-- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:09:20.972   16:53:13	-- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:09:20.972   16:53:13	-- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:09:20.972   16:53:13	-- bdev/nbd_common.sh@10 -- # local bdev_list
00:09:20.972   16:53:13	-- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:09:20.972   16:53:13	-- bdev/nbd_common.sh@11 -- # local nbd_list
00:09:20.972   16:53:13	-- bdev/nbd_common.sh@12 -- # local i
00:09:20.972   16:53:13	-- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:09:20.972   16:53:13	-- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:09:20.972   16:53:13	-- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:09:21.232  /dev/nbd0
00:09:21.232    16:53:13	-- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:09:21.232   16:53:13	-- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:09:21.232   16:53:13	-- common/autotest_common.sh@866 -- # local nbd_name=nbd0
00:09:21.232   16:53:13	-- common/autotest_common.sh@867 -- # local i
00:09:21.232   16:53:13	-- common/autotest_common.sh@869 -- # (( i = 1 ))
00:09:21.232   16:53:13	-- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:09:21.232   16:53:13	-- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions
00:09:21.232   16:53:13	-- common/autotest_common.sh@871 -- # break
00:09:21.232   16:53:13	-- common/autotest_common.sh@882 -- # (( i = 1 ))
00:09:21.232   16:53:13	-- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:09:21.232   16:53:13	-- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:09:21.232  1+0 records in
00:09:21.232  1+0 records out
00:09:21.232  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000283569 s, 14.4 MB/s
00:09:21.232    16:53:13	-- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:09:21.232   16:53:13	-- common/autotest_common.sh@884 -- # size=4096
00:09:21.232   16:53:13	-- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:09:21.232   16:53:14	-- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:09:21.232   16:53:14	-- common/autotest_common.sh@887 -- # return 0
00:09:21.232   16:53:14	-- bdev/nbd_common.sh@14 -- # (( i++ ))
00:09:21.232   16:53:14	-- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:09:21.232   16:53:14	-- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:09:21.491  /dev/nbd1
00:09:21.491    16:53:14	-- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:09:21.491   16:53:14	-- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:09:21.491   16:53:14	-- common/autotest_common.sh@866 -- # local nbd_name=nbd1
00:09:21.491   16:53:14	-- common/autotest_common.sh@867 -- # local i
00:09:21.491   16:53:14	-- common/autotest_common.sh@869 -- # (( i = 1 ))
00:09:21.491   16:53:14	-- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:09:21.491   16:53:14	-- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions
00:09:21.491   16:53:14	-- common/autotest_common.sh@871 -- # break
00:09:21.491   16:53:14	-- common/autotest_common.sh@882 -- # (( i = 1 ))
00:09:21.491   16:53:14	-- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:09:21.491   16:53:14	-- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:09:21.491  1+0 records in
00:09:21.491  1+0 records out
00:09:21.491  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000372061 s, 11.0 MB/s
00:09:21.491    16:53:14	-- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:09:21.491   16:53:14	-- common/autotest_common.sh@884 -- # size=4096
00:09:21.491   16:53:14	-- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:09:21.491   16:53:14	-- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:09:21.491   16:53:14	-- common/autotest_common.sh@887 -- # return 0
00:09:21.491   16:53:14	-- bdev/nbd_common.sh@14 -- # (( i++ ))
00:09:21.491   16:53:14	-- bdev/nbd_common.sh@14 -- # (( i < 2 ))
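The waitfornbd calls above gate the test on each exported device becoming usable: poll /proc/partitions until the kernel publishes the device, then prove it with a single 4 KiB direct-I/O read whose output file must be non-empty. A sketch of that helper (the scratch path is an assumption; the in-tree version reads into a file under the test directory, as the dd lines above show):

    # Sketch of the waitfornbd helper traced above.
    waitfornbd() {
        local nbd_name=$1 i tmp=/tmp/nbdtest
        for (( i = 1; i <= 20; i++ )); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1
        done
        grep -q -w "$nbd_name" /proc/partitions || return 1
        # One direct read: a device that never transfers data fails here.
        dd if="/dev/$nbd_name" of="$tmp" bs=4096 count=1 iflag=direct || return 1
        [ "$(stat -c %s "$tmp")" != 0 ] || return 1
        rm -f "$tmp"
    }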
00:09:21.491    16:53:14	-- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:09:21.491    16:53:14	-- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:09:21.491     16:53:14	-- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:09:21.750    16:53:14	-- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:09:21.750    {
00:09:21.750      "nbd_device": "/dev/nbd0",
00:09:21.750      "bdev_name": "Malloc0"
00:09:21.750    },
00:09:21.750    {
00:09:21.750      "nbd_device": "/dev/nbd1",
00:09:21.750      "bdev_name": "Malloc1"
00:09:21.750    }
00:09:21.750  ]'
00:09:21.750     16:53:14	-- bdev/nbd_common.sh@64 -- # echo '[
00:09:21.750    {
00:09:21.750      "nbd_device": "/dev/nbd0",
00:09:21.750      "bdev_name": "Malloc0"
00:09:21.750    },
00:09:21.750    {
00:09:21.750      "nbd_device": "/dev/nbd1",
00:09:21.750      "bdev_name": "Malloc1"
00:09:21.750    }
00:09:21.750  ]'
00:09:21.750     16:53:14	-- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:09:21.750    16:53:14	-- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:09:21.750  /dev/nbd1'
00:09:21.750     16:53:14	-- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:09:21.750  /dev/nbd1'
00:09:21.751     16:53:14	-- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:09:21.751    16:53:14	-- bdev/nbd_common.sh@65 -- # count=2
00:09:21.751    16:53:14	-- bdev/nbd_common.sh@66 -- # echo 2
00:09:21.751   16:53:14	-- bdev/nbd_common.sh@95 -- # count=2
00:09:21.751   16:53:14	-- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:09:21.751   16:53:14	-- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:09:21.751   16:53:14	-- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:09:21.751   16:53:14	-- bdev/nbd_common.sh@70 -- # local nbd_list
00:09:21.751   16:53:14	-- bdev/nbd_common.sh@71 -- # local operation=write
00:09:21.751   16:53:14	-- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:09:21.751   16:53:14	-- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:09:21.751   16:53:14	-- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256
00:09:21.751  256+0 records in
00:09:21.751  256+0 records out
00:09:21.751  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00718991 s, 146 MB/s
00:09:21.751   16:53:14	-- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:09:21.751   16:53:14	-- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:09:22.010  256+0 records in
00:09:22.010  256+0 records out
00:09:22.010  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0289534 s, 36.2 MB/s
00:09:22.010   16:53:14	-- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:09:22.010   16:53:14	-- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:09:22.010  256+0 records in
00:09:22.010  256+0 records out
00:09:22.010  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0332105 s, 31.6 MB/s
00:09:22.010   16:53:14	-- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:09:22.010   16:53:14	-- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:09:22.010   16:53:14	-- bdev/nbd_common.sh@70 -- # local nbd_list
00:09:22.010   16:53:14	-- bdev/nbd_common.sh@71 -- # local operation=verify
00:09:22.010   16:53:14	-- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:09:22.010   16:53:14	-- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:09:22.010   16:53:14	-- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:09:22.010   16:53:14	-- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:09:22.010   16:53:14	-- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0
00:09:22.010   16:53:14	-- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:09:22.010   16:53:14	-- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1
00:09:22.010   16:53:14	-- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
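The write/verify pass above is nbd_dd_data_verify: it generates 1 MiB of random data once, writes the same bytes through every nbd device with direct I/O, then byte-compares each device against the source file, so a corrupted or reordered write path fails the cmp. A sketch of that flow (the scratch path is an assumption; the traced run keeps it under the test directory):

    # Sketch of the nbd_dd_data_verify flow traced above.
    nbd_dd_data_verify() {
        local nbd_list=($1) operation=$2 tmp_file=/tmp/nbdrandtest dev
        if [ "$operation" = write ]; then
            dd if=/dev/urandom of="$tmp_file" bs=4096 count=256   # 1 MiB of random data
            for dev in "${nbd_list[@]}"; do
                dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct
            done
        elif [ "$operation" = verify ]; then
            for dev in "${nbd_list[@]}"; do
                cmp -b -n 1M "$tmp_file" "$dev"   # byte-for-byte compare
            done
            rm "$tmp_file"
        fi
    }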
00:09:22.010   16:53:14	-- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:09:22.010   16:53:14	-- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:09:22.010   16:53:14	-- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:09:22.010   16:53:14	-- bdev/nbd_common.sh@50 -- # local nbd_list
00:09:22.010   16:53:14	-- bdev/nbd_common.sh@51 -- # local i
00:09:22.010   16:53:14	-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:09:22.010   16:53:14	-- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:09:22.269    16:53:14	-- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:09:22.269   16:53:14	-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:09:22.269   16:53:14	-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:09:22.269   16:53:14	-- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:09:22.269   16:53:14	-- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:09:22.269   16:53:14	-- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:09:22.269   16:53:14	-- bdev/nbd_common.sh@41 -- # break
00:09:22.269   16:53:14	-- bdev/nbd_common.sh@45 -- # return 0
00:09:22.269   16:53:14	-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:09:22.269   16:53:14	-- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:09:22.528    16:53:15	-- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:09:22.528   16:53:15	-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:09:22.528   16:53:15	-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:09:22.528   16:53:15	-- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:09:22.528   16:53:15	-- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:09:22.528   16:53:15	-- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:09:22.528   16:53:15	-- bdev/nbd_common.sh@41 -- # break
00:09:22.528   16:53:15	-- bdev/nbd_common.sh@45 -- # return 0
00:09:22.528    16:53:15	-- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:09:22.528    16:53:15	-- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:09:22.528     16:53:15	-- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:09:22.786    16:53:15	-- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:09:22.786     16:53:15	-- bdev/nbd_common.sh@64 -- # echo '[]'
00:09:22.786     16:53:15	-- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:09:22.786    16:53:15	-- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:09:22.786     16:53:15	-- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:09:22.786     16:53:15	-- bdev/nbd_common.sh@65 -- # echo ''
00:09:22.786     16:53:15	-- bdev/nbd_common.sh@65 -- # true
00:09:22.786    16:53:15	-- bdev/nbd_common.sh@65 -- # count=0
00:09:22.786    16:53:15	-- bdev/nbd_common.sh@66 -- # echo 0
00:09:22.786   16:53:15	-- bdev/nbd_common.sh@104 -- # count=0
00:09:22.786   16:53:15	-- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:09:22.786   16:53:15	-- bdev/nbd_common.sh@109 -- # return 0
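nbd_get_count, traced twice above, asks the app for its exported disks over the dedicated /var/tmp/spdk-nbd.sock socket and counts /dev/nbd entries in the JSON with jq and grep; after both nbd_stop_disk calls the count drops to 0, which is exactly what the '[' 0 -ne 0 ']' check asserts. A sketch of the helper:

    # Sketch of the nbd_get_count helper traced above.
    nbd_get_count() {
        local rpc_server=$1
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_server" nbd_get_disks \
            | jq -r '.[] | .nbd_device' \
            | grep -c /dev/nbd || true   # grep -c exits 1 on zero matches
    }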
00:09:22.786   16:53:15	-- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:09:23.052   16:53:15	-- event/event.sh@35 -- # sleep 3
00:09:23.328  [2024-11-19 16:53:16.148137] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2
00:09:23.598  [2024-11-19 16:53:16.222078] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:09:23.598  [2024-11-19 16:53:16.222081] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:09:23.598  [2024-11-19 16:53:16.299079] notify.c:  45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:09:23.598  [2024-11-19 16:53:16.299444] notify.c:  45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:09:26.132   16:53:18	-- event/event.sh@23 -- # for i in {0..2}
00:09:26.132  spdk_app_start Round 1
00:09:26.132   16:53:18	-- event/event.sh@24 -- # echo 'spdk_app_start Round 1'
00:09:26.132   16:53:18	-- event/event.sh@25 -- # waitforlisten 116190 /var/tmp/spdk-nbd.sock
00:09:26.132   16:53:18	-- common/autotest_common.sh@829 -- # '[' -z 116190 ']'
00:09:26.132   16:53:18	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:09:26.132   16:53:18	-- common/autotest_common.sh@834 -- # local max_retries=100
00:09:26.132  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:09:26.132   16:53:18	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:09:26.132   16:53:18	-- common/autotest_common.sh@838 -- # xtrace_disable
00:09:26.132   16:53:18	-- common/autotest_common.sh@10 -- # set +x
00:09:26.391   16:53:19	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:09:26.391   16:53:19	-- common/autotest_common.sh@862 -- # return 0
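
waitforlisten blocks the test until the freshly restarted app answers on its RPC socket; its polling loop runs under xtrace_disable, so only the entry (@829-@838) and exit (@858-@862) appear above. A plausible sketch under that assumption (the retry cadence and the rpc_get_methods probe are assumptions, since the loop body is hidden):

    function waitforlisten() {
        local pid=$1
        local rpc_addr=${2:-/var/tmp/spdk.sock}
        local max_retries=100 i

        [ -n "$pid" ] || return 1
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for ((i = 0; i < max_retries; i++)); do
            kill -0 "$pid" 2> /dev/null || return 1   # give up if the app died
            if scripts/rpc.py -s "$rpc_addr" -t 1 rpc_get_methods &> /dev/null; then
                return 0   # socket is up and answering RPCs
            fi
            sleep 0.5
        done
        return 1
    }
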
00:09:26.391   16:53:19	-- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:09:26.649  Malloc0
00:09:26.649   16:53:19	-- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:09:26.649  Malloc1
00:09:26.649   16:53:19	-- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:09:26.649   16:53:19	-- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:09:26.649   16:53:19	-- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:09:26.649   16:53:19	-- bdev/nbd_common.sh@91 -- # local bdev_list
00:09:26.649   16:53:19	-- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:09:26.649   16:53:19	-- bdev/nbd_common.sh@92 -- # local nbd_list
00:09:26.649   16:53:19	-- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:09:26.649   16:53:19	-- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:09:26.649   16:53:19	-- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:09:26.649   16:53:19	-- bdev/nbd_common.sh@10 -- # local bdev_list
00:09:26.649   16:53:19	-- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:09:26.649   16:53:19	-- bdev/nbd_common.sh@11 -- # local nbd_list
00:09:26.649   16:53:19	-- bdev/nbd_common.sh@12 -- # local i
00:09:26.649   16:53:19	-- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:09:26.649   16:53:19	-- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:09:26.908   16:53:19	-- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:09:26.908  /dev/nbd0
00:09:26.908    16:53:19	-- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:09:26.908   16:53:19	-- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:09:26.908   16:53:19	-- common/autotest_common.sh@866 -- # local nbd_name=nbd0
00:09:26.908   16:53:19	-- common/autotest_common.sh@867 -- # local i
00:09:26.908   16:53:19	-- common/autotest_common.sh@869 -- # (( i = 1 ))
00:09:26.908   16:53:19	-- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:09:26.908   16:53:19	-- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions
00:09:26.908   16:53:19	-- common/autotest_common.sh@871 -- # break
00:09:26.908   16:53:19	-- common/autotest_common.sh@882 -- # (( i = 1 ))
00:09:26.908   16:53:19	-- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:09:26.908   16:53:19	-- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:09:26.908  1+0 records in
00:09:26.908  1+0 records out
00:09:26.908  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000461733 s, 8.9 MB/s
00:09:26.908    16:53:19	-- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:09:26.908   16:53:19	-- common/autotest_common.sh@884 -- # size=4096
00:09:26.908   16:53:19	-- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:09:26.908   16:53:19	-- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:09:26.908   16:53:19	-- common/autotest_common.sh@887 -- # return 0
00:09:26.908   16:53:19	-- bdev/nbd_common.sh@14 -- # (( i++ ))
00:09:26.908   16:53:19	-- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:09:26.908   16:53:19	-- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:09:27.167  /dev/nbd1
00:09:27.167    16:53:19	-- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:09:27.167   16:53:19	-- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:09:27.167   16:53:19	-- common/autotest_common.sh@866 -- # local nbd_name=nbd1
00:09:27.167   16:53:19	-- common/autotest_common.sh@867 -- # local i
00:09:27.167   16:53:19	-- common/autotest_common.sh@869 -- # (( i = 1 ))
00:09:27.167   16:53:19	-- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:09:27.167   16:53:19	-- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions
00:09:27.167   16:53:19	-- common/autotest_common.sh@871 -- # break
00:09:27.167   16:53:19	-- common/autotest_common.sh@882 -- # (( i = 1 ))
00:09:27.167   16:53:19	-- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:09:27.167   16:53:19	-- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:09:27.167  1+0 records in
00:09:27.167  1+0 records out
00:09:27.167  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000386989 s, 10.6 MB/s
00:09:27.167    16:53:19	-- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:09:27.167   16:53:19	-- common/autotest_common.sh@884 -- # size=4096
00:09:27.167   16:53:19	-- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:09:27.167   16:53:19	-- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:09:27.167   16:53:19	-- common/autotest_common.sh@887 -- # return 0
00:09:27.167   16:53:19	-- bdev/nbd_common.sh@14 -- # (( i++ ))
00:09:27.167   16:53:19	-- bdev/nbd_common.sh@14 -- # (( i < 2 ))
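
Each nbd_start_disk above is followed by waitfornbd, which first waits for the device to show up in /proc/partitions and then proves it is actually readable with a direct-I/O dd of one 4 KiB block, checking the copied size via stat. A sketch reconstructed from the traced lines (the temp-file path and sleeps are simplified; the log reads into test/event/nbdtest):

    function waitfornbd() {
        local nbd_name=$1
        local i size tmp_file=/tmp/nbdtest

        # wait for the device node to register with the kernel
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1
        done

        # then require a successful direct read of one block
        for ((i = 1; i <= 20; i++)); do
            if dd if=/dev/$nbd_name of="$tmp_file" bs=4096 count=1 iflag=direct; then
                size=$(stat -c %s "$tmp_file")
                rm -f "$tmp_file"
                [ "$size" != 0 ] && return 0
            fi
            sleep 0.1
        done
        return 1
    }
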
00:09:27.167    16:53:19	-- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:09:27.167    16:53:19	-- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:09:27.167     16:53:19	-- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:09:27.426    16:53:20	-- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:09:27.426    {
00:09:27.426      "nbd_device": "/dev/nbd0",
00:09:27.426      "bdev_name": "Malloc0"
00:09:27.426    },
00:09:27.426    {
00:09:27.426      "nbd_device": "/dev/nbd1",
00:09:27.426      "bdev_name": "Malloc1"
00:09:27.426    }
00:09:27.426  ]'
00:09:27.426     16:53:20	-- bdev/nbd_common.sh@64 -- # echo '[
00:09:27.426    {
00:09:27.426      "nbd_device": "/dev/nbd0",
00:09:27.426      "bdev_name": "Malloc0"
00:09:27.426    },
00:09:27.426    {
00:09:27.426      "nbd_device": "/dev/nbd1",
00:09:27.426      "bdev_name": "Malloc1"
00:09:27.426    }
00:09:27.426  ]'
00:09:27.426     16:53:20	-- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:09:27.426    16:53:20	-- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:09:27.426  /dev/nbd1'
00:09:27.686     16:53:20	-- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:09:27.686  /dev/nbd1'
00:09:27.686     16:53:20	-- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:09:27.686    16:53:20	-- bdev/nbd_common.sh@65 -- # count=2
00:09:27.686    16:53:20	-- bdev/nbd_common.sh@66 -- # echo 2
00:09:27.686   16:53:20	-- bdev/nbd_common.sh@95 -- # count=2
00:09:27.686   16:53:20	-- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:09:27.686   16:53:20	-- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:09:27.686   16:53:20	-- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:09:27.686   16:53:20	-- bdev/nbd_common.sh@70 -- # local nbd_list
00:09:27.686   16:53:20	-- bdev/nbd_common.sh@71 -- # local operation=write
00:09:27.686   16:53:20	-- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:09:27.686   16:53:20	-- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:09:27.686   16:53:20	-- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256
00:09:27.686  256+0 records in
00:09:27.686  256+0 records out
00:09:27.686  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.011906 s, 88.1 MB/s
00:09:27.686   16:53:20	-- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:09:27.686   16:53:20	-- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:09:27.686  256+0 records in
00:09:27.686  256+0 records out
00:09:27.686  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0303698 s, 34.5 MB/s
00:09:27.686   16:53:20	-- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:09:27.686   16:53:20	-- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:09:27.686  256+0 records in
00:09:27.686  256+0 records out
00:09:27.686  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0307835 s, 34.1 MB/s
00:09:27.686   16:53:20	-- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:09:27.686   16:53:20	-- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:09:27.686   16:53:20	-- bdev/nbd_common.sh@70 -- # local nbd_list
00:09:27.686   16:53:20	-- bdev/nbd_common.sh@71 -- # local operation=verify
00:09:27.686   16:53:20	-- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:09:27.686   16:53:20	-- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:09:27.686   16:53:20	-- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:09:27.686   16:53:20	-- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:09:27.686   16:53:20	-- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0
00:09:27.686   16:53:20	-- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:09:27.686   16:53:20	-- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1
00:09:27.686   16:53:20	-- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
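
The write/verify pair above is nbd_dd_data_verify: in write mode it fills a 1 MiB scratch file from /dev/urandom and dd's it onto every nbd device with oflag=direct; in verify mode it byte-compares each device against the same file with cmp and then deletes it. A sketch matching the traced commands (the scratch path is abbreviated; the log uses test/event/nbdrandtest):

    function nbd_dd_data_verify() {
        local nbd_list=($1)
        local operation=$2
        local tmp_file=/tmp/nbdrandtest
        local i

        if [ "$operation" = write ]; then
            # 256 x 4 KiB of random data, copied to every device
            dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
            for i in "${nbd_list[@]}"; do
                dd if="$tmp_file" of="$i" bs=4096 count=256 oflag=direct
            done
        elif [ "$operation" = verify ]; then
            for i in "${nbd_list[@]}"; do
                # -b prints differing bytes, -n 1M limits the compare window
                cmp -b -n 1M "$tmp_file" "$i"
            done
            rm "$tmp_file"
        fi
    }
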
00:09:27.686   16:53:20	-- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:09:27.686   16:53:20	-- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:09:27.686   16:53:20	-- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:09:27.686   16:53:20	-- bdev/nbd_common.sh@50 -- # local nbd_list
00:09:27.686   16:53:20	-- bdev/nbd_common.sh@51 -- # local i
00:09:27.686   16:53:20	-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:09:27.686   16:53:20	-- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:09:27.946    16:53:20	-- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:09:27.946   16:53:20	-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:09:27.946   16:53:20	-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:09:27.946   16:53:20	-- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:09:27.946   16:53:20	-- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:09:27.946   16:53:20	-- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:09:27.946   16:53:20	-- bdev/nbd_common.sh@41 -- # break
00:09:27.946   16:53:20	-- bdev/nbd_common.sh@45 -- # return 0
00:09:27.946   16:53:20	-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:09:27.946   16:53:20	-- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:09:27.946    16:53:20	-- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:09:27.946   16:53:20	-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:09:27.946   16:53:20	-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:09:27.946   16:53:20	-- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:09:27.946   16:53:20	-- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:09:27.946   16:53:20	-- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:09:28.205   16:53:20	-- bdev/nbd_common.sh@41 -- # break
00:09:28.205   16:53:20	-- bdev/nbd_common.sh@45 -- # return 0
00:09:28.205    16:53:20	-- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:09:28.205    16:53:20	-- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:09:28.205     16:53:20	-- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:09:28.205    16:53:20	-- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:09:28.205     16:53:21	-- bdev/nbd_common.sh@64 -- # echo '[]'
00:09:28.205     16:53:21	-- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:09:28.205    16:53:21	-- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:09:28.205     16:53:21	-- bdev/nbd_common.sh@65 -- # echo ''
00:09:28.205     16:53:21	-- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:09:28.205     16:53:21	-- bdev/nbd_common.sh@65 -- # true
00:09:28.205    16:53:21	-- bdev/nbd_common.sh@65 -- # count=0
00:09:28.205    16:53:21	-- bdev/nbd_common.sh@66 -- # echo 0
00:09:28.205   16:53:21	-- bdev/nbd_common.sh@104 -- # count=0
00:09:28.205   16:53:21	-- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:09:28.205   16:53:21	-- bdev/nbd_common.sh@109 -- # return 0
00:09:28.205   16:53:21	-- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:09:28.464   16:53:21	-- event/event.sh@35 -- # sleep 3
00:09:29.031  [2024-11-19 16:53:21.596077] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2
00:09:29.031  [2024-11-19 16:53:21.679923] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:09:29.031  [2024-11-19 16:53:21.679931] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:09:29.031  [2024-11-19 16:53:21.758195] notify.c:  45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:09:29.031  [2024-11-19 16:53:21.758536] notify.c:  45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:09:31.566  spdk_app_start Round 2
00:09:31.566  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:09:31.566   16:53:24	-- event/event.sh@23 -- # for i in {0..2}
00:09:31.566   16:53:24	-- event/event.sh@24 -- # echo 'spdk_app_start Round 2'
00:09:31.566   16:53:24	-- event/event.sh@25 -- # waitforlisten 116190 /var/tmp/spdk-nbd.sock
00:09:31.566   16:53:24	-- common/autotest_common.sh@829 -- # '[' -z 116190 ']'
00:09:31.566   16:53:24	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:09:31.566   16:53:24	-- common/autotest_common.sh@834 -- # local max_retries=100
00:09:31.566   16:53:24	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:09:31.566   16:53:24	-- common/autotest_common.sh@838 -- # xtrace_disable
00:09:31.566   16:53:24	-- common/autotest_common.sh@10 -- # set +x
00:09:31.825   16:53:24	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:09:31.825   16:53:24	-- common/autotest_common.sh@862 -- # return 0
00:09:31.825   16:53:24	-- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:09:32.084  Malloc0
00:09:32.084   16:53:24	-- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:09:32.344  Malloc1
00:09:32.344   16:53:25	-- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:09:32.344   16:53:25	-- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:09:32.344   16:53:25	-- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:09:32.344   16:53:25	-- bdev/nbd_common.sh@91 -- # local bdev_list
00:09:32.344   16:53:25	-- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:09:32.344   16:53:25	-- bdev/nbd_common.sh@92 -- # local nbd_list
00:09:32.344   16:53:25	-- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:09:32.344   16:53:25	-- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:09:32.344   16:53:25	-- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:09:32.344   16:53:25	-- bdev/nbd_common.sh@10 -- # local bdev_list
00:09:32.344   16:53:25	-- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:09:32.344   16:53:25	-- bdev/nbd_common.sh@11 -- # local nbd_list
00:09:32.344   16:53:25	-- bdev/nbd_common.sh@12 -- # local i
00:09:32.344   16:53:25	-- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:09:32.344   16:53:25	-- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:09:32.344   16:53:25	-- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:09:32.603  /dev/nbd0
00:09:32.603    16:53:25	-- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:09:32.603   16:53:25	-- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:09:32.603   16:53:25	-- common/autotest_common.sh@866 -- # local nbd_name=nbd0
00:09:32.603   16:53:25	-- common/autotest_common.sh@867 -- # local i
00:09:32.603   16:53:25	-- common/autotest_common.sh@869 -- # (( i = 1 ))
00:09:32.603   16:53:25	-- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:09:32.603   16:53:25	-- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions
00:09:32.603   16:53:25	-- common/autotest_common.sh@871 -- # break
00:09:32.603   16:53:25	-- common/autotest_common.sh@882 -- # (( i = 1 ))
00:09:32.603   16:53:25	-- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:09:32.603   16:53:25	-- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:09:32.603  1+0 records in
00:09:32.603  1+0 records out
00:09:32.603  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000477461 s, 8.6 MB/s
00:09:32.603    16:53:25	-- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:09:32.603   16:53:25	-- common/autotest_common.sh@884 -- # size=4096
00:09:32.603   16:53:25	-- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:09:32.603   16:53:25	-- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:09:32.603   16:53:25	-- common/autotest_common.sh@887 -- # return 0
00:09:32.603   16:53:25	-- bdev/nbd_common.sh@14 -- # (( i++ ))
00:09:32.603   16:53:25	-- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:09:32.603   16:53:25	-- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:09:32.862  /dev/nbd1
00:09:32.862    16:53:25	-- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:09:32.862   16:53:25	-- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:09:32.862   16:53:25	-- common/autotest_common.sh@866 -- # local nbd_name=nbd1
00:09:32.862   16:53:25	-- common/autotest_common.sh@867 -- # local i
00:09:32.862   16:53:25	-- common/autotest_common.sh@869 -- # (( i = 1 ))
00:09:32.862   16:53:25	-- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:09:32.862   16:53:25	-- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions
00:09:32.862   16:53:25	-- common/autotest_common.sh@871 -- # break
00:09:32.862   16:53:25	-- common/autotest_common.sh@882 -- # (( i = 1 ))
00:09:32.862   16:53:25	-- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:09:32.862   16:53:25	-- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:09:32.862  1+0 records in
00:09:32.862  1+0 records out
00:09:32.862  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000656655 s, 6.2 MB/s
00:09:32.862    16:53:25	-- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:09:32.862   16:53:25	-- common/autotest_common.sh@884 -- # size=4096
00:09:32.862   16:53:25	-- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:09:32.862   16:53:25	-- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:09:32.862   16:53:25	-- common/autotest_common.sh@887 -- # return 0
00:09:32.862   16:53:25	-- bdev/nbd_common.sh@14 -- # (( i++ ))
00:09:32.862   16:53:25	-- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:09:32.862    16:53:25	-- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:09:32.862    16:53:25	-- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:09:32.862     16:53:25	-- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:09:33.121    16:53:25	-- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:09:33.121    {
00:09:33.121      "nbd_device": "/dev/nbd0",
00:09:33.121      "bdev_name": "Malloc0"
00:09:33.121    },
00:09:33.121    {
00:09:33.121      "nbd_device": "/dev/nbd1",
00:09:33.121      "bdev_name": "Malloc1"
00:09:33.121    }
00:09:33.121  ]'
00:09:33.121     16:53:25	-- bdev/nbd_common.sh@64 -- # echo '[
00:09:33.121    {
00:09:33.121      "nbd_device": "/dev/nbd0",
00:09:33.121      "bdev_name": "Malloc0"
00:09:33.121    },
00:09:33.121    {
00:09:33.122      "nbd_device": "/dev/nbd1",
00:09:33.122      "bdev_name": "Malloc1"
00:09:33.122    }
00:09:33.122  ]'
00:09:33.122     16:53:25	-- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:09:33.122    16:53:25	-- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:09:33.122  /dev/nbd1'
00:09:33.122     16:53:25	-- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:09:33.122  /dev/nbd1'
00:09:33.122     16:53:25	-- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:09:33.122    16:53:25	-- bdev/nbd_common.sh@65 -- # count=2
00:09:33.122    16:53:25	-- bdev/nbd_common.sh@66 -- # echo 2
00:09:33.122   16:53:25	-- bdev/nbd_common.sh@95 -- # count=2
00:09:33.122   16:53:25	-- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:09:33.122   16:53:25	-- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:09:33.122   16:53:25	-- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:09:33.122   16:53:25	-- bdev/nbd_common.sh@70 -- # local nbd_list
00:09:33.122   16:53:25	-- bdev/nbd_common.sh@71 -- # local operation=write
00:09:33.122   16:53:25	-- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:09:33.122   16:53:25	-- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:09:33.122   16:53:25	-- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256
00:09:33.122  256+0 records in
00:09:33.122  256+0 records out
00:09:33.122  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00763274 s, 137 MB/s
00:09:33.122   16:53:25	-- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:09:33.122   16:53:25	-- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:09:33.122  256+0 records in
00:09:33.122  256+0 records out
00:09:33.122  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0260455 s, 40.3 MB/s
00:09:33.122   16:53:25	-- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:09:33.122   16:53:25	-- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:09:33.122  256+0 records in
00:09:33.122  256+0 records out
00:09:33.122  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0285598 s, 36.7 MB/s
00:09:33.122   16:53:25	-- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:09:33.122   16:53:25	-- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:09:33.122   16:53:25	-- bdev/nbd_common.sh@70 -- # local nbd_list
00:09:33.122   16:53:25	-- bdev/nbd_common.sh@71 -- # local operation=verify
00:09:33.122   16:53:25	-- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:09:33.122   16:53:25	-- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:09:33.122   16:53:25	-- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:09:33.122   16:53:25	-- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:09:33.122   16:53:25	-- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0
00:09:33.122   16:53:25	-- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:09:33.122   16:53:25	-- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1
00:09:33.122   16:53:25	-- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:09:33.122   16:53:25	-- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:09:33.122   16:53:25	-- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:09:33.122   16:53:25	-- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:09:33.122   16:53:25	-- bdev/nbd_common.sh@50 -- # local nbd_list
00:09:33.122   16:53:25	-- bdev/nbd_common.sh@51 -- # local i
00:09:33.122   16:53:25	-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:09:33.122   16:53:25	-- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:09:33.381    16:53:26	-- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:09:33.381   16:53:26	-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:09:33.381   16:53:26	-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:09:33.381   16:53:26	-- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:09:33.381   16:53:26	-- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:09:33.381   16:53:26	-- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:09:33.381   16:53:26	-- bdev/nbd_common.sh@41 -- # break
00:09:33.381   16:53:26	-- bdev/nbd_common.sh@45 -- # return 0
00:09:33.381   16:53:26	-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:09:33.381   16:53:26	-- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:09:33.641    16:53:26	-- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:09:33.641   16:53:26	-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:09:33.641   16:53:26	-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:09:33.641   16:53:26	-- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:09:33.641   16:53:26	-- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:09:33.641   16:53:26	-- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:09:33.641   16:53:26	-- bdev/nbd_common.sh@41 -- # break
00:09:33.641   16:53:26	-- bdev/nbd_common.sh@45 -- # return 0
00:09:33.641    16:53:26	-- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:09:33.903    16:53:26	-- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:09:33.903     16:53:26	-- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:09:33.903    16:53:26	-- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:09:33.903     16:53:26	-- bdev/nbd_common.sh@64 -- # echo '[]'
00:09:33.903     16:53:26	-- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:09:33.903    16:53:26	-- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:09:33.903     16:53:26	-- bdev/nbd_common.sh@65 -- # echo ''
00:09:33.903     16:53:26	-- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:09:33.903     16:53:26	-- bdev/nbd_common.sh@65 -- # true
00:09:33.903    16:53:26	-- bdev/nbd_common.sh@65 -- # count=0
00:09:33.903    16:53:26	-- bdev/nbd_common.sh@66 -- # echo 0
00:09:33.903   16:53:26	-- bdev/nbd_common.sh@104 -- # count=0
00:09:33.903   16:53:26	-- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:09:33.903   16:53:26	-- bdev/nbd_common.sh@109 -- # return 0
00:09:33.903   16:53:26	-- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:09:34.470   16:53:27	-- event/event.sh@35 -- # sleep 3
00:09:34.729  [2024-11-19 16:53:27.359174] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2
00:09:34.729  [2024-11-19 16:53:27.419104] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:09:34.729  [2024-11-19 16:53:27.419109] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:09:34.729  [2024-11-19 16:53:27.497322] notify.c:  45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:09:34.729  [2024-11-19 16:53:27.497669] notify.c:  45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:09:37.302  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:09:37.302   16:53:30	-- event/event.sh@38 -- # waitforlisten 116190 /var/tmp/spdk-nbd.sock
00:09:37.302   16:53:30	-- common/autotest_common.sh@829 -- # '[' -z 116190 ']'
00:09:37.302   16:53:30	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:09:37.302   16:53:30	-- common/autotest_common.sh@834 -- # local max_retries=100
00:09:37.302   16:53:30	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:09:37.302   16:53:30	-- common/autotest_common.sh@838 -- # xtrace_disable
00:09:37.302   16:53:30	-- common/autotest_common.sh@10 -- # set +x
00:09:37.560   16:53:30	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:09:37.560   16:53:30	-- common/autotest_common.sh@862 -- # return 0
00:09:37.560   16:53:30	-- event/event.sh@39 -- # killprocess 116190
00:09:37.560   16:53:30	-- common/autotest_common.sh@936 -- # '[' -z 116190 ']'
00:09:37.560   16:53:30	-- common/autotest_common.sh@940 -- # kill -0 116190
00:09:37.560    16:53:30	-- common/autotest_common.sh@941 -- # uname
00:09:37.560   16:53:30	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:09:37.560    16:53:30	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 116190
00:09:37.560   16:53:30	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:09:37.560   16:53:30	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:09:37.560   16:53:30	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 116190'
00:09:37.560  killing process with pid 116190
00:09:37.560   16:53:30	-- common/autotest_common.sh@955 -- # kill 116190
00:09:37.560   16:53:30	-- common/autotest_common.sh@960 -- # wait 116190
00:09:37.818  spdk_app_start is called in Round 0.
00:09:37.818  Shutdown signal received, stop current app iteration
00:09:37.818  Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 reinitialization...
00:09:37.818  spdk_app_start is called in Round 1.
00:09:37.818  Shutdown signal received, stop current app iteration
00:09:37.818  Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 reinitialization...
00:09:37.818  spdk_app_start is called in Round 2.
00:09:37.818  Shutdown signal received, stop current app iteration
00:09:37.818  Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 reinitialization...
00:09:37.818  spdk_app_start is called in Round 3.
00:09:37.818  Shutdown signal received, stop current app iteration
00:09:37.818   16:53:30	-- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT
00:09:37.818   16:53:30	-- event/event.sh@42 -- # return 0
00:09:37.818  
00:09:37.818  real	0m18.323s
00:09:37.818  user	0m39.773s
00:09:37.818  sys	0m3.377s
00:09:37.818   16:53:30	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:09:37.818   16:53:30	-- common/autotest_common.sh@10 -- # set +x
00:09:37.818  ************************************
00:09:37.818  END TEST app_repeat
00:09:37.818  ************************************
00:09:37.818   16:53:30	-- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 ))
00:09:37.818   16:53:30	-- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh
00:09:37.818   16:53:30	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:09:37.818   16:53:30	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:09:37.818   16:53:30	-- common/autotest_common.sh@10 -- # set +x
00:09:37.818  ************************************
00:09:37.818  START TEST cpu_locks
00:09:37.818  ************************************
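
run_test (common/autotest_common.sh) wraps each sub-suite: it validates that a command follows the test name, prints the START/END banners seen throughout this log, and propagates the command's status. A condensed sketch (timing bookkeeping and the xtrace toggling traced at @1093/@1115 are omitted):

    function run_test() {
        local test_name=$1
        shift
        [ "$#" -ge 1 ] || return 1   # traced above as '[' 2 -le 1 ']'
        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        "$@"
        local rc=$?
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
        return $rc
    }
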
00:09:37.818   16:53:30	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh
00:09:37.818  * Looking for test storage...
00:09:38.077  * Found test storage at /home/vagrant/spdk_repo/spdk/test/event
00:09:38.077    16:53:30	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:09:38.077     16:53:30	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:09:38.077     16:53:30	-- common/autotest_common.sh@1690 -- # lcov --version
00:09:38.077    16:53:30	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:09:38.077    16:53:30	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:09:38.077    16:53:30	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:09:38.077    16:53:30	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:09:38.077    16:53:30	-- scripts/common.sh@335 -- # IFS=.-:
00:09:38.077    16:53:30	-- scripts/common.sh@335 -- # read -ra ver1
00:09:38.077    16:53:30	-- scripts/common.sh@336 -- # IFS=.-:
00:09:38.077    16:53:30	-- scripts/common.sh@336 -- # read -ra ver2
00:09:38.077    16:53:30	-- scripts/common.sh@337 -- # local 'op=<'
00:09:38.077    16:53:30	-- scripts/common.sh@339 -- # ver1_l=2
00:09:38.077    16:53:30	-- scripts/common.sh@340 -- # ver2_l=1
00:09:38.077    16:53:30	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:09:38.077    16:53:30	-- scripts/common.sh@343 -- # case "$op" in
00:09:38.077    16:53:30	-- scripts/common.sh@344 -- # : 1
00:09:38.077    16:53:30	-- scripts/common.sh@363 -- # (( v = 0 ))
00:09:38.077    16:53:30	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:09:38.077     16:53:30	-- scripts/common.sh@364 -- # decimal 1
00:09:38.077     16:53:30	-- scripts/common.sh@352 -- # local d=1
00:09:38.077     16:53:30	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:09:38.077     16:53:30	-- scripts/common.sh@354 -- # echo 1
00:09:38.077    16:53:30	-- scripts/common.sh@364 -- # ver1[v]=1
00:09:38.077     16:53:30	-- scripts/common.sh@365 -- # decimal 2
00:09:38.077     16:53:30	-- scripts/common.sh@352 -- # local d=2
00:09:38.078     16:53:30	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:09:38.078     16:53:30	-- scripts/common.sh@354 -- # echo 2
00:09:38.078    16:53:30	-- scripts/common.sh@365 -- # ver2[v]=2
00:09:38.078    16:53:30	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:09:38.078    16:53:30	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:09:38.078    16:53:30	-- scripts/common.sh@367 -- # return 0
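
The scripts/common.sh trace above is `lt 1.15 2`, a field-by-field version comparison: both strings are split on '.', '-' and ':', missing fields default to 0, and the first unequal field decides. A condensed sketch (the real helper also routes each field through decimal() to sanitize non-numeric parts):

    function cmp_versions() {
        local -a ver1 ver2
        local op=$2 v max a b

        IFS='.-:' read -ra ver1 <<< "$1"
        IFS='.-:' read -ra ver2 <<< "$3"
        max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))

        for ((v = 0; v < max; v++)); do
            a=${ver1[v]:-0} b=${ver2[v]:-0}
            if ((a > b)); then
                [[ $op == ">" || $op == ">=" ]]; return
            elif ((a < b)); then
                [[ $op == "<" || $op == "<=" ]]; return
            fi
        done
        [[ $op == "==" || $op == "<=" || $op == ">=" ]]   # all fields equal
    }

    function lt() { cmp_versions "$1" "<" "$2"; }   # so: lt 1.15 2 -> true
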
00:09:38.078    16:53:30	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:09:38.078    16:53:30	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:09:38.078  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:38.078  		--rc genhtml_branch_coverage=1
00:09:38.078  		--rc genhtml_function_coverage=1
00:09:38.078  		--rc genhtml_legend=1
00:09:38.078  		--rc geninfo_all_blocks=1
00:09:38.078  		--rc geninfo_unexecuted_blocks=1
00:09:38.078  		
00:09:38.078  		'
00:09:38.078    16:53:30	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:09:38.078  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:38.078  		--rc genhtml_branch_coverage=1
00:09:38.078  		--rc genhtml_function_coverage=1
00:09:38.078  		--rc genhtml_legend=1
00:09:38.078  		--rc geninfo_all_blocks=1
00:09:38.078  		--rc geninfo_unexecuted_blocks=1
00:09:38.078  		
00:09:38.078  		'
00:09:38.078    16:53:30	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:09:38.078  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:38.078  		--rc genhtml_branch_coverage=1
00:09:38.078  		--rc genhtml_function_coverage=1
00:09:38.078  		--rc genhtml_legend=1
00:09:38.078  		--rc geninfo_all_blocks=1
00:09:38.078  		--rc geninfo_unexecuted_blocks=1
00:09:38.078  		
00:09:38.078  		'
00:09:38.078    16:53:30	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:09:38.078  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:38.078  		--rc genhtml_branch_coverage=1
00:09:38.078  		--rc genhtml_function_coverage=1
00:09:38.078  		--rc genhtml_legend=1
00:09:38.078  		--rc geninfo_all_blocks=1
00:09:38.078  		--rc geninfo_unexecuted_blocks=1
00:09:38.078  		
00:09:38.078  		'
00:09:38.078   16:53:30	-- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock
00:09:38.078   16:53:30	-- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock
00:09:38.078   16:53:30	-- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT
00:09:38.078   16:53:30	-- event/cpu_locks.sh@166 -- # run_test default_locks default_locks
00:09:38.078   16:53:30	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:09:38.078   16:53:30	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:09:38.078   16:53:30	-- common/autotest_common.sh@10 -- # set +x
00:09:38.078  ************************************
00:09:38.078  START TEST default_locks
00:09:38.078  ************************************
00:09:38.078   16:53:30	-- common/autotest_common.sh@1114 -- # default_locks
00:09:38.078   16:53:30	-- event/cpu_locks.sh@46 -- # spdk_tgt_pid=116708
00:09:38.078   16:53:30	-- event/cpu_locks.sh@47 -- # waitforlisten 116708
00:09:38.078   16:53:30	-- common/autotest_common.sh@829 -- # '[' -z 116708 ']'
00:09:38.078   16:53:30	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:38.078   16:53:30	-- common/autotest_common.sh@834 -- # local max_retries=100
00:09:38.078   16:53:30	-- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
00:09:38.078   16:53:30	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:09:38.078  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:09:38.078   16:53:30	-- common/autotest_common.sh@838 -- # xtrace_disable
00:09:38.078   16:53:30	-- common/autotest_common.sh@10 -- # set +x
00:09:38.078  [2024-11-19 16:53:30.851934] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:09:38.078  [2024-11-19 16:53:30.852118] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116708 ]
00:09:38.337  [2024-11-19 16:53:30.996472] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:38.337  [2024-11-19 16:53:31.041188] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:09:38.337  [2024-11-19 16:53:31.041599] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:09:38.903   16:53:31	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:09:38.903   16:53:31	-- common/autotest_common.sh@862 -- # return 0
00:09:38.903   16:53:31	-- event/cpu_locks.sh@49 -- # locks_exist 116708
00:09:38.903   16:53:31	-- event/cpu_locks.sh@22 -- # lslocks -p 116708
00:09:38.903   16:53:31	-- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
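
locks_exist is the core assertion of this suite: spdk_tgt takes a file lock named spdk_cpu_lock for each CPU core it claims, so the check is simply whether lslocks shows such a lock held by the pid. Sketch matching the traced pipeline at @22:

    function locks_exist() {
        # succeeds iff the given pid currently holds an spdk_cpu_lock file lock
        lslocks -p "$1" | grep -q spdk_cpu_lock
    }
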
00:09:39.469   16:53:32	-- event/cpu_locks.sh@50 -- # killprocess 116708
00:09:39.469   16:53:32	-- common/autotest_common.sh@936 -- # '[' -z 116708 ']'
00:09:39.469   16:53:32	-- common/autotest_common.sh@940 -- # kill -0 116708
00:09:39.469    16:53:32	-- common/autotest_common.sh@941 -- # uname
00:09:39.469   16:53:32	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:09:39.469    16:53:32	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 116708
00:09:39.469   16:53:32	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:09:39.469   16:53:32	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:09:39.469   16:53:32	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 116708'
00:09:39.469  killing process with pid 116708
00:09:39.469   16:53:32	-- common/autotest_common.sh@955 -- # kill 116708
00:09:39.469   16:53:32	-- common/autotest_common.sh@960 -- # wait 116708
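
killprocess, traced above at @936-@960, is the common teardown helper: verify the pid is alive with kill -0, look up its command name (reactor_0 here) via ps, SIGTERM it, and wait for it to be reaped. A sketch of the traced path (the branch taken when the name is 'sudo' is visible at @946, but its body here is an assumption):

    function killprocess() {
        local pid=$1 process_name
        [ -n "$pid" ] || return 1

        if kill -0 "$pid"; then
            if [ "$(uname)" = Linux ]; then
                process_name=$(ps --no-headers -o comm= "$pid")
            fi
            if [ "$process_name" = sudo ]; then
                # assumed: target the wrapped child rather than sudo itself
                pid=$(ps --ppid "$pid" -o pid= | xargs)
            fi
            echo "killing process with pid $pid"
            kill "$pid"
            wait "$pid"
        fi
    }
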
00:09:39.728   16:53:32	-- event/cpu_locks.sh@52 -- # NOT waitforlisten 116708
00:09:39.728   16:53:32	-- common/autotest_common.sh@650 -- # local es=0
00:09:39.728   16:53:32	-- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 116708
00:09:39.728   16:53:32	-- common/autotest_common.sh@638 -- # local arg=waitforlisten
00:09:39.728   16:53:32	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:09:39.728    16:53:32	-- common/autotest_common.sh@642 -- # type -t waitforlisten
00:09:39.728   16:53:32	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:09:39.728   16:53:32	-- common/autotest_common.sh@653 -- # waitforlisten 116708
00:09:39.728   16:53:32	-- common/autotest_common.sh@829 -- # '[' -z 116708 ']'
00:09:39.728   16:53:32	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:39.728   16:53:32	-- common/autotest_common.sh@834 -- # local max_retries=100
00:09:39.728  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:09:39.728   16:53:32	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:09:39.728   16:53:32	-- common/autotest_common.sh@838 -- # xtrace_disable
00:09:39.728   16:53:32	-- common/autotest_common.sh@10 -- # set +x
00:09:39.728  ERROR: process (pid: 116708) is no longer running
00:09:39.728  /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (116708) - No such process
00:09:39.728   16:53:32	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:09:39.728   16:53:32	-- common/autotest_common.sh@862 -- # return 1
00:09:39.728   16:53:32	-- common/autotest_common.sh@653 -- # es=1
00:09:39.728   16:53:32	-- common/autotest_common.sh@661 -- # (( es > 128 ))
00:09:39.728   16:53:32	-- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:09:39.728   16:53:32	-- common/autotest_common.sh@677 -- # (( !es == 0 ))
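
The @650-@677 trace is the NOT wrapper inverting waitforlisten: the second waitforlisten fails because the target is gone, es becomes 1, and NOT turns that failure into the success the test expects. A reduced sketch (valid_exec_arg's type check and the signal-exit handling at @661/@672 are simplified away):

    function NOT() {
        local es=0
        "$@" || es=$?
        # succeed only when the wrapped command failed
        ((!es == 0))
    }

    # e.g. NOT waitforlisten 116708  ->  exit 0 once that pid no longer listens
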
00:09:39.728   16:53:32	-- event/cpu_locks.sh@54 -- # no_locks
00:09:39.728   16:53:32	-- event/cpu_locks.sh@26 -- # lock_files=()
00:09:39.728   16:53:32	-- event/cpu_locks.sh@26 -- # local lock_files
00:09:39.728   16:53:32	-- event/cpu_locks.sh@27 -- # (( 0 != 0 ))
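
no_locks asserts that no stale CPU-core lock files survive the target's exit; the trace only shows an empty lock_files array being length-checked at @27, so the glob that would populate it is an assumption here:

    function no_locks() {
        local lock_files=()
        local f
        # hypothetical pattern - the actual lock-file path is not visible in this trace
        for f in /var/tmp/spdk_cpu_lock*; do
            [ -e "$f" ] && lock_files+=("$f")
        done
        (( ${#lock_files[@]} != 0 )) && return 1
        return 0
    }
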
00:09:39.728  
00:09:39.728  real	0m1.705s
00:09:39.728  user	0m1.738s
00:09:39.728  sys	0m0.589s
00:09:39.728   16:53:32	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:09:39.728  ************************************
00:09:39.728  END TEST default_locks
00:09:39.728   16:53:32	-- common/autotest_common.sh@10 -- # set +x
00:09:39.728  ************************************
00:09:39.728   16:53:32	-- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc
00:09:39.728   16:53:32	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:09:39.728   16:53:32	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:09:39.728   16:53:32	-- common/autotest_common.sh@10 -- # set +x
00:09:39.728  ************************************
00:09:39.728  START TEST default_locks_via_rpc
00:09:39.728  ************************************
00:09:39.728   16:53:32	-- common/autotest_common.sh@1114 -- # default_locks_via_rpc
00:09:39.728   16:53:32	-- event/cpu_locks.sh@62 -- # spdk_tgt_pid=116762
00:09:39.728   16:53:32	-- event/cpu_locks.sh@63 -- # waitforlisten 116762
00:09:39.728   16:53:32	-- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
00:09:39.728   16:53:32	-- common/autotest_common.sh@829 -- # '[' -z 116762 ']'
00:09:39.728   16:53:32	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:39.728   16:53:32	-- common/autotest_common.sh@834 -- # local max_retries=100
00:09:39.728   16:53:32	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:09:39.729  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:09:39.729   16:53:32	-- common/autotest_common.sh@838 -- # xtrace_disable
00:09:39.729   16:53:32	-- common/autotest_common.sh@10 -- # set +x
00:09:39.986  [2024-11-19 16:53:32.643606] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:09:39.986  [2024-11-19 16:53:32.643808] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116762 ]
00:09:39.986  [2024-11-19 16:53:32.786378] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:39.986  [2024-11-19 16:53:32.837866] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:09:39.986  [2024-11-19 16:53:32.838293] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:09:40.920   16:53:33	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:09:40.920   16:53:33	-- common/autotest_common.sh@862 -- # return 0
00:09:40.920   16:53:33	-- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks
00:09:40.920   16:53:33	-- common/autotest_common.sh@561 -- # xtrace_disable
00:09:40.920   16:53:33	-- common/autotest_common.sh@10 -- # set +x
00:09:40.920   16:53:33	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:40.920   16:53:33	-- event/cpu_locks.sh@67 -- # no_locks
00:09:40.920   16:53:33	-- event/cpu_locks.sh@26 -- # lock_files=()
00:09:40.920   16:53:33	-- event/cpu_locks.sh@26 -- # local lock_files
00:09:40.920   16:53:33	-- event/cpu_locks.sh@27 -- # (( 0 != 0 ))
00:09:40.920   16:53:33	-- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks
00:09:40.920   16:53:33	-- common/autotest_common.sh@561 -- # xtrace_disable
00:09:40.920   16:53:33	-- common/autotest_common.sh@10 -- # set +x
00:09:40.920   16:53:33	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:40.920   16:53:33	-- event/cpu_locks.sh@71 -- # locks_exist 116762
00:09:40.920   16:53:33	-- event/cpu_locks.sh@22 -- # lslocks -p 116762
00:09:40.920   16:53:33	-- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:09:41.178   16:53:33	-- event/cpu_locks.sh@73 -- # killprocess 116762
00:09:41.178   16:53:33	-- common/autotest_common.sh@936 -- # '[' -z 116762 ']'
00:09:41.178   16:53:33	-- common/autotest_common.sh@940 -- # kill -0 116762
00:09:41.178    16:53:33	-- common/autotest_common.sh@941 -- # uname
00:09:41.178   16:53:33	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:09:41.178    16:53:33	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 116762
00:09:41.178   16:53:33	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:09:41.178  killing process with pid 116762
00:09:41.178   16:53:33	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:09:41.178   16:53:33	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 116762'
00:09:41.178   16:53:33	-- common/autotest_common.sh@955 -- # kill 116762
00:09:41.178   16:53:33	-- common/autotest_common.sh@960 -- # wait 116762
00:09:41.437  
00:09:41.437  real	0m1.671s
00:09:41.437  user	0m1.649s
00:09:41.437  sys	0m0.619s
00:09:41.437   16:53:34	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:09:41.437   16:53:34	-- common/autotest_common.sh@10 -- # set +x
00:09:41.437  ************************************
00:09:41.437  END TEST default_locks_via_rpc
00:09:41.437  ************************************
00:09:41.437   16:53:34	-- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask
00:09:41.437   16:53:34	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:09:41.437   16:53:34	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:09:41.437   16:53:34	-- common/autotest_common.sh@10 -- # set +x
00:09:41.695  ************************************
00:09:41.695  START TEST non_locking_app_on_locked_coremask
00:09:41.695  ************************************
00:09:41.695   16:53:34	-- common/autotest_common.sh@1114 -- # non_locking_app_on_locked_coremask
00:09:41.695   16:53:34	-- event/cpu_locks.sh@80 -- # spdk_tgt_pid=116817
00:09:41.695   16:53:34	-- event/cpu_locks.sh@81 -- # waitforlisten 116817 /var/tmp/spdk.sock
00:09:41.696   16:53:34	-- common/autotest_common.sh@829 -- # '[' -z 116817 ']'
00:09:41.696   16:53:34	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:41.696   16:53:34	-- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
00:09:41.696   16:53:34	-- common/autotest_common.sh@834 -- # local max_retries=100
00:09:41.696  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:09:41.696   16:53:34	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:09:41.696   16:53:34	-- common/autotest_common.sh@838 -- # xtrace_disable
00:09:41.696   16:53:34	-- common/autotest_common.sh@10 -- # set +x
00:09:41.696  [2024-11-19 16:53:34.388070] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:09:41.696  [2024-11-19 16:53:34.388330] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116817 ]
00:09:41.696  [2024-11-19 16:53:34.548963] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:41.954  [2024-11-19 16:53:34.599556] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:09:41.954  [2024-11-19 16:53:34.600058] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:09:42.521   16:53:35	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:09:42.521   16:53:35	-- common/autotest_common.sh@862 -- # return 0
00:09:42.521   16:53:35	-- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=116838
00:09:42.521   16:53:35	-- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock
00:09:42.521   16:53:35	-- event/cpu_locks.sh@85 -- # waitforlisten 116838 /var/tmp/spdk2.sock
00:09:42.521   16:53:35	-- common/autotest_common.sh@829 -- # '[' -z 116838 ']'
00:09:42.521   16:53:35	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock
00:09:42.521   16:53:35	-- common/autotest_common.sh@834 -- # local max_retries=100
00:09:42.521   16:53:35	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:09:42.521  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:09:42.521   16:53:35	-- common/autotest_common.sh@838 -- # xtrace_disable
00:09:42.521   16:53:35	-- common/autotest_common.sh@10 -- # set +x
00:09:42.781  [2024-11-19 16:53:35.396959] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:09:42.781  [2024-11-19 16:53:35.397240] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116838 ]
00:09:42.781  [2024-11-19 16:53:35.548305] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:09:42.781  [2024-11-19 16:53:35.562895] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:43.040  [2024-11-19 16:53:35.667042] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:09:43.040  [2024-11-19 16:53:35.677875] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:09:43.644   16:53:36	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:09:43.644   16:53:36	-- common/autotest_common.sh@862 -- # return 0
00:09:43.644   16:53:36	-- event/cpu_locks.sh@87 -- # locks_exist 116817
00:09:43.644   16:53:36	-- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:09:43.644   16:53:36	-- event/cpu_locks.sh@22 -- # lslocks -p 116817
00:09:44.217   16:53:36	-- event/cpu_locks.sh@89 -- # killprocess 116817
00:09:44.217   16:53:36	-- common/autotest_common.sh@936 -- # '[' -z 116817 ']'
00:09:44.217   16:53:36	-- common/autotest_common.sh@940 -- # kill -0 116817
00:09:44.217    16:53:36	-- common/autotest_common.sh@941 -- # uname
00:09:44.217   16:53:36	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:09:44.217    16:53:36	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 116817
00:09:44.217   16:53:36	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:09:44.217  killing process with pid 116817
00:09:44.217   16:53:36	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:09:44.217   16:53:36	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 116817'
00:09:44.217   16:53:36	-- common/autotest_common.sh@955 -- # kill 116817
00:09:44.217   16:53:36	-- common/autotest_common.sh@960 -- # wait 116817
00:09:44.854   16:53:37	-- event/cpu_locks.sh@90 -- # killprocess 116838
00:09:44.854   16:53:37	-- common/autotest_common.sh@936 -- # '[' -z 116838 ']'
00:09:44.854   16:53:37	-- common/autotest_common.sh@940 -- # kill -0 116838
00:09:44.854    16:53:37	-- common/autotest_common.sh@941 -- # uname
00:09:44.854   16:53:37	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:09:45.129    16:53:37	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 116838
00:09:45.129   16:53:37	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:09:45.129  killing process with pid 116838
00:09:45.129   16:53:37	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:09:45.129   16:53:37	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 116838'
00:09:45.129   16:53:37	-- common/autotest_common.sh@955 -- # kill 116838
00:09:45.129   16:53:37	-- common/autotest_common.sh@960 -- # wait 116838
00:09:45.387  
00:09:45.387  real	0m3.840s
00:09:45.387  user	0m4.091s
00:09:45.387  sys	0m1.244s
00:09:45.387   16:53:38	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:09:45.387   16:53:38	-- common/autotest_common.sh@10 -- # set +x
00:09:45.387  ************************************
00:09:45.387  END TEST non_locking_app_on_locked_coremask
00:09:45.387  ************************************
00:09:45.387   16:53:38	-- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask
00:09:45.387   16:53:38	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:09:45.387   16:53:38	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:09:45.387   16:53:38	-- common/autotest_common.sh@10 -- # set +x
00:09:45.387  ************************************
00:09:45.387  START TEST locking_app_on_unlocked_coremask
00:09:45.387  ************************************
00:09:45.387   16:53:38	-- common/autotest_common.sh@1114 -- # locking_app_on_unlocked_coremask
00:09:45.387   16:53:38	-- event/cpu_locks.sh@98 -- # spdk_tgt_pid=116914
00:09:45.387   16:53:38	-- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks
00:09:45.387   16:53:38	-- event/cpu_locks.sh@99 -- # waitforlisten 116914 /var/tmp/spdk.sock
00:09:45.387   16:53:38	-- common/autotest_common.sh@829 -- # '[' -z 116914 ']'
00:09:45.387   16:53:38	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:45.387  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:09:45.387   16:53:38	-- common/autotest_common.sh@834 -- # local max_retries=100
00:09:45.387   16:53:38	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:09:45.387   16:53:38	-- common/autotest_common.sh@838 -- # xtrace_disable
00:09:45.387   16:53:38	-- common/autotest_common.sh@10 -- # set +x
00:09:45.645  [2024-11-19 16:53:38.280036] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:09:45.645  [2024-11-19 16:53:38.280304] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116914 ]
00:09:45.645  [2024-11-19 16:53:38.432232] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:09:45.645  [2024-11-19 16:53:38.432554] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:45.645  [2024-11-19 16:53:38.485454] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:09:45.645  [2024-11-19 16:53:38.485924] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:09:46.579   16:53:39	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:09:46.579   16:53:39	-- common/autotest_common.sh@862 -- # return 0
00:09:46.579   16:53:39	-- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=116935
00:09:46.579   16:53:39	-- event/cpu_locks.sh@103 -- # waitforlisten 116935 /var/tmp/spdk2.sock
00:09:46.579   16:53:39	-- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:09:46.579   16:53:39	-- common/autotest_common.sh@829 -- # '[' -z 116935 ']'
00:09:46.579   16:53:39	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock
00:09:46.579   16:53:39	-- common/autotest_common.sh@834 -- # local max_retries=100
00:09:46.579   16:53:39	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:09:46.579  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:09:46.579   16:53:39	-- common/autotest_common.sh@838 -- # xtrace_disable
00:09:46.579   16:53:39	-- common/autotest_common.sh@10 -- # set +x
00:09:46.579  [2024-11-19 16:53:39.395009] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:09:46.579  [2024-11-19 16:53:39.395283] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116935 ]
00:09:46.838  [2024-11-19 16:53:39.552730] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:46.838  [2024-11-19 16:53:39.646660] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:09:46.838  [2024-11-19 16:53:39.663065] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:09:47.773   16:53:40	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:09:47.773   16:53:40	-- common/autotest_common.sh@862 -- # return 0
00:09:47.773   16:53:40	-- event/cpu_locks.sh@105 -- # locks_exist 116935
00:09:47.773   16:53:40	-- event/cpu_locks.sh@22 -- # lslocks -p 116935
00:09:47.773   16:53:40	-- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
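The two cpu_locks.sh@22 traces above implement locks_exist: lslocks lists the file locks held by the given pid, and the check passes only if one of them is on an spdk_cpu_lock file. A minimal sketch of the same check (lock-file naming taken from the later check_remaining_locks trace):

locks_exist() {
    # spdk_tgt flocks /var/tmp/spdk_cpu_lock_NNN for every core it claims
    lslocks -p "$1" | grep -q spdk_cpu_lock
}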
00:09:48.339   16:53:40	-- event/cpu_locks.sh@107 -- # killprocess 116914
00:09:48.339   16:53:40	-- common/autotest_common.sh@936 -- # '[' -z 116914 ']'
00:09:48.339   16:53:40	-- common/autotest_common.sh@940 -- # kill -0 116914
00:09:48.339    16:53:40	-- common/autotest_common.sh@941 -- # uname
00:09:48.339   16:53:40	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:09:48.339    16:53:40	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 116914
00:09:48.339   16:53:40	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:09:48.339   16:53:40	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:09:48.339  killing process with pid 116914
00:09:48.339   16:53:40	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 116914'
00:09:48.339   16:53:40	-- common/autotest_common.sh@955 -- # kill 116914
00:09:48.339   16:53:40	-- common/autotest_common.sh@960 -- # wait 116914
00:09:49.274   16:53:41	-- event/cpu_locks.sh@108 -- # killprocess 116935
00:09:49.274   16:53:41	-- common/autotest_common.sh@936 -- # '[' -z 116935 ']'
00:09:49.274   16:53:41	-- common/autotest_common.sh@940 -- # kill -0 116935
00:09:49.274    16:53:41	-- common/autotest_common.sh@941 -- # uname
00:09:49.274   16:53:41	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:09:49.274    16:53:41	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 116935
00:09:49.274   16:53:41	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:09:49.274   16:53:41	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:09:49.274  killing process with pid 116935
00:09:49.274   16:53:41	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 116935'
00:09:49.274   16:53:41	-- common/autotest_common.sh@955 -- # kill 116935
00:09:49.274   16:53:41	-- common/autotest_common.sh@960 -- # wait 116935
00:09:49.532  
00:09:49.532  real	0m4.084s
00:09:49.532  user	0m4.637s
00:09:49.532  sys	0m1.217s
00:09:49.532   16:53:42	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:09:49.532   16:53:42	-- common/autotest_common.sh@10 -- # set +x
00:09:49.532  ************************************
00:09:49.532  END TEST locking_app_on_unlocked_coremask
00:09:49.532  ************************************
00:09:49.532   16:53:42	-- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask
00:09:49.532   16:53:42	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:09:49.532   16:53:42	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:09:49.532   16:53:42	-- common/autotest_common.sh@10 -- # set +x
00:09:49.532  ************************************
00:09:49.532  START TEST locking_app_on_locked_coremask
00:09:49.532  ************************************
00:09:49.532   16:53:42	-- common/autotest_common.sh@1114 -- # locking_app_on_locked_coremask
00:09:49.532   16:53:42	-- event/cpu_locks.sh@115 -- # spdk_tgt_pid=117009
00:09:49.532   16:53:42	-- event/cpu_locks.sh@116 -- # waitforlisten 117009 /var/tmp/spdk.sock
00:09:49.532   16:53:42	-- common/autotest_common.sh@829 -- # '[' -z 117009 ']'
00:09:49.532   16:53:42	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:49.532   16:53:42	-- common/autotest_common.sh@834 -- # local max_retries=100
00:09:49.532   16:53:42	-- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
00:09:49.532   16:53:42	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:09:49.532  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:09:49.532   16:53:42	-- common/autotest_common.sh@838 -- # xtrace_disable
00:09:49.532   16:53:42	-- common/autotest_common.sh@10 -- # set +x
00:09:49.790  [2024-11-19 16:53:42.422387] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:09:49.790  [2024-11-19 16:53:42.423041] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117009 ]
00:09:49.790  [2024-11-19 16:53:42.568396] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:49.790  [2024-11-19 16:53:42.618451] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:09:49.790  [2024-11-19 16:53:42.618989] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:09:50.726   16:53:43	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:09:50.726   16:53:43	-- common/autotest_common.sh@862 -- # return 0
00:09:50.726   16:53:43	-- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=117030
00:09:50.726   16:53:43	-- event/cpu_locks.sh@120 -- # NOT waitforlisten 117030 /var/tmp/spdk2.sock
00:09:50.726   16:53:43	-- common/autotest_common.sh@650 -- # local es=0
00:09:50.726   16:53:43	-- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 117030 /var/tmp/spdk2.sock
00:09:50.726   16:53:43	-- common/autotest_common.sh@638 -- # local arg=waitforlisten
00:09:50.726   16:53:43	-- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:09:50.726   16:53:43	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:09:50.726    16:53:43	-- common/autotest_common.sh@642 -- # type -t waitforlisten
00:09:50.726   16:53:43	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:09:50.726   16:53:43	-- common/autotest_common.sh@653 -- # waitforlisten 117030 /var/tmp/spdk2.sock
00:09:50.726   16:53:43	-- common/autotest_common.sh@829 -- # '[' -z 117030 ']'
00:09:50.726   16:53:43	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock
00:09:50.726   16:53:43	-- common/autotest_common.sh@834 -- # local max_retries=100
00:09:50.726  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:09:50.726   16:53:43	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:09:50.726   16:53:43	-- common/autotest_common.sh@838 -- # xtrace_disable
00:09:50.726   16:53:43	-- common/autotest_common.sh@10 -- # set +x
00:09:50.726  [2024-11-19 16:53:43.477292] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:09:50.726  [2024-11-19 16:53:43.478035] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117030 ]
00:09:50.985  [2024-11-19 16:53:43.648421] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 117009 has claimed it.
00:09:50.985  [2024-11-19 16:53:43.648733] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
00:09:51.552  ERROR: process (pid: 117030) is no longer running
00:09:51.552  /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (117030) - No such process
00:09:51.552   16:53:44	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:09:51.552   16:53:44	-- common/autotest_common.sh@862 -- # return 1
00:09:51.552   16:53:44	-- common/autotest_common.sh@653 -- # es=1
00:09:51.552   16:53:44	-- common/autotest_common.sh@661 -- # (( es > 128 ))
00:09:51.552   16:53:44	-- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:09:51.552   16:53:44	-- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:09:51.552   16:53:44	-- event/cpu_locks.sh@122 -- # locks_exist 117009
00:09:51.552   16:53:44	-- event/cpu_locks.sh@22 -- # lslocks -p 117009
00:09:51.552   16:53:44	-- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:09:51.810   16:53:44	-- event/cpu_locks.sh@124 -- # killprocess 117009
00:09:51.810   16:53:44	-- common/autotest_common.sh@936 -- # '[' -z 117009 ']'
00:09:51.810   16:53:44	-- common/autotest_common.sh@940 -- # kill -0 117009
00:09:51.810    16:53:44	-- common/autotest_common.sh@941 -- # uname
00:09:51.810   16:53:44	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:09:51.810    16:53:44	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 117009
00:09:51.810   16:53:44	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:09:51.810  killing process with pid 117009
00:09:51.810   16:53:44	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:09:51.810   16:53:44	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 117009'
00:09:51.810   16:53:44	-- common/autotest_common.sh@955 -- # kill 117009
00:09:51.810   16:53:44	-- common/autotest_common.sh@960 -- # wait 117009
00:09:52.068  
00:09:52.068  real	0m2.509s
00:09:52.068  user	0m2.930s
00:09:52.068  sys	0m0.685s
00:09:52.068   16:53:44	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:09:52.068   16:53:44	-- common/autotest_common.sh@10 -- # set +x
00:09:52.068  ************************************
00:09:52.068  END TEST locking_app_on_locked_coremask
00:09:52.068  ************************************
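The NOT/valid_exec_arg traces in this test wrap a command that must fail: waitforlisten on the second target cannot succeed, because that target exits after failing to claim core 0. A reduced sketch of the inversion helper (the real one also records the exit status as es, which the final '(( !es == 0 ))' asserts is non-zero):

NOT() {
    # expected-failure assertion: succeed only when the wrapped command fails
    if "$@"; then
        return 1
    fi
    return 0
}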
00:09:52.068   16:53:44	-- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask
00:09:52.068   16:53:44	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:09:52.068   16:53:44	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:09:52.068   16:53:44	-- common/autotest_common.sh@10 -- # set +x
00:09:52.326  ************************************
00:09:52.326  START TEST locking_overlapped_coremask
00:09:52.326  ************************************
00:09:52.326   16:53:44	-- common/autotest_common.sh@1114 -- # locking_overlapped_coremask
00:09:52.326   16:53:44	-- event/cpu_locks.sh@132 -- # spdk_tgt_pid=117075
00:09:52.326   16:53:44	-- event/cpu_locks.sh@133 -- # waitforlisten 117075 /var/tmp/spdk.sock
00:09:52.326   16:53:44	-- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7
00:09:52.326   16:53:44	-- common/autotest_common.sh@829 -- # '[' -z 117075 ']'
00:09:52.326   16:53:44	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:52.326   16:53:44	-- common/autotest_common.sh@834 -- # local max_retries=100
00:09:52.326   16:53:44	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:09:52.326  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:09:52.326   16:53:44	-- common/autotest_common.sh@838 -- # xtrace_disable
00:09:52.326   16:53:44	-- common/autotest_common.sh@10 -- # set +x
00:09:52.326  [2024-11-19 16:53:45.022416] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:09:52.326  [2024-11-19 16:53:45.022714] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117075 ]
00:09:52.584  [2024-11-19 16:53:45.190034] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3
00:09:52.584  [2024-11-19 16:53:45.247988] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:09:52.584  [2024-11-19 16:53:45.248557] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:09:52.584  [2024-11-19 16:53:45.248738] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:09:52.584  [2024-11-19 16:53:45.248747] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:09:53.150   16:53:45	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:09:53.150   16:53:45	-- common/autotest_common.sh@862 -- # return 0
00:09:53.150   16:53:45	-- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=117098
00:09:53.150   16:53:45	-- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock
00:09:53.150   16:53:45	-- event/cpu_locks.sh@137 -- # NOT waitforlisten 117098 /var/tmp/spdk2.sock
00:09:53.150   16:53:45	-- common/autotest_common.sh@650 -- # local es=0
00:09:53.150   16:53:45	-- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 117098 /var/tmp/spdk2.sock
00:09:53.150   16:53:45	-- common/autotest_common.sh@638 -- # local arg=waitforlisten
00:09:53.150   16:53:45	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:09:53.150    16:53:45	-- common/autotest_common.sh@642 -- # type -t waitforlisten
00:09:53.150   16:53:45	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:09:53.150   16:53:45	-- common/autotest_common.sh@653 -- # waitforlisten 117098 /var/tmp/spdk2.sock
00:09:53.150   16:53:45	-- common/autotest_common.sh@829 -- # '[' -z 117098 ']'
00:09:53.150   16:53:45	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock
00:09:53.150   16:53:45	-- common/autotest_common.sh@834 -- # local max_retries=100
00:09:53.150   16:53:45	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:09:53.150  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:09:53.150   16:53:45	-- common/autotest_common.sh@838 -- # xtrace_disable
00:09:53.150   16:53:45	-- common/autotest_common.sh@10 -- # set +x
00:09:53.150  [2024-11-19 16:53:45.995426] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:09:53.150  [2024-11-19 16:53:45.995687] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117098 ]
00:09:53.410  [2024-11-19 16:53:46.179813] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 117075 has claimed it.
00:09:53.410  [2024-11-19 16:53:46.179932] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
00:09:53.976  /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (117098) - No such process
00:09:53.976  ERROR: process (pid: 117098) is no longer running
00:09:53.976   16:53:46	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:09:53.976   16:53:46	-- common/autotest_common.sh@862 -- # return 1
00:09:53.976   16:53:46	-- common/autotest_common.sh@653 -- # es=1
00:09:53.976   16:53:46	-- common/autotest_common.sh@661 -- # (( es > 128 ))
00:09:53.976   16:53:46	-- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:09:53.976   16:53:46	-- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:09:53.976   16:53:46	-- event/cpu_locks.sh@139 -- # check_remaining_locks
00:09:53.976   16:53:46	-- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*)
00:09:53.976   16:53:46	-- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
00:09:53.976   16:53:46	-- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]]
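check_remaining_locks, traced just above, globs the lock files actually present in /var/tmp and compares them to the set expected for the 0x7 cpumask (cores 0 through 2). A sketch under those assumptions:

check_remaining_locks() {
    local locks=(/var/tmp/spdk_cpu_lock_*)              # what is really on disk
    local expected=(/var/tmp/spdk_cpu_lock_{000..002})  # cores 0-2 for -m 0x7
    [[ "${locks[*]}" == "${expected[*]}" ]]             # exact match; glob output is sorted
}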
00:09:53.976   16:53:46	-- event/cpu_locks.sh@141 -- # killprocess 117075
00:09:53.976   16:53:46	-- common/autotest_common.sh@936 -- # '[' -z 117075 ']'
00:09:53.976   16:53:46	-- common/autotest_common.sh@940 -- # kill -0 117075
00:09:53.976    16:53:46	-- common/autotest_common.sh@941 -- # uname
00:09:53.976   16:53:46	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:09:53.976    16:53:46	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 117075
00:09:53.976   16:53:46	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:09:53.976   16:53:46	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:09:53.976  killing process with pid 117075
00:09:53.976   16:53:46	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 117075'
00:09:53.976   16:53:46	-- common/autotest_common.sh@955 -- # kill 117075
00:09:53.976   16:53:46	-- common/autotest_common.sh@960 -- # wait 117075
00:09:54.542  
00:09:54.542  real	0m2.190s
00:09:54.542  user	0m5.802s
00:09:54.542  sys	0m0.541s
00:09:54.542   16:53:47	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:09:54.542   16:53:47	-- common/autotest_common.sh@10 -- # set +x
00:09:54.542  ************************************
00:09:54.542  END TEST locking_overlapped_coremask
00:09:54.542  ************************************
00:09:54.542   16:53:47	-- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc
00:09:54.542   16:53:47	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:09:54.542   16:53:47	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:09:54.542   16:53:47	-- common/autotest_common.sh@10 -- # set +x
00:09:54.542  ************************************
00:09:54.542  START TEST locking_overlapped_coremask_via_rpc
00:09:54.542  ************************************
00:09:54.542   16:53:47	-- common/autotest_common.sh@1114 -- # locking_overlapped_coremask_via_rpc
00:09:54.542   16:53:47	-- event/cpu_locks.sh@148 -- # spdk_tgt_pid=117152
00:09:54.542   16:53:47	-- event/cpu_locks.sh@149 -- # waitforlisten 117152 /var/tmp/spdk.sock
00:09:54.542   16:53:47	-- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks
00:09:54.542   16:53:47	-- common/autotest_common.sh@829 -- # '[' -z 117152 ']'
00:09:54.542   16:53:47	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:54.542   16:53:47	-- common/autotest_common.sh@834 -- # local max_retries=100
00:09:54.542  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:09:54.542   16:53:47	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:09:54.542   16:53:47	-- common/autotest_common.sh@838 -- # xtrace_disable
00:09:54.542   16:53:47	-- common/autotest_common.sh@10 -- # set +x
00:09:54.542  [2024-11-19 16:53:47.275226] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:09:54.542  [2024-11-19 16:53:47.276218] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117152 ]
00:09:54.800  [2024-11-19 16:53:47.446226] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:09:54.800  [2024-11-19 16:53:47.446631] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3
00:09:54.800  [2024-11-19 16:53:47.531228] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:09:54.800  [2024-11-19 16:53:47.532143] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:09:54.800  [2024-11-19 16:53:47.532291] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:09:54.800  [2024-11-19 16:53:47.532288] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:09:55.366   16:53:48	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:09:55.366   16:53:48	-- common/autotest_common.sh@862 -- # return 0
00:09:55.366   16:53:48	-- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=117170
00:09:55.366   16:53:48	-- event/cpu_locks.sh@153 -- # waitforlisten 117170 /var/tmp/spdk2.sock
00:09:55.366   16:53:48	-- common/autotest_common.sh@829 -- # '[' -z 117170 ']'
00:09:55.366   16:53:48	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock
00:09:55.366  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:09:55.366   16:53:48	-- common/autotest_common.sh@834 -- # local max_retries=100
00:09:55.366   16:53:48	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:09:55.366   16:53:48	-- common/autotest_common.sh@838 -- # xtrace_disable
00:09:55.366   16:53:48	-- common/autotest_common.sh@10 -- # set +x
00:09:55.366   16:53:48	-- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks
00:09:55.625  [2024-11-19 16:53:48.271287] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:09:55.625  [2024-11-19 16:53:48.271532] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117170 ]
00:09:55.625  [2024-11-19 16:53:48.441730] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:09:55.625  [2024-11-19 16:53:48.441817] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3
00:09:55.884  [2024-11-19 16:53:48.589157] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:09:55.884  [2024-11-19 16:53:48.590150] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:09:55.884  [2024-11-19 16:53:48.606964] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:09:55.884  [2024-11-19 16:53:48.606970] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4
00:09:57.261   16:53:49	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:09:57.261   16:53:49	-- common/autotest_common.sh@862 -- # return 0
00:09:57.261   16:53:49	-- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks
00:09:57.261   16:53:49	-- common/autotest_common.sh@561 -- # xtrace_disable
00:09:57.261   16:53:49	-- common/autotest_common.sh@10 -- # set +x
00:09:57.261   16:53:49	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:57.261   16:53:49	-- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
00:09:57.261   16:53:49	-- common/autotest_common.sh@650 -- # local es=0
00:09:57.261   16:53:49	-- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
00:09:57.261   16:53:49	-- common/autotest_common.sh@638 -- # local arg=rpc_cmd
00:09:57.261   16:53:49	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:09:57.261    16:53:49	-- common/autotest_common.sh@642 -- # type -t rpc_cmd
00:09:57.261   16:53:49	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:09:57.261   16:53:49	-- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
00:09:57.261   16:53:49	-- common/autotest_common.sh@561 -- # xtrace_disable
00:09:57.261   16:53:49	-- common/autotest_common.sh@10 -- # set +x
00:09:57.261  [2024-11-19 16:53:49.887108] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 117152 has claimed it.
00:09:57.261  request:
00:09:57.261  {
00:09:57.261  "method": "framework_enable_cpumask_locks",
00:09:57.261  "req_id": 1
00:09:57.261  }
00:09:57.261  Got JSON-RPC error response
00:09:57.261  response:
00:09:57.261  {
00:09:57.261  "code": -32603,
00:09:57.261  "message": "Failed to claim CPU core: 2"
00:09:57.261  }
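This is the RPC variant of the overlap test: both targets start with --disable-cpumask-locks, the first then enables locking over RPC and claims cores 0-2, and the second's attempt fails with -32603 because core 2 is already locked. Roughly the same exchange issued by hand (rpc.py path and socket names assumed from the log):

# first target (default /var/tmp/spdk.sock) claims its cores
scripts/rpc.py framework_enable_cpumask_locks
# second target shares core 2, so this returns the -32603 error shown above
scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks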
00:09:57.261   16:53:49	-- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:09:57.261   16:53:49	-- common/autotest_common.sh@653 -- # es=1
00:09:57.261   16:53:49	-- common/autotest_common.sh@661 -- # (( es > 128 ))
00:09:57.261   16:53:49	-- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:09:57.261   16:53:49	-- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:09:57.261   16:53:49	-- event/cpu_locks.sh@158 -- # waitforlisten 117152 /var/tmp/spdk.sock
00:09:57.261   16:53:49	-- common/autotest_common.sh@829 -- # '[' -z 117152 ']'
00:09:57.261   16:53:49	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:57.261   16:53:49	-- common/autotest_common.sh@834 -- # local max_retries=100
00:09:57.261  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:09:57.261   16:53:49	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:09:57.261   16:53:49	-- common/autotest_common.sh@838 -- # xtrace_disable
00:09:57.261   16:53:49	-- common/autotest_common.sh@10 -- # set +x
00:09:57.519   16:53:50	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:09:57.519   16:53:50	-- common/autotest_common.sh@862 -- # return 0
00:09:57.519   16:53:50	-- event/cpu_locks.sh@159 -- # waitforlisten 117170 /var/tmp/spdk2.sock
00:09:57.519   16:53:50	-- common/autotest_common.sh@829 -- # '[' -z 117170 ']'
00:09:57.519   16:53:50	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock
00:09:57.519   16:53:50	-- common/autotest_common.sh@834 -- # local max_retries=100
00:09:57.519  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:09:57.519   16:53:50	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:09:57.519   16:53:50	-- common/autotest_common.sh@838 -- # xtrace_disable
00:09:57.519   16:53:50	-- common/autotest_common.sh@10 -- # set +x
00:09:57.777   16:53:50	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:09:57.777   16:53:50	-- common/autotest_common.sh@862 -- # return 0
00:09:57.777   16:53:50	-- event/cpu_locks.sh@161 -- # check_remaining_locks
00:09:57.777   16:53:50	-- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*)
00:09:57.777   16:53:50	-- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
00:09:57.777   16:53:50	-- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]]
00:09:57.777  
00:09:57.777  real	0m3.240s
00:09:57.777  user	0m1.412s
00:09:57.777  sys	0m0.279s
00:09:57.777   16:53:50	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:09:57.777   16:53:50	-- common/autotest_common.sh@10 -- # set +x
00:09:57.777  ************************************
00:09:57.777  END TEST locking_overlapped_coremask_via_rpc
00:09:57.777  ************************************
00:09:57.777   16:53:50	-- event/cpu_locks.sh@174 -- # cleanup
00:09:57.777   16:53:50	-- event/cpu_locks.sh@15 -- # [[ -z 117152 ]]
00:09:57.777   16:53:50	-- event/cpu_locks.sh@15 -- # killprocess 117152
00:09:57.777   16:53:50	-- common/autotest_common.sh@936 -- # '[' -z 117152 ']'
00:09:57.777   16:53:50	-- common/autotest_common.sh@940 -- # kill -0 117152
00:09:57.777    16:53:50	-- common/autotest_common.sh@941 -- # uname
00:09:57.777   16:53:50	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:09:57.777    16:53:50	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 117152
00:09:57.777   16:53:50	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:09:57.777   16:53:50	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:09:57.777  killing process with pid 117152
00:09:57.777   16:53:50	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 117152'
00:09:57.777   16:53:50	-- common/autotest_common.sh@955 -- # kill 117152
00:09:57.777   16:53:50	-- common/autotest_common.sh@960 -- # wait 117152
00:09:58.344   16:53:50	-- event/cpu_locks.sh@16 -- # [[ -z 117170 ]]
00:09:58.344   16:53:50	-- event/cpu_locks.sh@16 -- # killprocess 117170
00:09:58.344   16:53:50	-- common/autotest_common.sh@936 -- # '[' -z 117170 ']'
00:09:58.344   16:53:50	-- common/autotest_common.sh@940 -- # kill -0 117170
00:09:58.344    16:53:50	-- common/autotest_common.sh@941 -- # uname
00:09:58.344   16:53:50	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:09:58.344    16:53:50	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 117170
00:09:58.344   16:53:50	-- common/autotest_common.sh@942 -- # process_name=reactor_2
00:09:58.344   16:53:50	-- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']'
00:09:58.344  killing process with pid 117170
00:09:58.344   16:53:50	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 117170'
00:09:58.344   16:53:50	-- common/autotest_common.sh@955 -- # kill 117170
00:09:58.344   16:53:50	-- common/autotest_common.sh@960 -- # wait 117170
00:09:58.912   16:53:51	-- event/cpu_locks.sh@18 -- # rm -f
00:09:58.912   16:53:51	-- event/cpu_locks.sh@1 -- # cleanup
00:09:58.912   16:53:51	-- event/cpu_locks.sh@15 -- # [[ -z 117152 ]]
00:09:58.912   16:53:51	-- event/cpu_locks.sh@15 -- # killprocess 117152
00:09:58.912   16:53:51	-- common/autotest_common.sh@936 -- # '[' -z 117152 ']'
00:09:58.912   16:53:51	-- common/autotest_common.sh@940 -- # kill -0 117152
00:09:58.912  /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (117152) - No such process
00:09:58.912  Process with pid 117152 is not found
00:09:58.912   16:53:51	-- common/autotest_common.sh@963 -- # echo 'Process with pid 117152 is not found'
00:09:58.912   16:53:51	-- event/cpu_locks.sh@16 -- # [[ -z 117170 ]]
00:09:58.912   16:53:51	-- event/cpu_locks.sh@16 -- # killprocess 117170
00:09:58.912   16:53:51	-- common/autotest_common.sh@936 -- # '[' -z 117170 ']'
00:09:58.912   16:53:51	-- common/autotest_common.sh@940 -- # kill -0 117170
00:09:58.912  /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (117170) - No such process
00:09:58.912  Process with pid 117170 is not found
00:09:58.912   16:53:51	-- common/autotest_common.sh@963 -- # echo 'Process with pid 117170 is not found'
00:09:58.912   16:53:51	-- event/cpu_locks.sh@18 -- # rm -f
00:09:58.912  ************************************
00:09:58.912  END TEST cpu_locks
00:09:58.912  ************************************
00:09:58.912  
00:09:58.912  real	0m21.096s
00:09:58.912  user	0m38.458s
00:09:58.912  sys	0m6.548s
00:09:58.912   16:53:51	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:09:58.912   16:53:51	-- common/autotest_common.sh@10 -- # set +x
00:09:58.912  
00:09:58.912  real	0m49.101s
00:09:58.912  user	1m33.941s
00:09:58.912  sys	0m11.070s
00:09:58.912   16:53:51	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:09:58.912   16:53:51	-- common/autotest_common.sh@10 -- # set +x
00:09:58.912  ************************************
00:09:58.912  END TEST event
00:09:58.912  ************************************
00:09:59.171   16:53:51	-- spdk/autotest.sh@175 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh
00:09:59.171   16:53:51	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:09:59.171   16:53:51	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:09:59.171   16:53:51	-- common/autotest_common.sh@10 -- # set +x
00:09:59.171  ************************************
00:09:59.171  START TEST thread
00:09:59.171  ************************************
00:09:59.171   16:53:51	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh
00:09:59.171  * Looking for test storage...
00:09:59.171  * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread
00:09:59.171    16:53:51	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:09:59.171     16:53:51	-- common/autotest_common.sh@1690 -- # lcov --version
00:09:59.171     16:53:51	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:09:59.171    16:53:51	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:09:59.171    16:53:51	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:09:59.172    16:53:51	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:09:59.172    16:53:51	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:09:59.172    16:53:51	-- scripts/common.sh@335 -- # IFS=.-:
00:09:59.172    16:53:51	-- scripts/common.sh@335 -- # read -ra ver1
00:09:59.172    16:53:51	-- scripts/common.sh@336 -- # IFS=.-:
00:09:59.172    16:53:51	-- scripts/common.sh@336 -- # read -ra ver2
00:09:59.172    16:53:51	-- scripts/common.sh@337 -- # local 'op=<'
00:09:59.172    16:53:51	-- scripts/common.sh@339 -- # ver1_l=2
00:09:59.172    16:53:51	-- scripts/common.sh@340 -- # ver2_l=1
00:09:59.172    16:53:51	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:09:59.172    16:53:51	-- scripts/common.sh@343 -- # case "$op" in
00:09:59.172    16:53:51	-- scripts/common.sh@344 -- # : 1
00:09:59.172    16:53:51	-- scripts/common.sh@363 -- # (( v = 0 ))
00:09:59.172    16:53:51	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:09:59.172     16:53:51	-- scripts/common.sh@364 -- # decimal 1
00:09:59.172     16:53:51	-- scripts/common.sh@352 -- # local d=1
00:09:59.172     16:53:51	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:09:59.172     16:53:51	-- scripts/common.sh@354 -- # echo 1
00:09:59.172    16:53:51	-- scripts/common.sh@364 -- # ver1[v]=1
00:09:59.172     16:53:51	-- scripts/common.sh@365 -- # decimal 2
00:09:59.172     16:53:51	-- scripts/common.sh@352 -- # local d=2
00:09:59.172     16:53:51	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:09:59.172     16:53:51	-- scripts/common.sh@354 -- # echo 2
00:09:59.172    16:53:51	-- scripts/common.sh@365 -- # ver2[v]=2
00:09:59.172    16:53:51	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:09:59.172    16:53:51	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:09:59.172    16:53:51	-- scripts/common.sh@367 -- # return 0
00:09:59.172    16:53:51	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:09:59.172    16:53:51	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:09:59.172  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:59.172  		--rc genhtml_branch_coverage=1
00:09:59.172  		--rc genhtml_function_coverage=1
00:09:59.172  		--rc genhtml_legend=1
00:09:59.172  		--rc geninfo_all_blocks=1
00:09:59.172  		--rc geninfo_unexecuted_blocks=1
00:09:59.172  		
00:09:59.172  		'
00:09:59.172    16:53:51	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:09:59.172  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:59.172  		--rc genhtml_branch_coverage=1
00:09:59.172  		--rc genhtml_function_coverage=1
00:09:59.172  		--rc genhtml_legend=1
00:09:59.172  		--rc geninfo_all_blocks=1
00:09:59.172  		--rc geninfo_unexecuted_blocks=1
00:09:59.172  		
00:09:59.172  		'
00:09:59.172    16:53:51	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:09:59.172  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:59.172  		--rc genhtml_branch_coverage=1
00:09:59.172  		--rc genhtml_function_coverage=1
00:09:59.172  		--rc genhtml_legend=1
00:09:59.172  		--rc geninfo_all_blocks=1
00:09:59.172  		--rc geninfo_unexecuted_blocks=1
00:09:59.172  		
00:09:59.172  		'
00:09:59.172    16:53:51	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:09:59.172  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:59.172  		--rc genhtml_branch_coverage=1
00:09:59.172  		--rc genhtml_function_coverage=1
00:09:59.172  		--rc genhtml_legend=1
00:09:59.172  		--rc geninfo_all_blocks=1
00:09:59.172  		--rc geninfo_unexecuted_blocks=1
00:09:59.172  		
00:09:59.172  		'
00:09:59.172   16:53:51	-- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1
00:09:59.172   16:53:51	-- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']'
00:09:59.172   16:53:51	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:09:59.172   16:53:51	-- common/autotest_common.sh@10 -- # set +x
00:09:59.172  ************************************
00:09:59.172  START TEST thread_poller_perf
00:09:59.172  ************************************
00:09:59.172   16:53:51	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1
00:09:59.172  [2024-11-19 16:53:52.029888] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:09:59.172  [2024-11-19 16:53:52.030157] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117327 ]
00:09:59.430  [2024-11-19 16:53:52.186835] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:59.430  [2024-11-19 16:53:52.257169] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:09:59.430  Running 1000 pollers for 1 seconds with 1 microseconds period.
00:10:00.805  ======================================
00:10:00.805  busy:2114355960 (cyc)
00:10:00.805  total_run_count: 289000
00:10:00.805  tsc_hz: 2100000000 (cyc)
00:10:00.805  ======================================
00:10:00.805  poller_cost: 7316 (cyc), 3483 (nsec)
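The poller_cost figures follow directly from the counters above: busy cycles divided by run count, converted to nanoseconds with the 2.1 GHz TSC. Checking the arithmetic in shell:

busy=2114355960 runs=289000 tsc_hz=2100000000
echo "poller_cost: $(( busy / runs )) cyc"                       # 7316
echo "poller_cost: $(( busy / runs * 1000000000 / tsc_hz )) ns"  # 3483

The 0-microsecond run below checks out the same way: 2104980924 / 4092000 = 514 cyc, or about 244 ns.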
00:10:00.805  ************************************
00:10:00.805  END TEST thread_poller_perf
00:10:00.805  ************************************
00:10:00.805  
00:10:00.805  real	0m1.402s
00:10:00.805  user	0m1.209s
00:10:00.805  sys	0m0.093s
00:10:00.805   16:53:53	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:10:00.805   16:53:53	-- common/autotest_common.sh@10 -- # set +x
00:10:00.805   16:53:53	-- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1
00:10:00.805   16:53:53	-- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']'
00:10:00.805   16:53:53	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:10:00.805   16:53:53	-- common/autotest_common.sh@10 -- # set +x
00:10:00.805  ************************************
00:10:00.805  START TEST thread_poller_perf
00:10:00.805  ************************************
00:10:00.805   16:53:53	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1
00:10:00.805  [2024-11-19 16:53:53.491689] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:10:00.805  [2024-11-19 16:53:53.492067] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117372 ]
00:10:00.805  [2024-11-19 16:53:53.649342] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:01.064  [2024-11-19 16:53:53.699117] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:10:01.064  Running 1000 pollers for 1 seconds with 0 microseconds period.
00:10:02.004  ======================================
00:10:02.004  busy:2104980924 (cyc)
00:10:02.004  total_run_count: 4092000
00:10:02.004  tsc_hz: 2100000000 (cyc)
00:10:02.004  ======================================
00:10:02.004  poller_cost: 514 (cyc), 244 (nsec)
00:10:02.004  
00:10:02.004  real	0m1.363s
00:10:02.004  user	0m1.162s
00:10:02.004  sys	0m0.100s
00:10:02.004  ************************************
00:10:02.004  END TEST thread_poller_perf
00:10:02.004  ************************************
00:10:02.004   16:53:54	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:10:02.004   16:53:54	-- common/autotest_common.sh@10 -- # set +x
00:10:02.264   16:53:54	-- thread/thread.sh@17 -- # [[ n != \y ]]
00:10:02.264   16:53:54	-- thread/thread.sh@18 -- # run_test thread_spdk_lock /home/vagrant/spdk_repo/spdk/test/thread/lock/spdk_lock
00:10:02.264   16:53:54	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:10:02.264   16:53:54	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:10:02.264   16:53:54	-- common/autotest_common.sh@10 -- # set +x
00:10:02.264  ************************************
00:10:02.264  START TEST thread_spdk_lock
00:10:02.264  ************************************
00:10:02.264   16:53:54	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/thread/lock/spdk_lock
00:10:02.264  [2024-11-19 16:53:54.921401] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:10:02.264  [2024-11-19 16:53:54.922029] [ DPDK EAL parameters: spdk_lock_test --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117408 ]
00:10:02.264  [2024-11-19 16:53:55.069757] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2
00:10:02.264  [2024-11-19 16:53:55.120345] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:10:02.264  [2024-11-19 16:53:55.120360] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:10:03.200  [2024-11-19 16:53:55.704106] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 957:thread_execute_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0)
00:10:03.200  [2024-11-19 16:53:55.704524] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3064:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 2: Deadlock detected (thread != sspin->thread)
00:10:03.200  [2024-11-19 16:53:55.704660] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3019:sspin_stacks_print: *ERROR*: spinlock 0x56401513f980
00:10:03.200  [2024-11-19 16:53:55.706375] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 852:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0)
00:10:03.200  [2024-11-19 16:53:55.706645] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:1018:thread_execute_timed_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0)
00:10:03.200  [2024-11-19 16:53:55.706795] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 852:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0)
00:10:03.200  Starting test contend
00:10:03.200    Worker    Delay  Wait us  Hold us Total us
00:10:03.200         0        3    94584   205415   299999
00:10:03.200         1        5    55011   305503   360514
00:10:03.200  PASS test contend
00:10:03.200  Starting test hold_by_poller
00:10:03.200  PASS test hold_by_poller
00:10:03.200  Starting test hold_by_message
00:10:03.200  PASS test hold_by_message
00:10:03.200  /home/vagrant/spdk_repo/spdk/test/thread/lock/spdk_lock summary:
00:10:03.200     100014 assertions passed
00:10:03.200          0 assertions failed
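The thread.c *ERROR* lines above are provoked deliberately: spdk_lock exercises the spinlock sanity checks (a lock still held while an SPDK thread leaves its CPU, and deadlock detection on re-acquisition), and the contend table reports each worker's delay, wait, and hold times in microseconds. A run only counts as clean when zero assertions fail; a hypothetical wrapper for that check (binary path taken from the log):

out=$(/home/vagrant/spdk_repo/spdk/test/thread/lock/spdk_lock)
echo "$out" | grep -q ' 0 assertions failed' && echo PASS || echo FAIL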
00:10:03.200  
00:10:03.200  real	0m0.991s
00:10:03.200  user	0m1.395s
00:10:03.200  sys	0m0.081s
00:10:03.200   16:53:55	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:10:03.200   16:53:55	-- common/autotest_common.sh@10 -- # set +x
00:10:03.200  ************************************
00:10:03.200  END TEST thread_spdk_lock
00:10:03.200  ************************************
00:10:03.200  
00:10:03.200  real	0m4.152s
00:10:03.200  user	0m3.968s
00:10:03.200  sys	0m0.483s
00:10:03.200   16:53:55	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:10:03.200   16:53:55	-- common/autotest_common.sh@10 -- # set +x
00:10:03.200  ************************************
00:10:03.200  END TEST thread
00:10:03.200  ************************************
00:10:03.200   16:53:55	-- spdk/autotest.sh@176 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh
00:10:03.200   16:53:55	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:10:03.200   16:53:55	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:10:03.200   16:53:55	-- common/autotest_common.sh@10 -- # set +x
00:10:03.200  ************************************
00:10:03.200  START TEST accel
00:10:03.200  ************************************
00:10:03.200   16:53:55	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh
00:10:03.459  * Looking for test storage...
00:10:03.460  * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel
00:10:03.460    16:53:56	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:10:03.460     16:53:56	-- common/autotest_common.sh@1690 -- # lcov --version
00:10:03.460     16:53:56	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:10:03.460    16:53:56	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:10:03.460    16:53:56	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:10:03.460    16:53:56	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:10:03.460    16:53:56	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:10:03.460    16:53:56	-- scripts/common.sh@335 -- # IFS=.-:
00:10:03.460    16:53:56	-- scripts/common.sh@335 -- # read -ra ver1
00:10:03.460    16:53:56	-- scripts/common.sh@336 -- # IFS=.-:
00:10:03.460    16:53:56	-- scripts/common.sh@336 -- # read -ra ver2
00:10:03.460    16:53:56	-- scripts/common.sh@337 -- # local 'op=<'
00:10:03.460    16:53:56	-- scripts/common.sh@339 -- # ver1_l=2
00:10:03.460    16:53:56	-- scripts/common.sh@340 -- # ver2_l=1
00:10:03.460    16:53:56	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:10:03.460    16:53:56	-- scripts/common.sh@343 -- # case "$op" in
00:10:03.460    16:53:56	-- scripts/common.sh@344 -- # : 1
00:10:03.460    16:53:56	-- scripts/common.sh@363 -- # (( v = 0 ))
00:10:03.460    16:53:56	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:10:03.460     16:53:56	-- scripts/common.sh@364 -- # decimal 1
00:10:03.460     16:53:56	-- scripts/common.sh@352 -- # local d=1
00:10:03.460     16:53:56	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:10:03.460     16:53:56	-- scripts/common.sh@354 -- # echo 1
00:10:03.460    16:53:56	-- scripts/common.sh@364 -- # ver1[v]=1
00:10:03.460     16:53:56	-- scripts/common.sh@365 -- # decimal 2
00:10:03.460     16:53:56	-- scripts/common.sh@352 -- # local d=2
00:10:03.460     16:53:56	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:10:03.460     16:53:56	-- scripts/common.sh@354 -- # echo 2
00:10:03.460    16:53:56	-- scripts/common.sh@365 -- # ver2[v]=2
00:10:03.460    16:53:56	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:10:03.460    16:53:56	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:10:03.460    16:53:56	-- scripts/common.sh@367 -- # return 0
00:10:03.460    16:53:56	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:10:03.460    16:53:56	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:10:03.460  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:03.460  		--rc genhtml_branch_coverage=1
00:10:03.460  		--rc genhtml_function_coverage=1
00:10:03.460  		--rc genhtml_legend=1
00:10:03.460  		--rc geninfo_all_blocks=1
00:10:03.460  		--rc geninfo_unexecuted_blocks=1
00:10:03.460  		
00:10:03.460  		'
00:10:03.460    16:53:56	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:10:03.460  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:03.460  		--rc genhtml_branch_coverage=1
00:10:03.460  		--rc genhtml_function_coverage=1
00:10:03.460  		--rc genhtml_legend=1
00:10:03.460  		--rc geninfo_all_blocks=1
00:10:03.460  		--rc geninfo_unexecuted_blocks=1
00:10:03.460  		
00:10:03.460  		'
00:10:03.460    16:53:56	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:10:03.460  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:03.460  		--rc genhtml_branch_coverage=1
00:10:03.460  		--rc genhtml_function_coverage=1
00:10:03.460  		--rc genhtml_legend=1
00:10:03.460  		--rc geninfo_all_blocks=1
00:10:03.460  		--rc geninfo_unexecuted_blocks=1
00:10:03.460  		
00:10:03.460  		'
00:10:03.460    16:53:56	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:10:03.460  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:03.460  		--rc genhtml_branch_coverage=1
00:10:03.460  		--rc genhtml_function_coverage=1
00:10:03.460  		--rc genhtml_legend=1
00:10:03.460  		--rc geninfo_all_blocks=1
00:10:03.460  		--rc geninfo_unexecuted_blocks=1
00:10:03.460  		
00:10:03.460  		'
00:10:03.460   16:53:56	-- accel/accel.sh@73 -- # declare -A expected_opcs
00:10:03.460   16:53:56	-- accel/accel.sh@74 -- # get_expected_opcs
00:10:03.460   16:53:56	-- accel/accel.sh@57 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR
00:10:03.460   16:53:56	-- accel/accel.sh@59 -- # spdk_tgt_pid=117501
00:10:03.460   16:53:56	-- accel/accel.sh@60 -- # waitforlisten 117501
00:10:03.460   16:53:56	-- common/autotest_common.sh@829 -- # '[' -z 117501 ']'
00:10:03.460   16:53:56	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:10:03.460   16:53:56	-- common/autotest_common.sh@834 -- # local max_retries=100
00:10:03.460   16:53:56	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:10:03.460  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:10:03.460   16:53:56	-- common/autotest_common.sh@838 -- # xtrace_disable
00:10:03.460    16:53:56	-- accel/accel.sh@58 -- # build_accel_config
00:10:03.460   16:53:56	-- common/autotest_common.sh@10 -- # set +x
00:10:03.460   16:53:56	-- accel/accel.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63
00:10:03.460    16:53:56	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:10:03.460    16:53:56	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:10:03.460    16:53:56	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:10:03.460    16:53:56	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:10:03.460    16:53:56	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:10:03.460    16:53:56	-- accel/accel.sh@41 -- # local IFS=,
00:10:03.460    16:53:56	-- accel/accel.sh@42 -- # jq -r .
00:10:03.460  [2024-11-19 16:53:56.244908] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:10:03.460  [2024-11-19 16:53:56.245175] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117501 ]
00:10:03.719  [2024-11-19 16:53:56.401687] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:03.719  [2024-11-19 16:53:56.456516] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:10:03.719  [2024-11-19 16:53:56.457017] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:10:04.286   16:53:57	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:10:04.286   16:53:57	-- common/autotest_common.sh@862 -- # return 0
00:10:04.286   16:53:57	-- accel/accel.sh@62 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]"))
00:10:04.286    16:53:57	-- accel/accel.sh@62 -- # jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]'
00:10:04.286    16:53:57	-- accel/accel.sh@62 -- # rpc_cmd accel_get_opc_assignments
00:10:04.286    16:53:57	-- common/autotest_common.sh@561 -- # xtrace_disable
00:10:04.286    16:53:57	-- common/autotest_common.sh@10 -- # set +x
00:10:04.286    16:53:57	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:04.559   16:53:57	-- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}"
00:10:04.559   16:53:57	-- accel/accel.sh@64 -- # IFS==
00:10:04.559   16:53:57	-- accel/accel.sh@64 -- # read -r opc module
00:10:04.559   16:53:57	-- accel/accel.sh@65 -- # expected_opcs["$opc"]=software
00:10:04.559   16:53:57	-- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}"
00:10:04.559   16:53:57	-- accel/accel.sh@64 -- # IFS==
00:10:04.559   16:53:57	-- accel/accel.sh@64 -- # read -r opc module
00:10:04.559   16:53:57	-- accel/accel.sh@65 -- # expected_opcs["$opc"]=software
00:10:04.559   16:53:57	-- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}"
00:10:04.559   16:53:57	-- accel/accel.sh@64 -- # IFS==
00:10:04.559   16:53:57	-- accel/accel.sh@64 -- # read -r opc module
00:10:04.559   16:53:57	-- accel/accel.sh@65 -- # expected_opcs["$opc"]=software
00:10:04.559   16:53:57	-- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}"
00:10:04.559   16:53:57	-- accel/accel.sh@64 -- # IFS==
00:10:04.559   16:53:57	-- accel/accel.sh@64 -- # read -r opc module
00:10:04.559   16:53:57	-- accel/accel.sh@65 -- # expected_opcs["$opc"]=software
00:10:04.559   16:53:57	-- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}"
00:10:04.559   16:53:57	-- accel/accel.sh@64 -- # IFS==
00:10:04.559   16:53:57	-- accel/accel.sh@64 -- # read -r opc module
00:10:04.559   16:53:57	-- accel/accel.sh@65 -- # expected_opcs["$opc"]=software
00:10:04.560   16:53:57	-- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}"
00:10:04.560   16:53:57	-- accel/accel.sh@64 -- # IFS==
00:10:04.560   16:53:57	-- accel/accel.sh@64 -- # read -r opc module
00:10:04.560   16:53:57	-- accel/accel.sh@65 -- # expected_opcs["$opc"]=software
00:10:04.560   16:53:57	-- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}"
00:10:04.560   16:53:57	-- accel/accel.sh@64 -- # IFS==
00:10:04.560   16:53:57	-- accel/accel.sh@64 -- # read -r opc module
00:10:04.560   16:53:57	-- accel/accel.sh@65 -- # expected_opcs["$opc"]=software
00:10:04.560   16:53:57	-- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}"
00:10:04.560   16:53:57	-- accel/accel.sh@64 -- # IFS==
00:10:04.560   16:53:57	-- accel/accel.sh@64 -- # read -r opc module
00:10:04.560   16:53:57	-- accel/accel.sh@65 -- # expected_opcs["$opc"]=software
00:10:04.560   16:53:57	-- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}"
00:10:04.560   16:53:57	-- accel/accel.sh@64 -- # IFS==
00:10:04.560   16:53:57	-- accel/accel.sh@64 -- # read -r opc module
00:10:04.560   16:53:57	-- accel/accel.sh@65 -- # expected_opcs["$opc"]=software
00:10:04.560   16:53:57	-- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}"
00:10:04.560   16:53:57	-- accel/accel.sh@64 -- # IFS==
00:10:04.560   16:53:57	-- accel/accel.sh@64 -- # read -r opc module
00:10:04.560   16:53:57	-- accel/accel.sh@65 -- # expected_opcs["$opc"]=software
00:10:04.560   16:53:57	-- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}"
00:10:04.560   16:53:57	-- accel/accel.sh@64 -- # IFS==
00:10:04.560   16:53:57	-- accel/accel.sh@64 -- # read -r opc module
00:10:04.560   16:53:57	-- accel/accel.sh@65 -- # expected_opcs["$opc"]=software
00:10:04.560   16:53:57	-- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}"
00:10:04.560   16:53:57	-- accel/accel.sh@64 -- # IFS==
00:10:04.560   16:53:57	-- accel/accel.sh@64 -- # read -r opc module
00:10:04.560   16:53:57	-- accel/accel.sh@65 -- # expected_opcs["$opc"]=software
00:10:04.560   16:53:57	-- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}"
00:10:04.560   16:53:57	-- accel/accel.sh@64 -- # IFS==
00:10:04.560   16:53:57	-- accel/accel.sh@64 -- # read -r opc module
00:10:04.560   16:53:57	-- accel/accel.sh@65 -- # expected_opcs["$opc"]=software
00:10:04.560   16:53:57	-- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}"
00:10:04.560   16:53:57	-- accel/accel.sh@64 -- # IFS==
00:10:04.560   16:53:57	-- accel/accel.sh@64 -- # read -r opc module
00:10:04.560   16:53:57	-- accel/accel.sh@65 -- # expected_opcs["$opc"]=software
00:10:04.560   16:53:57	-- accel/accel.sh@67 -- # killprocess 117501
00:10:04.560   16:53:57	-- common/autotest_common.sh@936 -- # '[' -z 117501 ']'
00:10:04.560   16:53:57	-- common/autotest_common.sh@940 -- # kill -0 117501
00:10:04.560    16:53:57	-- common/autotest_common.sh@941 -- # uname
00:10:04.560   16:53:57	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:10:04.560    16:53:57	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 117501
00:10:04.560   16:53:57	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:10:04.560  killing process with pid 117501
00:10:04.560   16:53:57	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:10:04.560   16:53:57	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 117501'
00:10:04.560   16:53:57	-- common/autotest_common.sh@955 -- # kill 117501
00:10:04.560   16:53:57	-- common/autotest_common.sh@960 -- # wait 117501
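The kill sequence above is autotest_common's killprocess helper: verify the PID is alive with kill -0, confirm the command name isn't sudo, announce, kill, and wait. An approximation of that shape (not the verbatim helper; details beyond what the trace shows are assumptions):

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1
        kill -0 "$pid" || return 1                           # must still be running
        if [ "$(uname)" = Linux ]; then
            local process_name
            process_name=$(ps --no-headers -o comm= "$pid")  # e.g. reactor_0
            [ "$process_name" = sudo ] && return 1           # never signal sudo
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"
    }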
00:10:04.826   16:53:57	-- accel/accel.sh@68 -- # trap - ERR
00:10:04.826   16:53:57	-- accel/accel.sh@81 -- # run_test accel_help accel_perf -h
00:10:04.826   16:53:57	-- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']'
00:10:04.826   16:53:57	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:10:04.826   16:53:57	-- common/autotest_common.sh@10 -- # set +x
00:10:04.826   16:53:57	-- common/autotest_common.sh@1114 -- # accel_perf -h
00:10:04.826   16:53:57	-- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h
00:10:04.826    16:53:57	-- accel/accel.sh@12 -- # build_accel_config
00:10:04.826    16:53:57	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:10:04.826    16:53:57	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:10:04.826    16:53:57	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:10:04.826    16:53:57	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:10:04.826    16:53:57	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:10:04.826    16:53:57	-- accel/accel.sh@41 -- # local IFS=,
00:10:04.826    16:53:57	-- accel/accel.sh@42 -- # jq -r .
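build_accel_config, which shows up before every accel_perf launch, assembles an optional JSON config from the harness flags and hands it to the binary as /dev/fd/62 via process substitution; with every option count at 0 in this run, the document it emits is effectively empty. A hedged sketch follows; only the array, IFS=, and jq -r . steps are visible in the trace, and the JSON wrapper is an assumption:

    build_accel_config() {
        accel_json_cfg=()            # would collect per-flag JSON fragments
        local IFS=,                  # join fragments with commas
        jq -r . <<< "{\"accel\": [${accel_json_cfg[*]}]}"
    }
    accel_perf -c <(build_accel_config) -h   # the <(...) is what appears as /dev/fd/62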
00:10:05.085   16:53:57	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:10:05.085   16:53:57	-- common/autotest_common.sh@10 -- # set +x
00:10:05.085   16:53:57	-- accel/accel.sh@83 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress
00:10:05.085   16:53:57	-- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']'
00:10:05.085   16:53:57	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:10:05.085   16:53:57	-- common/autotest_common.sh@10 -- # set +x
00:10:05.085  ************************************
00:10:05.085  START TEST accel_missing_filename
00:10:05.085  ************************************
00:10:05.085   16:53:57	-- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w compress
00:10:05.085   16:53:57	-- common/autotest_common.sh@650 -- # local es=0
00:10:05.085   16:53:57	-- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w compress
00:10:05.085   16:53:57	-- common/autotest_common.sh@638 -- # local arg=accel_perf
00:10:05.085   16:53:57	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:10:05.085    16:53:57	-- common/autotest_common.sh@642 -- # type -t accel_perf
00:10:05.085   16:53:57	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:10:05.085   16:53:57	-- common/autotest_common.sh@653 -- # accel_perf -t 1 -w compress
00:10:05.085   16:53:57	-- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress
00:10:05.085    16:53:57	-- accel/accel.sh@12 -- # build_accel_config
00:10:05.085    16:53:57	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:10:05.085    16:53:57	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:10:05.085    16:53:57	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:10:05.085    16:53:57	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:10:05.085    16:53:57	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:10:05.085    16:53:57	-- accel/accel.sh@41 -- # local IFS=,
00:10:05.085    16:53:57	-- accel/accel.sh@42 -- # jq -r .
00:10:05.085  [2024-11-19 16:53:57.816207] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:10:05.085  [2024-11-19 16:53:57.816489] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117566 ]
00:10:05.343  [2024-11-19 16:53:57.977752] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:05.343  [2024-11-19 16:53:58.040966] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:10:05.343  [2024-11-19 16:53:58.093569] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:10:05.343  [2024-11-19 16:53:58.175274] accel_perf.c:1385:main: *ERROR*: ERROR starting application
00:10:05.601  A filename is required.
00:10:05.601   16:53:58	-- common/autotest_common.sh@653 -- # es=234
00:10:05.601   16:53:58	-- common/autotest_common.sh@661 -- # (( es > 128 ))
00:10:05.601   16:53:58	-- common/autotest_common.sh@662 -- # es=106
00:10:05.601   16:53:58	-- common/autotest_common.sh@663 -- # case "$es" in
00:10:05.601   16:53:58	-- common/autotest_common.sh@670 -- # es=1
00:10:05.601   16:53:58	-- common/autotest_common.sh@677 -- # (( !es == 0 ))
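The es churn above is the NOT wrapper doing its bookkeeping on an expected failure: exit statuses over 128 are signal deaths, so 128 is subtracted (234 becomes 106), any remaining failure collapses to 1, and the final arithmetic test succeeds precisely because the wrapped command failed. Approximately (the collapse-to-1 step is simplified from the case statement in the trace):

    NOT() {
        local es=0
        "$@" || es=$?
        (( es > 128 )) && es=$((es - 128))   # 234 -> 106: strip the signal bias
        [ "$es" -ne 0 ] && es=1              # collapse any failure code to 1
        (( !es == 0 ))                       # exit 0 only if the command failed
    }
    NOT accel_perf -t 1 -w compress          # passes: a filename is required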
00:10:05.601  
00:10:05.601  real	0m0.545s
00:10:05.601  user	0m0.317s
00:10:05.601  sys	0m0.175s
00:10:05.601   16:53:58	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:10:05.601   16:53:58	-- common/autotest_common.sh@10 -- # set +x
00:10:05.601  ************************************
00:10:05.601  END TEST accel_missing_filename
00:10:05.601  ************************************
00:10:05.601   16:53:58	-- accel/accel.sh@85 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y
00:10:05.601   16:53:58	-- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']'
00:10:05.601   16:53:58	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:10:05.601   16:53:58	-- common/autotest_common.sh@10 -- # set +x
00:10:05.601  ************************************
00:10:05.601  START TEST accel_compress_verify
00:10:05.601  ************************************
00:10:05.601   16:53:58	-- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y
00:10:05.601   16:53:58	-- common/autotest_common.sh@650 -- # local es=0
00:10:05.601   16:53:58	-- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y
00:10:05.601   16:53:58	-- common/autotest_common.sh@638 -- # local arg=accel_perf
00:10:05.601   16:53:58	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:10:05.601    16:53:58	-- common/autotest_common.sh@642 -- # type -t accel_perf
00:10:05.601   16:53:58	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:10:05.601   16:53:58	-- common/autotest_common.sh@653 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y
00:10:05.601   16:53:58	-- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y
00:10:05.601    16:53:58	-- accel/accel.sh@12 -- # build_accel_config
00:10:05.601    16:53:58	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:10:05.601    16:53:58	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:10:05.601    16:53:58	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:10:05.601    16:53:58	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:10:05.601    16:53:58	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:10:05.601    16:53:58	-- accel/accel.sh@41 -- # local IFS=,
00:10:05.601    16:53:58	-- accel/accel.sh@42 -- # jq -r .
00:10:05.601  [2024-11-19 16:53:58.422975] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:10:05.601  [2024-11-19 16:53:58.423502] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117596 ]
00:10:05.859  [2024-11-19 16:53:58.580581] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:05.859  [2024-11-19 16:53:58.636334] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:10:05.859  [2024-11-19 16:53:58.687456] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:10:06.118  [2024-11-19 16:53:58.768836] accel_perf.c:1385:main: *ERROR*: ERROR starting application
00:10:06.118  
00:10:06.118  Compression does not support the verify option, aborting.
00:10:06.118   16:53:58	-- common/autotest_common.sh@653 -- # es=161
00:10:06.118   16:53:58	-- common/autotest_common.sh@661 -- # (( es > 128 ))
00:10:06.118   16:53:58	-- common/autotest_common.sh@662 -- # es=33
00:10:06.118   16:53:58	-- common/autotest_common.sh@663 -- # case "$es" in
00:10:06.118   16:53:58	-- common/autotest_common.sh@670 -- # es=1
00:10:06.118   16:53:58	-- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:10:06.118  
00:10:06.118  real	0m0.520s
00:10:06.118  user	0m0.303s
00:10:06.118  sys	0m0.149s
00:10:06.118  ************************************
00:10:06.118  END TEST accel_compress_verify
00:10:06.118  ************************************
00:10:06.118   16:53:58	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:10:06.118   16:53:58	-- common/autotest_common.sh@10 -- # set +x
00:10:06.118   16:53:58	-- accel/accel.sh@87 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar
00:10:06.118   16:53:58	-- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']'
00:10:06.118   16:53:58	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:10:06.118   16:53:58	-- common/autotest_common.sh@10 -- # set +x
00:10:06.376  ************************************
00:10:06.376  START TEST accel_wrong_workload
00:10:06.376  ************************************
00:10:06.376   16:53:58	-- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w foobar
00:10:06.376   16:53:58	-- common/autotest_common.sh@650 -- # local es=0
00:10:06.376   16:53:58	-- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w foobar
00:10:06.376   16:53:58	-- common/autotest_common.sh@638 -- # local arg=accel_perf
00:10:06.376   16:53:58	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:10:06.376    16:53:58	-- common/autotest_common.sh@642 -- # type -t accel_perf
00:10:06.376   16:53:58	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:10:06.376   16:53:58	-- common/autotest_common.sh@653 -- # accel_perf -t 1 -w foobar
00:10:06.376    16:53:58	-- accel/accel.sh@12 -- # build_accel_config
00:10:06.376   16:53:58	-- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar
00:10:06.376    16:53:58	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:10:06.376    16:53:58	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:10:06.376    16:53:58	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:10:06.376    16:53:58	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:10:06.376    16:53:58	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:10:06.376    16:53:58	-- accel/accel.sh@41 -- # local IFS=,
00:10:06.376    16:53:58	-- accel/accel.sh@42 -- # jq -r .
00:10:06.376  Unsupported workload type: foobar
00:10:06.376  [2024-11-19 16:53:59.014271] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1
00:10:06.376  accel_perf options:
00:10:06.376  	[-h help message]
00:10:06.377  	[-q queue depth per core]
00:10:06.377  	[-C for supported workloads, use this value to configure the io vector size to test (default 1)]
00:10:06.377  	[-T number of threads per core]
00:10:06.377  	[-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)]
00:10:06.377  	[-t time in seconds]
00:10:06.377  	[-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor,
00:10:06.377  	[                                       dif_verify, dif_generate, dif_generate_copy]
00:10:06.377  	[-M assign module to the operation, not compatible with accel_assign_opc RPC]
00:10:06.377  	[-l for compress/decompress workloads, name of uncompressed input file]
00:10:06.377  	[-S for crc32c workload, use this seed value (default 0)]
00:10:06.377  	[-P for compare workload, percentage of operations that should miscompare (percent, default 0)]
00:10:06.377  	[-f for fill workload, use this BYTE value (default 255)]
00:10:06.377  	[-x for xor workload, use this number of source buffers (default, minimum: 2)]
00:10:06.377  	[-y verify result if this switch is on]
00:10:06.377  	[-a tasks to allocate per core (default: same value as -q)]
00:10:06.377  		Can be used to spread operations across a wider range of memory.
00:10:06.377   16:53:59	-- common/autotest_common.sh@653 -- # es=1
00:10:06.377   16:53:59	-- common/autotest_common.sh@661 -- # (( es > 128 ))
00:10:06.377   16:53:59	-- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:10:06.377   16:53:59	-- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:10:06.377  
00:10:06.377  real	0m0.071s
00:10:06.377  user	0m0.079s
00:10:06.377  sys	0m0.030s
00:10:06.377   16:53:59	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:10:06.377  ************************************
00:10:06.377  END TEST accel_wrong_workload
00:10:06.377  ************************************
00:10:06.377   16:53:59	-- common/autotest_common.sh@10 -- # set +x
00:10:06.377   16:53:59	-- accel/accel.sh@89 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1
00:10:06.377   16:53:59	-- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']'
00:10:06.377   16:53:59	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:10:06.377   16:53:59	-- common/autotest_common.sh@10 -- # set +x
00:10:06.377  ************************************
00:10:06.377  START TEST accel_negative_buffers
00:10:06.377  ************************************
00:10:06.377   16:53:59	-- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w xor -y -x -1
00:10:06.377   16:53:59	-- common/autotest_common.sh@650 -- # local es=0
00:10:06.377   16:53:59	-- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1
00:10:06.377   16:53:59	-- common/autotest_common.sh@638 -- # local arg=accel_perf
00:10:06.377   16:53:59	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:10:06.377    16:53:59	-- common/autotest_common.sh@642 -- # type -t accel_perf
00:10:06.377   16:53:59	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:10:06.377   16:53:59	-- common/autotest_common.sh@653 -- # accel_perf -t 1 -w xor -y -x -1
00:10:06.377   16:53:59	-- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1
00:10:06.377    16:53:59	-- accel/accel.sh@12 -- # build_accel_config
00:10:06.377    16:53:59	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:10:06.377    16:53:59	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:10:06.377    16:53:59	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:10:06.377    16:53:59	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:10:06.377    16:53:59	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:10:06.377    16:53:59	-- accel/accel.sh@41 -- # local IFS=,
00:10:06.377    16:53:59	-- accel/accel.sh@42 -- # jq -r .
00:10:06.377  -x option must be non-negative.
00:10:06.377  [2024-11-19 16:53:59.141771] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1
00:10:06.377  accel_perf options:
00:10:06.377  	[-h help message]
00:10:06.377  	[-q queue depth per core]
00:10:06.377  	[-C for supported workloads, use this value to configure the io vector size to test (default 1)]
00:10:06.377  	[-T number of threads per core]
00:10:06.377  	[-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)]
00:10:06.377  	[-t time in seconds]
00:10:06.377  	[-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor,
00:10:06.377  	[                                       dif_verify, dif_generate, dif_generate_copy]
00:10:06.377  	[-M assign module to the operation, not compatible with accel_assign_opc RPC]
00:10:06.377  	[-l for compress/decompress workloads, name of uncompressed input file]
00:10:06.377  	[-S for crc32c workload, use this seed value (default 0)]
00:10:06.377  	[-P for compare workload, percentage of operations that should miscompare (percent, default 0)]
00:10:06.377  	[-f for fill workload, use this BYTE value (default 255)]
00:10:06.377  	[-x for xor workload, use this number of source buffers (default, minimum: 2)]
00:10:06.377  	[-y verify result if this switch is on]
00:10:06.377  	[-a tasks to allocate per core (default: same value as -q)]
00:10:06.377  		Can be used to spread operations across a wider range of memory.
00:10:06.377   16:53:59	-- common/autotest_common.sh@653 -- # es=1
00:10:06.377   16:53:59	-- common/autotest_common.sh@661 -- # (( es > 128 ))
00:10:06.377   16:53:59	-- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:10:06.377   16:53:59	-- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:10:06.377  
00:10:06.377  real	0m0.060s
00:10:06.377  user	0m0.059s
00:10:06.377  sys	0m0.046s
00:10:06.377   16:53:59	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:10:06.377   16:53:59	-- common/autotest_common.sh@10 -- # set +x
00:10:06.377  ************************************
00:10:06.377  END TEST accel_negative_buffers
00:10:06.377  ************************************
00:10:06.377   16:53:59	-- accel/accel.sh@93 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y
00:10:06.377   16:53:59	-- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']'
00:10:06.377   16:53:59	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:10:06.377   16:53:59	-- common/autotest_common.sh@10 -- # set +x
00:10:06.636  ************************************
00:10:06.636  START TEST accel_crc32c
00:10:06.636  ************************************
00:10:06.636   16:53:59	-- common/autotest_common.sh@1114 -- # accel_test -t 1 -w crc32c -S 32 -y
00:10:06.636   16:53:59	-- accel/accel.sh@16 -- # local accel_opc
00:10:06.636   16:53:59	-- accel/accel.sh@17 -- # local accel_module
00:10:06.636    16:53:59	-- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -S 32 -y
00:10:06.636    16:53:59	-- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y
00:10:06.636     16:53:59	-- accel/accel.sh@12 -- # build_accel_config
00:10:06.636     16:53:59	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:10:06.636     16:53:59	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:10:06.636     16:53:59	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:10:06.636     16:53:59	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:10:06.636     16:53:59	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:10:06.636     16:53:59	-- accel/accel.sh@41 -- # local IFS=,
00:10:06.636     16:53:59	-- accel/accel.sh@42 -- # jq -r .
00:10:06.636  [2024-11-19 16:53:59.273723] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:10:06.636  [2024-11-19 16:53:59.274008] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117670 ]
00:10:06.636  [2024-11-19 16:53:59.433470] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:06.636  [2024-11-19 16:53:59.492653] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:10:08.011   16:54:00	-- accel/accel.sh@18 -- # out='
00:10:08.011  SPDK Configuration:
00:10:08.011  Core mask:      0x1
00:10:08.011  
00:10:08.011  Accel Perf Configuration:
00:10:08.011  Workload Type:  crc32c
00:10:08.011  CRC-32C seed:   32
00:10:08.011  Transfer size:  4096 bytes
00:10:08.011  Vector count    1
00:10:08.011  Module:         software
00:10:08.011  Queue depth:    32
00:10:08.011  Allocate depth: 32
00:10:08.011  # threads/core: 1
00:10:08.011  Run time:       1 seconds
00:10:08.011  Verify:         Yes
00:10:08.011  
00:10:08.012  Running for 1 seconds...
00:10:08.012  
00:10:08.012  Core,Thread             Transfers        Bandwidth           Failed      Miscompares
00:10:08.012  ------------------------------------------------------------------------------------
00:10:08.012  0,0                      446144/s       1742 MiB/s                0                0
00:10:08.012  ====================================================================================
00:10:08.012  Total                    446144/s       1742 MiB/s                0                0'
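The bandwidth column is just rate times transfer size: 446144 transfers/s of 4096 bytes is about 1742 MiB/s, matching both rows. As shell arithmetic:

    # 446144 transfers/s * 4096 B per transfer, in MiB/s
    echo $(( 446144 * 4096 / 1024 / 1024 ))   # -> 1742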
00:10:08.012   16:54:00	-- accel/accel.sh@20 -- # IFS=:
00:10:08.012   16:54:00	-- accel/accel.sh@20 -- # read -r var val
00:10:08.012    16:54:00	-- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y
00:10:08.012    16:54:00	-- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y
00:10:08.012     16:54:00	-- accel/accel.sh@12 -- # build_accel_config
00:10:08.012     16:54:00	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:10:08.012     16:54:00	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:10:08.012     16:54:00	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:10:08.012     16:54:00	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:10:08.012     16:54:00	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:10:08.012     16:54:00	-- accel/accel.sh@41 -- # local IFS=,
00:10:08.012     16:54:00	-- accel/accel.sh@42 -- # jq -r .
00:10:08.012  [2024-11-19 16:54:00.780293] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:10:08.012  [2024-11-19 16:54:00.780558] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117705 ]
00:10:08.271  [2024-11-19 16:54:00.927836] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:08.271  [2024-11-19 16:54:00.994723] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:10:08.271   16:54:01	-- accel/accel.sh@21 -- # val=
00:10:08.271   16:54:01	-- accel/accel.sh@22 -- # case "$var" in
00:10:08.271   16:54:01	-- accel/accel.sh@20 -- # IFS=:
00:10:08.271   16:54:01	-- accel/accel.sh@20 -- # read -r var val
00:10:08.271   16:54:01	-- accel/accel.sh@21 -- # val=
00:10:08.271   16:54:01	-- accel/accel.sh@22 -- # case "$var" in
00:10:08.271   16:54:01	-- accel/accel.sh@20 -- # IFS=:
00:10:08.271   16:54:01	-- accel/accel.sh@20 -- # read -r var val
00:10:08.271   16:54:01	-- accel/accel.sh@21 -- # val=0x1
00:10:08.271   16:54:01	-- accel/accel.sh@22 -- # case "$var" in
00:10:08.271   16:54:01	-- accel/accel.sh@20 -- # IFS=:
00:10:08.271   16:54:01	-- accel/accel.sh@20 -- # read -r var val
00:10:08.271   16:54:01	-- accel/accel.sh@21 -- # val=
00:10:08.271   16:54:01	-- accel/accel.sh@22 -- # case "$var" in
00:10:08.271   16:54:01	-- accel/accel.sh@20 -- # IFS=:
00:10:08.271   16:54:01	-- accel/accel.sh@20 -- # read -r var val
00:10:08.271   16:54:01	-- accel/accel.sh@21 -- # val=
00:10:08.271   16:54:01	-- accel/accel.sh@22 -- # case "$var" in
00:10:08.271   16:54:01	-- accel/accel.sh@20 -- # IFS=:
00:10:08.271   16:54:01	-- accel/accel.sh@20 -- # read -r var val
00:10:08.271   16:54:01	-- accel/accel.sh@21 -- # val=crc32c
00:10:08.271   16:54:01	-- accel/accel.sh@22 -- # case "$var" in
00:10:08.271   16:54:01	-- accel/accel.sh@24 -- # accel_opc=crc32c
00:10:08.271   16:54:01	-- accel/accel.sh@20 -- # IFS=:
00:10:08.271   16:54:01	-- accel/accel.sh@20 -- # read -r var val
00:10:08.271   16:54:01	-- accel/accel.sh@21 -- # val=32
00:10:08.271   16:54:01	-- accel/accel.sh@22 -- # case "$var" in
00:10:08.271   16:54:01	-- accel/accel.sh@20 -- # IFS=:
00:10:08.271   16:54:01	-- accel/accel.sh@20 -- # read -r var val
00:10:08.271   16:54:01	-- accel/accel.sh@21 -- # val='4096 bytes'
00:10:08.271   16:54:01	-- accel/accel.sh@22 -- # case "$var" in
00:10:08.271   16:54:01	-- accel/accel.sh@20 -- # IFS=:
00:10:08.271   16:54:01	-- accel/accel.sh@20 -- # read -r var val
00:10:08.271   16:54:01	-- accel/accel.sh@21 -- # val=
00:10:08.271   16:54:01	-- accel/accel.sh@22 -- # case "$var" in
00:10:08.271   16:54:01	-- accel/accel.sh@20 -- # IFS=:
00:10:08.271   16:54:01	-- accel/accel.sh@20 -- # read -r var val
00:10:08.271   16:54:01	-- accel/accel.sh@21 -- # val=software
00:10:08.271   16:54:01	-- accel/accel.sh@22 -- # case "$var" in
00:10:08.271   16:54:01	-- accel/accel.sh@23 -- # accel_module=software
00:10:08.271   16:54:01	-- accel/accel.sh@20 -- # IFS=:
00:10:08.271   16:54:01	-- accel/accel.sh@20 -- # read -r var val
00:10:08.271   16:54:01	-- accel/accel.sh@21 -- # val=32
00:10:08.271   16:54:01	-- accel/accel.sh@22 -- # case "$var" in
00:10:08.271   16:54:01	-- accel/accel.sh@20 -- # IFS=:
00:10:08.271   16:54:01	-- accel/accel.sh@20 -- # read -r var val
00:10:08.271   16:54:01	-- accel/accel.sh@21 -- # val=32
00:10:08.271   16:54:01	-- accel/accel.sh@22 -- # case "$var" in
00:10:08.271   16:54:01	-- accel/accel.sh@20 -- # IFS=:
00:10:08.271   16:54:01	-- accel/accel.sh@20 -- # read -r var val
00:10:08.271   16:54:01	-- accel/accel.sh@21 -- # val=1
00:10:08.271   16:54:01	-- accel/accel.sh@22 -- # case "$var" in
00:10:08.271   16:54:01	-- accel/accel.sh@20 -- # IFS=:
00:10:08.271   16:54:01	-- accel/accel.sh@20 -- # read -r var val
00:10:08.271   16:54:01	-- accel/accel.sh@21 -- # val='1 seconds'
00:10:08.271   16:54:01	-- accel/accel.sh@22 -- # case "$var" in
00:10:08.271   16:54:01	-- accel/accel.sh@20 -- # IFS=:
00:10:08.271   16:54:01	-- accel/accel.sh@20 -- # read -r var val
00:10:08.271   16:54:01	-- accel/accel.sh@21 -- # val=Yes
00:10:08.271   16:54:01	-- accel/accel.sh@22 -- # case "$var" in
00:10:08.271   16:54:01	-- accel/accel.sh@20 -- # IFS=:
00:10:08.271   16:54:01	-- accel/accel.sh@20 -- # read -r var val
00:10:08.271   16:54:01	-- accel/accel.sh@21 -- # val=
00:10:08.271   16:54:01	-- accel/accel.sh@22 -- # case "$var" in
00:10:08.271   16:54:01	-- accel/accel.sh@20 -- # IFS=:
00:10:08.271   16:54:01	-- accel/accel.sh@20 -- # read -r var val
00:10:08.271   16:54:01	-- accel/accel.sh@21 -- # val=
00:10:08.271   16:54:01	-- accel/accel.sh@22 -- # case "$var" in
00:10:08.271   16:54:01	-- accel/accel.sh@20 -- # IFS=:
00:10:08.271   16:54:01	-- accel/accel.sh@20 -- # read -r var val
00:10:09.650   16:54:02	-- accel/accel.sh@21 -- # val=
00:10:09.650   16:54:02	-- accel/accel.sh@22 -- # case "$var" in
00:10:09.650   16:54:02	-- accel/accel.sh@20 -- # IFS=:
00:10:09.650   16:54:02	-- accel/accel.sh@20 -- # read -r var val
00:10:09.650   16:54:02	-- accel/accel.sh@21 -- # val=
00:10:09.650   16:54:02	-- accel/accel.sh@22 -- # case "$var" in
00:10:09.650   16:54:02	-- accel/accel.sh@20 -- # IFS=:
00:10:09.650   16:54:02	-- accel/accel.sh@20 -- # read -r var val
00:10:09.650   16:54:02	-- accel/accel.sh@21 -- # val=
00:10:09.650   16:54:02	-- accel/accel.sh@22 -- # case "$var" in
00:10:09.650   16:54:02	-- accel/accel.sh@20 -- # IFS=:
00:10:09.650   16:54:02	-- accel/accel.sh@20 -- # read -r var val
00:10:09.650   16:54:02	-- accel/accel.sh@21 -- # val=
00:10:09.650   16:54:02	-- accel/accel.sh@22 -- # case "$var" in
00:10:09.650   16:54:02	-- accel/accel.sh@20 -- # IFS=:
00:10:09.650   16:54:02	-- accel/accel.sh@20 -- # read -r var val
00:10:09.650   16:54:02	-- accel/accel.sh@21 -- # val=
00:10:09.650   16:54:02	-- accel/accel.sh@22 -- # case "$var" in
00:10:09.650   16:54:02	-- accel/accel.sh@20 -- # IFS=:
00:10:09.650   16:54:02	-- accel/accel.sh@20 -- # read -r var val
00:10:09.650   16:54:02	-- accel/accel.sh@21 -- # val=
00:10:09.650   16:54:02	-- accel/accel.sh@22 -- # case "$var" in
00:10:09.650   16:54:02	-- accel/accel.sh@20 -- # IFS=:
00:10:09.650   16:54:02	-- accel/accel.sh@20 -- # read -r var val
00:10:09.650   16:54:02	-- accel/accel.sh@28 -- # [[ -n software ]]
00:10:09.650   16:54:02	-- accel/accel.sh@28 -- # [[ -n crc32c ]]
00:10:09.650   16:54:02	-- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]]
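The wall of val= lines and the three [[ ]] checks above are two halves of one assertion: the report captured into out is split on ':' line by line, the workload and module fields are kept, and the module that actually ran must match the expected_opcs entry recorded earlier. A hedged sketch, with the key-matching arms inferred from the report text:

    # Parse the captured report in $out, then assert the module that ran.
    while IFS=: read -r var val; do
        case "$var" in
            *'Workload Type'*) accel_opc=$(xargs <<< "$val") ;;    # crc32c
            *Module*)          accel_module=$(xargs <<< "$val") ;; # software
        esac
    done <<< "$out"
    [[ -n "$accel_module" && -n "$accel_opc" ]]
    [[ "$accel_module" == "${expected_opcs[$accel_opc]}" ]]        # software == software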
00:10:09.650  
00:10:09.650  real	0m3.041s
00:10:09.650  user	0m2.535s
00:10:09.650  sys	0m0.323s
00:10:09.650  ************************************
00:10:09.650  END TEST accel_crc32c
00:10:09.650   16:54:02	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:10:09.650   16:54:02	-- common/autotest_common.sh@10 -- # set +x
00:10:09.650  ************************************
00:10:09.650   16:54:02	-- accel/accel.sh@94 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2
00:10:09.650   16:54:02	-- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']'
00:10:09.650   16:54:02	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:10:09.650   16:54:02	-- common/autotest_common.sh@10 -- # set +x
00:10:09.650  ************************************
00:10:09.650  START TEST accel_crc32c_C2
00:10:09.650  ************************************
00:10:09.650   16:54:02	-- common/autotest_common.sh@1114 -- # accel_test -t 1 -w crc32c -y -C 2
00:10:09.650   16:54:02	-- accel/accel.sh@16 -- # local accel_opc
00:10:09.650   16:54:02	-- accel/accel.sh@17 -- # local accel_module
00:10:09.650    16:54:02	-- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -y -C 2
00:10:09.650    16:54:02	-- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2
00:10:09.650     16:54:02	-- accel/accel.sh@12 -- # build_accel_config
00:10:09.650     16:54:02	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:10:09.650     16:54:02	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:10:09.650     16:54:02	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:10:09.650     16:54:02	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:10:09.650     16:54:02	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:10:09.650     16:54:02	-- accel/accel.sh@41 -- # local IFS=,
00:10:09.650     16:54:02	-- accel/accel.sh@42 -- # jq -r .
00:10:09.650  [2024-11-19 16:54:02.371771] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:10:09.650  [2024-11-19 16:54:02.372048] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117738 ]
00:10:09.908  [2024-11-19 16:54:02.537647] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:09.908  [2024-11-19 16:54:02.611255] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:10:11.282   16:54:03	-- accel/accel.sh@18 -- # out='
00:10:11.282  SPDK Configuration:
00:10:11.282  Core mask:      0x1
00:10:11.282  
00:10:11.282  Accel Perf Configuration:
00:10:11.282  Workload Type:  crc32c
00:10:11.282  CRC-32C seed:   0
00:10:11.282  Transfer size:  4096 bytes
00:10:11.282  Vector count    2
00:10:11.282  Module:         software
00:10:11.282  Queue depth:    32
00:10:11.282  Allocate depth: 32
00:10:11.282  # threads/core: 1
00:10:11.282  Run time:       1 seconds
00:10:11.282  Verify:         Yes
00:10:11.282  
00:10:11.282  Running for 1 seconds...
00:10:11.282  
00:10:11.282  Core,Thread             Transfers        Bandwidth           Failed      Miscompares
00:10:11.282  ------------------------------------------------------------------------------------
00:10:11.282  0,0                      318336/s       2487 MiB/s                0                0
00:10:11.282  ====================================================================================
00:10:11.282  Total                    318336/s       2487 MiB/s                0                0'
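With a vector count of 2, each transfer carries 2 x 4096 bytes, so 318336 transfers/s works out to about 2487 MiB/s:

    # 318336 transfers/s * 2 vectors * 4096 B each, in MiB/s
    echo $(( 318336 * 2 * 4096 / 1024 / 1024 ))   # -> 2487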
00:10:11.282   16:54:03	-- accel/accel.sh@20 -- # IFS=:
00:10:11.282   16:54:03	-- accel/accel.sh@20 -- # read -r var val
00:10:11.282    16:54:03	-- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2
00:10:11.282    16:54:03	-- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2
00:10:11.282     16:54:03	-- accel/accel.sh@12 -- # build_accel_config
00:10:11.282     16:54:03	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:10:11.282     16:54:03	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:10:11.282     16:54:03	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:10:11.282     16:54:03	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:10:11.282     16:54:03	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:10:11.282     16:54:03	-- accel/accel.sh@41 -- # local IFS=,
00:10:11.282     16:54:03	-- accel/accel.sh@42 -- # jq -r .
00:10:11.282  [2024-11-19 16:54:03.903368] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:10:11.282  [2024-11-19 16:54:03.903631] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117773 ]
00:10:11.282  [2024-11-19 16:54:04.049521] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:11.282  [2024-11-19 16:54:04.119896] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:10:11.541   16:54:04	-- accel/accel.sh@21 -- # val=
00:10:11.541   16:54:04	-- accel/accel.sh@22 -- # case "$var" in
00:10:11.541   16:54:04	-- accel/accel.sh@20 -- # IFS=:
00:10:11.541   16:54:04	-- accel/accel.sh@20 -- # read -r var val
00:10:11.541   16:54:04	-- accel/accel.sh@21 -- # val=
00:10:11.541   16:54:04	-- accel/accel.sh@22 -- # case "$var" in
00:10:11.541   16:54:04	-- accel/accel.sh@20 -- # IFS=:
00:10:11.541   16:54:04	-- accel/accel.sh@20 -- # read -r var val
00:10:11.541   16:54:04	-- accel/accel.sh@21 -- # val=0x1
00:10:11.541   16:54:04	-- accel/accel.sh@22 -- # case "$var" in
00:10:11.541   16:54:04	-- accel/accel.sh@20 -- # IFS=:
00:10:11.541   16:54:04	-- accel/accel.sh@20 -- # read -r var val
00:10:11.541   16:54:04	-- accel/accel.sh@21 -- # val=
00:10:11.541   16:54:04	-- accel/accel.sh@22 -- # case "$var" in
00:10:11.541   16:54:04	-- accel/accel.sh@20 -- # IFS=:
00:10:11.541   16:54:04	-- accel/accel.sh@20 -- # read -r var val
00:10:11.541   16:54:04	-- accel/accel.sh@21 -- # val=
00:10:11.541   16:54:04	-- accel/accel.sh@22 -- # case "$var" in
00:10:11.541   16:54:04	-- accel/accel.sh@20 -- # IFS=:
00:10:11.541   16:54:04	-- accel/accel.sh@20 -- # read -r var val
00:10:11.541   16:54:04	-- accel/accel.sh@21 -- # val=crc32c
00:10:11.541   16:54:04	-- accel/accel.sh@22 -- # case "$var" in
00:10:11.541   16:54:04	-- accel/accel.sh@24 -- # accel_opc=crc32c
00:10:11.541   16:54:04	-- accel/accel.sh@20 -- # IFS=:
00:10:11.541   16:54:04	-- accel/accel.sh@20 -- # read -r var val
00:10:11.541   16:54:04	-- accel/accel.sh@21 -- # val=0
00:10:11.541   16:54:04	-- accel/accel.sh@22 -- # case "$var" in
00:10:11.541   16:54:04	-- accel/accel.sh@20 -- # IFS=:
00:10:11.541   16:54:04	-- accel/accel.sh@20 -- # read -r var val
00:10:11.541   16:54:04	-- accel/accel.sh@21 -- # val='4096 bytes'
00:10:11.541   16:54:04	-- accel/accel.sh@22 -- # case "$var" in
00:10:11.541   16:54:04	-- accel/accel.sh@20 -- # IFS=:
00:10:11.541   16:54:04	-- accel/accel.sh@20 -- # read -r var val
00:10:11.541   16:54:04	-- accel/accel.sh@21 -- # val=
00:10:11.541   16:54:04	-- accel/accel.sh@22 -- # case "$var" in
00:10:11.541   16:54:04	-- accel/accel.sh@20 -- # IFS=:
00:10:11.541   16:54:04	-- accel/accel.sh@20 -- # read -r var val
00:10:11.541   16:54:04	-- accel/accel.sh@21 -- # val=software
00:10:11.541   16:54:04	-- accel/accel.sh@22 -- # case "$var" in
00:10:11.541   16:54:04	-- accel/accel.sh@23 -- # accel_module=software
00:10:11.541   16:54:04	-- accel/accel.sh@20 -- # IFS=:
00:10:11.541   16:54:04	-- accel/accel.sh@20 -- # read -r var val
00:10:11.541   16:54:04	-- accel/accel.sh@21 -- # val=32
00:10:11.541   16:54:04	-- accel/accel.sh@22 -- # case "$var" in
00:10:11.541   16:54:04	-- accel/accel.sh@20 -- # IFS=:
00:10:11.541   16:54:04	-- accel/accel.sh@20 -- # read -r var val
00:10:11.541   16:54:04	-- accel/accel.sh@21 -- # val=32
00:10:11.541   16:54:04	-- accel/accel.sh@22 -- # case "$var" in
00:10:11.541   16:54:04	-- accel/accel.sh@20 -- # IFS=:
00:10:11.541   16:54:04	-- accel/accel.sh@20 -- # read -r var val
00:10:11.541   16:54:04	-- accel/accel.sh@21 -- # val=1
00:10:11.541   16:54:04	-- accel/accel.sh@22 -- # case "$var" in
00:10:11.541   16:54:04	-- accel/accel.sh@20 -- # IFS=:
00:10:11.541   16:54:04	-- accel/accel.sh@20 -- # read -r var val
00:10:11.541   16:54:04	-- accel/accel.sh@21 -- # val='1 seconds'
00:10:11.541   16:54:04	-- accel/accel.sh@22 -- # case "$var" in
00:10:11.541   16:54:04	-- accel/accel.sh@20 -- # IFS=:
00:10:11.541   16:54:04	-- accel/accel.sh@20 -- # read -r var val
00:10:11.541   16:54:04	-- accel/accel.sh@21 -- # val=Yes
00:10:11.541   16:54:04	-- accel/accel.sh@22 -- # case "$var" in
00:10:11.541   16:54:04	-- accel/accel.sh@20 -- # IFS=:
00:10:11.541   16:54:04	-- accel/accel.sh@20 -- # read -r var val
00:10:11.541   16:54:04	-- accel/accel.sh@21 -- # val=
00:10:11.541   16:54:04	-- accel/accel.sh@22 -- # case "$var" in
00:10:11.541   16:54:04	-- accel/accel.sh@20 -- # IFS=:
00:10:11.541   16:54:04	-- accel/accel.sh@20 -- # read -r var val
00:10:11.541   16:54:04	-- accel/accel.sh@21 -- # val=
00:10:11.541   16:54:04	-- accel/accel.sh@22 -- # case "$var" in
00:10:11.541   16:54:04	-- accel/accel.sh@20 -- # IFS=:
00:10:11.541   16:54:04	-- accel/accel.sh@20 -- # read -r var val
00:10:12.919   16:54:05	-- accel/accel.sh@21 -- # val=
00:10:12.919   16:54:05	-- accel/accel.sh@22 -- # case "$var" in
00:10:12.919   16:54:05	-- accel/accel.sh@20 -- # IFS=:
00:10:12.919   16:54:05	-- accel/accel.sh@20 -- # read -r var val
00:10:12.919   16:54:05	-- accel/accel.sh@21 -- # val=
00:10:12.919   16:54:05	-- accel/accel.sh@22 -- # case "$var" in
00:10:12.919   16:54:05	-- accel/accel.sh@20 -- # IFS=:
00:10:12.919   16:54:05	-- accel/accel.sh@20 -- # read -r var val
00:10:12.919   16:54:05	-- accel/accel.sh@21 -- # val=
00:10:12.919   16:54:05	-- accel/accel.sh@22 -- # case "$var" in
00:10:12.919   16:54:05	-- accel/accel.sh@20 -- # IFS=:
00:10:12.919   16:54:05	-- accel/accel.sh@20 -- # read -r var val
00:10:12.919   16:54:05	-- accel/accel.sh@21 -- # val=
00:10:12.919   16:54:05	-- accel/accel.sh@22 -- # case "$var" in
00:10:12.919   16:54:05	-- accel/accel.sh@20 -- # IFS=:
00:10:12.919   16:54:05	-- accel/accel.sh@20 -- # read -r var val
00:10:12.919   16:54:05	-- accel/accel.sh@21 -- # val=
00:10:12.919   16:54:05	-- accel/accel.sh@22 -- # case "$var" in
00:10:12.919   16:54:05	-- accel/accel.sh@20 -- # IFS=:
00:10:12.919   16:54:05	-- accel/accel.sh@20 -- # read -r var val
00:10:12.919   16:54:05	-- accel/accel.sh@21 -- # val=
00:10:12.919   16:54:05	-- accel/accel.sh@22 -- # case "$var" in
00:10:12.919   16:54:05	-- accel/accel.sh@20 -- # IFS=:
00:10:12.919   16:54:05	-- accel/accel.sh@20 -- # read -r var val
00:10:12.919   16:54:05	-- accel/accel.sh@28 -- # [[ -n software ]]
00:10:12.919   16:54:05	-- accel/accel.sh@28 -- # [[ -n crc32c ]]
00:10:12.919   16:54:05	-- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:10:12.919  
00:10:12.919  real	0m3.054s
00:10:12.919  user	0m2.532s
00:10:12.919  sys	0m0.341s
00:10:12.919  ************************************
00:10:12.919  END TEST accel_crc32c_C2
00:10:12.919  ************************************
00:10:12.919   16:54:05	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:10:12.919   16:54:05	-- common/autotest_common.sh@10 -- # set +x
00:10:12.919   16:54:05	-- accel/accel.sh@95 -- # run_test accel_copy accel_test -t 1 -w copy -y
00:10:12.919   16:54:05	-- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']'
00:10:12.919   16:54:05	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:10:12.919   16:54:05	-- common/autotest_common.sh@10 -- # set +x
00:10:12.919  ************************************
00:10:12.919  START TEST accel_copy
00:10:12.919  ************************************
00:10:12.919   16:54:05	-- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy -y
00:10:12.919   16:54:05	-- accel/accel.sh@16 -- # local accel_opc
00:10:12.919   16:54:05	-- accel/accel.sh@17 -- # local accel_module
00:10:12.919    16:54:05	-- accel/accel.sh@18 -- # accel_perf -t 1 -w copy -y
00:10:12.919    16:54:05	-- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y
00:10:12.919     16:54:05	-- accel/accel.sh@12 -- # build_accel_config
00:10:12.919     16:54:05	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:10:12.919     16:54:05	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:10:12.919     16:54:05	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:10:12.919     16:54:05	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:10:12.919     16:54:05	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:10:12.919     16:54:05	-- accel/accel.sh@41 -- # local IFS=,
00:10:12.919     16:54:05	-- accel/accel.sh@42 -- # jq -r .
00:10:12.919  [2024-11-19 16:54:05.490957] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:10:12.919  [2024-11-19 16:54:05.491220] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117806 ]
00:10:12.919  [2024-11-19 16:54:05.646301] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:12.919  [2024-11-19 16:54:05.701885] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:10:14.324   16:54:06	-- accel/accel.sh@18 -- # out='
00:10:14.324  SPDK Configuration:
00:10:14.324  Core mask:      0x1
00:10:14.324  
00:10:14.324  Accel Perf Configuration:
00:10:14.324  Workload Type:  copy
00:10:14.324  Transfer size:  4096 bytes
00:10:14.324  Vector count    1
00:10:14.324  Module:         software
00:10:14.324  Queue depth:    32
00:10:14.324  Allocate depth: 32
00:10:14.324  # threads/core: 1
00:10:14.324  Run time:       1 seconds
00:10:14.324  Verify:         Yes
00:10:14.324  
00:10:14.324  Running for 1 seconds...
00:10:14.324  
00:10:14.324  Core,Thread             Transfers        Bandwidth           Failed      Miscompares
00:10:14.324  ------------------------------------------------------------------------------------
00:10:14.324  0,0                      285152/s       1113 MiB/s                0                0
00:10:14.324  ====================================================================================
00:10:14.324  Total                    285152/s       1113 MiB/s                0                0'
00:10:14.324   16:54:06	-- accel/accel.sh@20 -- # IFS=:
00:10:14.324   16:54:06	-- accel/accel.sh@20 -- # read -r var val
00:10:14.324    16:54:06	-- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y
00:10:14.324     16:54:06	-- accel/accel.sh@12 -- # build_accel_config
00:10:14.324    16:54:06	-- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y
00:10:14.324     16:54:06	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:10:14.324     16:54:06	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:10:14.324     16:54:06	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:10:14.324     16:54:06	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:10:14.324     16:54:06	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:10:14.324     16:54:06	-- accel/accel.sh@41 -- # local IFS=,
00:10:14.324     16:54:06	-- accel/accel.sh@42 -- # jq -r .
00:10:14.324  [2024-11-19 16:54:07.011154] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:10:14.324  [2024-11-19 16:54:07.011411] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117842 ]
00:10:14.324  [2024-11-19 16:54:07.172265] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:14.582  [2024-11-19 16:54:07.261258] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:10:14.582   16:54:07	-- accel/accel.sh@21 -- # val=
00:10:14.582   16:54:07	-- accel/accel.sh@22 -- # case "$var" in
00:10:14.582   16:54:07	-- accel/accel.sh@20 -- # IFS=:
00:10:14.582   16:54:07	-- accel/accel.sh@20 -- # read -r var val
00:10:14.582   16:54:07	-- accel/accel.sh@21 -- # val=
00:10:14.582   16:54:07	-- accel/accel.sh@22 -- # case "$var" in
00:10:14.582   16:54:07	-- accel/accel.sh@20 -- # IFS=:
00:10:14.582   16:54:07	-- accel/accel.sh@20 -- # read -r var val
00:10:14.582   16:54:07	-- accel/accel.sh@21 -- # val=0x1
00:10:14.582   16:54:07	-- accel/accel.sh@22 -- # case "$var" in
00:10:14.582   16:54:07	-- accel/accel.sh@20 -- # IFS=:
00:10:14.582   16:54:07	-- accel/accel.sh@20 -- # read -r var val
00:10:14.582   16:54:07	-- accel/accel.sh@21 -- # val=
00:10:14.582   16:54:07	-- accel/accel.sh@22 -- # case "$var" in
00:10:14.582   16:54:07	-- accel/accel.sh@20 -- # IFS=:
00:10:14.582   16:54:07	-- accel/accel.sh@20 -- # read -r var val
00:10:14.582   16:54:07	-- accel/accel.sh@21 -- # val=
00:10:14.582   16:54:07	-- accel/accel.sh@22 -- # case "$var" in
00:10:14.582   16:54:07	-- accel/accel.sh@20 -- # IFS=:
00:10:14.582   16:54:07	-- accel/accel.sh@20 -- # read -r var val
00:10:14.582   16:54:07	-- accel/accel.sh@21 -- # val=copy
00:10:14.582   16:54:07	-- accel/accel.sh@22 -- # case "$var" in
00:10:14.582   16:54:07	-- accel/accel.sh@24 -- # accel_opc=copy
00:10:14.582   16:54:07	-- accel/accel.sh@20 -- # IFS=:
00:10:14.582   16:54:07	-- accel/accel.sh@20 -- # read -r var val
00:10:14.582   16:54:07	-- accel/accel.sh@21 -- # val='4096 bytes'
00:10:14.582   16:54:07	-- accel/accel.sh@22 -- # case "$var" in
00:10:14.582   16:54:07	-- accel/accel.sh@20 -- # IFS=:
00:10:14.582   16:54:07	-- accel/accel.sh@20 -- # read -r var val
00:10:14.582   16:54:07	-- accel/accel.sh@21 -- # val=
00:10:14.582   16:54:07	-- accel/accel.sh@22 -- # case "$var" in
00:10:14.582   16:54:07	-- accel/accel.sh@20 -- # IFS=:
00:10:14.582   16:54:07	-- accel/accel.sh@20 -- # read -r var val
00:10:14.582   16:54:07	-- accel/accel.sh@21 -- # val=software
00:10:14.582   16:54:07	-- accel/accel.sh@22 -- # case "$var" in
00:10:14.582   16:54:07	-- accel/accel.sh@23 -- # accel_module=software
00:10:14.582   16:54:07	-- accel/accel.sh@20 -- # IFS=:
00:10:14.582   16:54:07	-- accel/accel.sh@20 -- # read -r var val
00:10:14.582   16:54:07	-- accel/accel.sh@21 -- # val=32
00:10:14.582   16:54:07	-- accel/accel.sh@22 -- # case "$var" in
00:10:14.582   16:54:07	-- accel/accel.sh@20 -- # IFS=:
00:10:14.582   16:54:07	-- accel/accel.sh@20 -- # read -r var val
00:10:14.582   16:54:07	-- accel/accel.sh@21 -- # val=32
00:10:14.582   16:54:07	-- accel/accel.sh@22 -- # case "$var" in
00:10:14.582   16:54:07	-- accel/accel.sh@20 -- # IFS=:
00:10:14.582   16:54:07	-- accel/accel.sh@20 -- # read -r var val
00:10:14.582   16:54:07	-- accel/accel.sh@21 -- # val=1
00:10:14.582   16:54:07	-- accel/accel.sh@22 -- # case "$var" in
00:10:14.582   16:54:07	-- accel/accel.sh@20 -- # IFS=:
00:10:14.582   16:54:07	-- accel/accel.sh@20 -- # read -r var val
00:10:14.582   16:54:07	-- accel/accel.sh@21 -- # val='1 seconds'
00:10:14.582   16:54:07	-- accel/accel.sh@22 -- # case "$var" in
00:10:14.582   16:54:07	-- accel/accel.sh@20 -- # IFS=:
00:10:14.582   16:54:07	-- accel/accel.sh@20 -- # read -r var val
00:10:14.582   16:54:07	-- accel/accel.sh@21 -- # val=Yes
00:10:14.582   16:54:07	-- accel/accel.sh@22 -- # case "$var" in
00:10:14.582   16:54:07	-- accel/accel.sh@20 -- # IFS=:
00:10:14.582   16:54:07	-- accel/accel.sh@20 -- # read -r var val
00:10:14.582   16:54:07	-- accel/accel.sh@21 -- # val=
00:10:14.582   16:54:07	-- accel/accel.sh@22 -- # case "$var" in
00:10:14.582   16:54:07	-- accel/accel.sh@20 -- # IFS=:
00:10:14.582   16:54:07	-- accel/accel.sh@20 -- # read -r var val
00:10:14.582   16:54:07	-- accel/accel.sh@21 -- # val=
00:10:14.582   16:54:07	-- accel/accel.sh@22 -- # case "$var" in
00:10:14.582   16:54:07	-- accel/accel.sh@20 -- # IFS=:
00:10:14.582   16:54:07	-- accel/accel.sh@20 -- # read -r var val
00:10:15.961   16:54:08	-- accel/accel.sh@21 -- # val=
00:10:15.961   16:54:08	-- accel/accel.sh@22 -- # case "$var" in
00:10:15.961   16:54:08	-- accel/accel.sh@20 -- # IFS=:
00:10:15.961   16:54:08	-- accel/accel.sh@20 -- # read -r var val
00:10:15.961   16:54:08	-- accel/accel.sh@21 -- # val=
00:10:15.961   16:54:08	-- accel/accel.sh@22 -- # case "$var" in
00:10:15.961   16:54:08	-- accel/accel.sh@20 -- # IFS=:
00:10:15.961   16:54:08	-- accel/accel.sh@20 -- # read -r var val
00:10:15.961   16:54:08	-- accel/accel.sh@21 -- # val=
00:10:15.961   16:54:08	-- accel/accel.sh@22 -- # case "$var" in
00:10:15.961   16:54:08	-- accel/accel.sh@20 -- # IFS=:
00:10:15.961   16:54:08	-- accel/accel.sh@20 -- # read -r var val
00:10:15.961   16:54:08	-- accel/accel.sh@21 -- # val=
00:10:15.961   16:54:08	-- accel/accel.sh@22 -- # case "$var" in
00:10:15.961   16:54:08	-- accel/accel.sh@20 -- # IFS=:
00:10:15.961   16:54:08	-- accel/accel.sh@20 -- # read -r var val
00:10:15.961   16:54:08	-- accel/accel.sh@21 -- # val=
00:10:15.961   16:54:08	-- accel/accel.sh@22 -- # case "$var" in
00:10:15.961   16:54:08	-- accel/accel.sh@20 -- # IFS=:
00:10:15.961   16:54:08	-- accel/accel.sh@20 -- # read -r var val
00:10:15.961   16:54:08	-- accel/accel.sh@21 -- # val=
00:10:15.961   16:54:08	-- accel/accel.sh@22 -- # case "$var" in
00:10:15.961   16:54:08	-- accel/accel.sh@20 -- # IFS=:
00:10:15.961   16:54:08	-- accel/accel.sh@20 -- # read -r var val
00:10:15.961   16:54:08	-- accel/accel.sh@28 -- # [[ -n software ]]
00:10:15.961   16:54:08	-- accel/accel.sh@28 -- # [[ -n copy ]]
00:10:15.961   16:54:08	-- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:10:15.961  
00:10:15.961  real	0m3.236s
00:10:15.961  user	0m2.693s
00:10:15.961  sys	0m0.352s
00:10:15.961   16:54:08	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:10:15.961   16:54:08	-- common/autotest_common.sh@10 -- # set +x
00:10:15.961  ************************************
00:10:15.961  END TEST accel_copy
00:10:15.961  ************************************
00:10:15.961   16:54:08	-- accel/accel.sh@96 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y
00:10:15.961   16:54:08	-- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']'
00:10:15.961   16:54:08	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:10:15.961   16:54:08	-- common/autotest_common.sh@10 -- # set +x
00:10:15.961  ************************************
00:10:15.961  START TEST accel_fill
00:10:15.961  ************************************
00:10:15.961   16:54:08	-- common/autotest_common.sh@1114 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y
00:10:15.961   16:54:08	-- accel/accel.sh@16 -- # local accel_opc
00:10:15.961   16:54:08	-- accel/accel.sh@17 -- # local accel_module
00:10:15.961    16:54:08	-- accel/accel.sh@18 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y
00:10:15.961    16:54:08	-- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y
00:10:15.961     16:54:08	-- accel/accel.sh@12 -- # build_accel_config
00:10:15.961     16:54:08	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:10:15.961     16:54:08	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:10:15.961     16:54:08	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:10:15.961     16:54:08	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:10:15.961     16:54:08	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:10:15.961     16:54:08	-- accel/accel.sh@41 -- # local IFS=,
00:10:15.961     16:54:08	-- accel/accel.sh@42 -- # jq -r .
00:10:15.961  [2024-11-19 16:54:08.779889] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:10:15.961  [2024-11-19 16:54:08.780161] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117883 ]
00:10:16.220  [2024-11-19 16:54:08.931224] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:16.220  [2024-11-19 16:54:08.998840] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:10:17.602   16:54:10	-- accel/accel.sh@18 -- # out='
00:10:17.602  SPDK Configuration:
00:10:17.602  Core mask:      0x1
00:10:17.602  
00:10:17.602  Accel Perf Configuration:
00:10:17.602  Workload Type:  fill
00:10:17.602  Fill pattern:   0x80
00:10:17.602  Transfer size:  4096 bytes
00:10:17.602  Vector count    1
00:10:17.602  Module:         software
00:10:17.602  Queue depth:    64
00:10:17.602  Allocate depth: 64
00:10:17.602  # threads/core: 1
00:10:17.602  Run time:       1 seconds
00:10:17.602  Verify:         Yes
00:10:17.602  
00:10:17.602  Running for 1 seconds...
00:10:17.602  
00:10:17.602  Core,Thread             Transfers        Bandwidth           Failed      Miscompares
00:10:17.602  ------------------------------------------------------------------------------------
00:10:17.602  0,0                      541248/s       2114 MiB/s                0                0
00:10:17.602  ====================================================================================
00:10:17.602  Total                    541248/s       2114 MiB/s                0                0'
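For the fill run, the printed configuration maps straight back onto the run_test flags: -f 128 becomes the 0x80 fill pattern, and -q 64 / -a 64 set the queue and allocate depths. For instance:

    # 128 decimal is the 0x80 fill byte shown in the configuration
    printf 'Fill pattern:   0x%02x\n' 128   # -> Fill pattern:   0x80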
00:10:17.602   16:54:10	-- accel/accel.sh@20 -- # IFS=:
00:10:17.602   16:54:10	-- accel/accel.sh@20 -- # read -r var val
00:10:17.602    16:54:10	-- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y
00:10:17.602    16:54:10	-- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y
00:10:17.602     16:54:10	-- accel/accel.sh@12 -- # build_accel_config
00:10:17.602     16:54:10	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:10:17.602     16:54:10	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:10:17.602     16:54:10	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:10:17.602     16:54:10	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:10:17.602     16:54:10	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:10:17.602     16:54:10	-- accel/accel.sh@41 -- # local IFS=,
00:10:17.602     16:54:10	-- accel/accel.sh@42 -- # jq -r .
00:10:17.602  [2024-11-19 16:54:10.338977] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:10:17.602  [2024-11-19 16:54:10.339275] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117911 ]
00:10:17.861  [2024-11-19 16:54:10.496926] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:17.861  [2024-11-19 16:54:10.561964] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:10:17.861   16:54:10	-- accel/accel.sh@21 -- # val=
00:10:17.861   16:54:10	-- accel/accel.sh@22 -- # case "$var" in
00:10:17.861   16:54:10	-- accel/accel.sh@20 -- # IFS=:
00:10:17.861   16:54:10	-- accel/accel.sh@20 -- # read -r var val
00:10:17.861   16:54:10	-- accel/accel.sh@21 -- # val=
00:10:17.861   16:54:10	-- accel/accel.sh@22 -- # case "$var" in
00:10:17.861   16:54:10	-- accel/accel.sh@20 -- # IFS=:
00:10:17.861   16:54:10	-- accel/accel.sh@20 -- # read -r var val
00:10:17.861   16:54:10	-- accel/accel.sh@21 -- # val=0x1
00:10:17.861   16:54:10	-- accel/accel.sh@22 -- # case "$var" in
00:10:17.861   16:54:10	-- accel/accel.sh@20 -- # IFS=:
00:10:17.861   16:54:10	-- accel/accel.sh@20 -- # read -r var val
00:10:17.861   16:54:10	-- accel/accel.sh@21 -- # val=
00:10:17.861   16:54:10	-- accel/accel.sh@22 -- # case "$var" in
00:10:17.861   16:54:10	-- accel/accel.sh@20 -- # IFS=:
00:10:17.861   16:54:10	-- accel/accel.sh@20 -- # read -r var val
00:10:17.861   16:54:10	-- accel/accel.sh@21 -- # val=
00:10:17.861   16:54:10	-- accel/accel.sh@22 -- # case "$var" in
00:10:17.861   16:54:10	-- accel/accel.sh@20 -- # IFS=:
00:10:17.861   16:54:10	-- accel/accel.sh@20 -- # read -r var val
00:10:17.861   16:54:10	-- accel/accel.sh@21 -- # val=fill
00:10:17.861   16:54:10	-- accel/accel.sh@22 -- # case "$var" in
00:10:17.861   16:54:10	-- accel/accel.sh@24 -- # accel_opc=fill
00:10:17.861   16:54:10	-- accel/accel.sh@20 -- # IFS=:
00:10:17.861   16:54:10	-- accel/accel.sh@20 -- # read -r var val
00:10:17.861   16:54:10	-- accel/accel.sh@21 -- # val=0x80
00:10:17.861   16:54:10	-- accel/accel.sh@22 -- # case "$var" in
00:10:17.861   16:54:10	-- accel/accel.sh@20 -- # IFS=:
00:10:17.861   16:54:10	-- accel/accel.sh@20 -- # read -r var val
00:10:17.861   16:54:10	-- accel/accel.sh@21 -- # val='4096 bytes'
00:10:17.861   16:54:10	-- accel/accel.sh@22 -- # case "$var" in
00:10:17.861   16:54:10	-- accel/accel.sh@20 -- # IFS=:
00:10:17.861   16:54:10	-- accel/accel.sh@20 -- # read -r var val
00:10:17.861   16:54:10	-- accel/accel.sh@21 -- # val=
00:10:17.861   16:54:10	-- accel/accel.sh@22 -- # case "$var" in
00:10:17.861   16:54:10	-- accel/accel.sh@20 -- # IFS=:
00:10:17.861   16:54:10	-- accel/accel.sh@20 -- # read -r var val
00:10:17.861   16:54:10	-- accel/accel.sh@21 -- # val=software
00:10:17.861   16:54:10	-- accel/accel.sh@22 -- # case "$var" in
00:10:17.861   16:54:10	-- accel/accel.sh@23 -- # accel_module=software
00:10:17.861   16:54:10	-- accel/accel.sh@20 -- # IFS=:
00:10:17.861   16:54:10	-- accel/accel.sh@20 -- # read -r var val
00:10:17.861   16:54:10	-- accel/accel.sh@21 -- # val=64
00:10:17.861   16:54:10	-- accel/accel.sh@22 -- # case "$var" in
00:10:17.861   16:54:10	-- accel/accel.sh@20 -- # IFS=:
00:10:17.861   16:54:10	-- accel/accel.sh@20 -- # read -r var val
00:10:17.861   16:54:10	-- accel/accel.sh@21 -- # val=64
00:10:17.861   16:54:10	-- accel/accel.sh@22 -- # case "$var" in
00:10:17.861   16:54:10	-- accel/accel.sh@20 -- # IFS=:
00:10:17.861   16:54:10	-- accel/accel.sh@20 -- # read -r var val
00:10:17.861   16:54:10	-- accel/accel.sh@21 -- # val=1
00:10:17.861   16:54:10	-- accel/accel.sh@22 -- # case "$var" in
00:10:17.861   16:54:10	-- accel/accel.sh@20 -- # IFS=:
00:10:17.861   16:54:10	-- accel/accel.sh@20 -- # read -r var val
00:10:17.861   16:54:10	-- accel/accel.sh@21 -- # val='1 seconds'
00:10:17.861   16:54:10	-- accel/accel.sh@22 -- # case "$var" in
00:10:17.861   16:54:10	-- accel/accel.sh@20 -- # IFS=:
00:10:17.861   16:54:10	-- accel/accel.sh@20 -- # read -r var val
00:10:17.861   16:54:10	-- accel/accel.sh@21 -- # val=Yes
00:10:17.861   16:54:10	-- accel/accel.sh@22 -- # case "$var" in
00:10:17.861   16:54:10	-- accel/accel.sh@20 -- # IFS=:
00:10:17.861   16:54:10	-- accel/accel.sh@20 -- # read -r var val
00:10:17.861   16:54:10	-- accel/accel.sh@21 -- # val=
00:10:17.861   16:54:10	-- accel/accel.sh@22 -- # case "$var" in
00:10:17.861   16:54:10	-- accel/accel.sh@20 -- # IFS=:
00:10:17.861   16:54:10	-- accel/accel.sh@20 -- # read -r var val
00:10:17.861   16:54:10	-- accel/accel.sh@21 -- # val=
00:10:17.861   16:54:10	-- accel/accel.sh@22 -- # case "$var" in
00:10:17.861   16:54:10	-- accel/accel.sh@20 -- # IFS=:
00:10:17.861   16:54:10	-- accel/accel.sh@20 -- # read -r var val
00:10:19.236   16:54:11	-- accel/accel.sh@21 -- # val=
00:10:19.236   16:54:11	-- accel/accel.sh@22 -- # case "$var" in
00:10:19.236   16:54:11	-- accel/accel.sh@20 -- # IFS=:
00:10:19.236   16:54:11	-- accel/accel.sh@20 -- # read -r var val
00:10:19.236   16:54:11	-- accel/accel.sh@21 -- # val=
00:10:19.236   16:54:11	-- accel/accel.sh@22 -- # case "$var" in
00:10:19.236   16:54:11	-- accel/accel.sh@20 -- # IFS=:
00:10:19.236   16:54:11	-- accel/accel.sh@20 -- # read -r var val
00:10:19.236   16:54:11	-- accel/accel.sh@21 -- # val=
00:10:19.236   16:54:11	-- accel/accel.sh@22 -- # case "$var" in
00:10:19.236   16:54:11	-- accel/accel.sh@20 -- # IFS=:
00:10:19.236   16:54:11	-- accel/accel.sh@20 -- # read -r var val
00:10:19.236   16:54:11	-- accel/accel.sh@21 -- # val=
00:10:19.236   16:54:11	-- accel/accel.sh@22 -- # case "$var" in
00:10:19.236   16:54:11	-- accel/accel.sh@20 -- # IFS=:
00:10:19.236   16:54:11	-- accel/accel.sh@20 -- # read -r var val
00:10:19.236   16:54:11	-- accel/accel.sh@21 -- # val=
00:10:19.236   16:54:11	-- accel/accel.sh@22 -- # case "$var" in
00:10:19.236   16:54:11	-- accel/accel.sh@20 -- # IFS=:
00:10:19.236   16:54:11	-- accel/accel.sh@20 -- # read -r var val
00:10:19.236   16:54:11	-- accel/accel.sh@21 -- # val=
00:10:19.236   16:54:11	-- accel/accel.sh@22 -- # case "$var" in
00:10:19.236   16:54:11	-- accel/accel.sh@20 -- # IFS=:
00:10:19.236   16:54:11	-- accel/accel.sh@20 -- # read -r var val
00:10:19.236   16:54:11	-- accel/accel.sh@28 -- # [[ -n software ]]
00:10:19.236   16:54:11	-- accel/accel.sh@28 -- # [[ -n fill ]]
00:10:19.236   16:54:11	-- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:10:19.236  
00:10:19.236  real	0m3.096s
00:10:19.236  user	0m2.550s
00:10:19.236  sys	0m0.365s
00:10:19.236  ************************************
00:10:19.236  END TEST accel_fill
00:10:19.236  ************************************
00:10:19.236   16:54:11	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:10:19.236   16:54:11	-- common/autotest_common.sh@10 -- # set +x
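
(A quick sanity check on the fill run above, assuming the Bandwidth column is simply
Transfers × Transfer size: the -f 128 argument is the decimal fill byte, which
accel_perf reports back as pattern 0x80.)

  # hypothetical one-liner, not part of accel.sh: cross-check the reported MiB/s
  transfers=541248; size=4096
  echo "$(( transfers * size / 1024 / 1024 )) MiB/s"   # prints "2114 MiB/s", matching the table
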
00:10:19.236   16:54:11	-- accel/accel.sh@97 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y
00:10:19.236   16:54:11	-- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']'
00:10:19.236   16:54:11	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:10:19.236   16:54:11	-- common/autotest_common.sh@10 -- # set +x
00:10:19.236  ************************************
00:10:19.236  START TEST accel_copy_crc32c
00:10:19.236  ************************************
00:10:19.236   16:54:11	-- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy_crc32c -y
00:10:19.236   16:54:11	-- accel/accel.sh@16 -- # local accel_opc
00:10:19.236   16:54:11	-- accel/accel.sh@17 -- # local accel_module
00:10:19.236    16:54:11	-- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y
00:10:19.236    16:54:11	-- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y
00:10:19.236     16:54:11	-- accel/accel.sh@12 -- # build_accel_config
00:10:19.236     16:54:11	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:10:19.236     16:54:11	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:10:19.236     16:54:11	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:10:19.236     16:54:11	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:10:19.236     16:54:11	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:10:19.236     16:54:11	-- accel/accel.sh@41 -- # local IFS=,
00:10:19.236     16:54:11	-- accel/accel.sh@42 -- # jq -r .
00:10:19.236  [2024-11-19 16:54:11.939394] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:10:19.236  [2024-11-19 16:54:11.939720] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117951 ]
00:10:19.494  [2024-11-19 16:54:12.101484] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:19.494  [2024-11-19 16:54:12.153257] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:10:20.870   16:54:13	-- accel/accel.sh@18 -- # out='
00:10:20.870  SPDK Configuration:
00:10:20.870  Core mask:      0x1
00:10:20.870  
00:10:20.870  Accel Perf Configuration:
00:10:20.870  Workload Type:  copy_crc32c
00:10:20.870  CRC-32C seed:   0
00:10:20.870  Vector size:    4096 bytes
00:10:20.870  Transfer size:  4096 bytes
00:10:20.870  Vector count    1
00:10:20.870  Module:         software
00:10:20.870  Queue depth:    32
00:10:20.870  Allocate depth: 32
00:10:20.870  # threads/core: 1
00:10:20.870  Run time:       1 seconds
00:10:20.870  Verify:         Yes
00:10:20.870  
00:10:20.870  Running for 1 seconds...
00:10:20.870  
00:10:20.870  Core,Thread             Transfers        Bandwidth           Failed      Miscompares
00:10:20.870  ------------------------------------------------------------------------------------
00:10:20.870  0,0                      229728/s        897 MiB/s                0                0
00:10:20.870  ====================================================================================
00:10:20.870  Total                    229728/s        897 MiB/s                0                0'
00:10:20.870   16:54:13	-- accel/accel.sh@20 -- # IFS=:
00:10:20.870   16:54:13	-- accel/accel.sh@20 -- # read -r var val
00:10:20.870    16:54:13	-- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y
00:10:20.870    16:54:13	-- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y
00:10:20.870     16:54:13	-- accel/accel.sh@12 -- # build_accel_config
00:10:20.870     16:54:13	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:10:20.870     16:54:13	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:10:20.870     16:54:13	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:10:20.870     16:54:13	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:10:20.870     16:54:13	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:10:20.870     16:54:13	-- accel/accel.sh@41 -- # local IFS=,
00:10:20.870     16:54:13	-- accel/accel.sh@42 -- # jq -r .
00:10:20.870  [2024-11-19 16:54:13.439513] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:10:20.870  [2024-11-19 16:54:13.439802] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117975 ]
00:10:20.870  [2024-11-19 16:54:13.597439] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:20.870  [2024-11-19 16:54:13.666929] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:10:20.870   16:54:13	-- accel/accel.sh@21 -- # val=
00:10:20.870   16:54:13	-- accel/accel.sh@22 -- # case "$var" in
00:10:20.870   16:54:13	-- accel/accel.sh@20 -- # IFS=:
00:10:20.870   16:54:13	-- accel/accel.sh@20 -- # read -r var val
00:10:20.870   16:54:13	-- accel/accel.sh@21 -- # val=
00:10:20.870   16:54:13	-- accel/accel.sh@22 -- # case "$var" in
00:10:20.870   16:54:13	-- accel/accel.sh@20 -- # IFS=:
00:10:20.870   16:54:13	-- accel/accel.sh@20 -- # read -r var val
00:10:20.870   16:54:13	-- accel/accel.sh@21 -- # val=0x1
00:10:20.871   16:54:13	-- accel/accel.sh@22 -- # case "$var" in
00:10:20.871   16:54:13	-- accel/accel.sh@20 -- # IFS=:
00:10:20.871   16:54:13	-- accel/accel.sh@20 -- # read -r var val
00:10:20.871   16:54:13	-- accel/accel.sh@21 -- # val=
00:10:20.871   16:54:13	-- accel/accel.sh@22 -- # case "$var" in
00:10:20.871   16:54:13	-- accel/accel.sh@20 -- # IFS=:
00:10:20.871   16:54:13	-- accel/accel.sh@20 -- # read -r var val
00:10:20.871   16:54:13	-- accel/accel.sh@21 -- # val=
00:10:20.871   16:54:13	-- accel/accel.sh@22 -- # case "$var" in
00:10:20.871   16:54:13	-- accel/accel.sh@20 -- # IFS=:
00:10:21.130   16:54:13	-- accel/accel.sh@20 -- # read -r var val
00:10:21.130   16:54:13	-- accel/accel.sh@21 -- # val=copy_crc32c
00:10:21.130   16:54:13	-- accel/accel.sh@22 -- # case "$var" in
00:10:21.130   16:54:13	-- accel/accel.sh@24 -- # accel_opc=copy_crc32c
00:10:21.130   16:54:13	-- accel/accel.sh@20 -- # IFS=:
00:10:21.130   16:54:13	-- accel/accel.sh@20 -- # read -r var val
00:10:21.130   16:54:13	-- accel/accel.sh@21 -- # val=0
00:10:21.130   16:54:13	-- accel/accel.sh@22 -- # case "$var" in
00:10:21.130   16:54:13	-- accel/accel.sh@20 -- # IFS=:
00:10:21.130   16:54:13	-- accel/accel.sh@20 -- # read -r var val
00:10:21.130   16:54:13	-- accel/accel.sh@21 -- # val='4096 bytes'
00:10:21.130   16:54:13	-- accel/accel.sh@22 -- # case "$var" in
00:10:21.130   16:54:13	-- accel/accel.sh@20 -- # IFS=:
00:10:21.130   16:54:13	-- accel/accel.sh@20 -- # read -r var val
00:10:21.130   16:54:13	-- accel/accel.sh@21 -- # val='4096 bytes'
00:10:21.130   16:54:13	-- accel/accel.sh@22 -- # case "$var" in
00:10:21.130   16:54:13	-- accel/accel.sh@20 -- # IFS=:
00:10:21.130   16:54:13	-- accel/accel.sh@20 -- # read -r var val
00:10:21.130   16:54:13	-- accel/accel.sh@21 -- # val=
00:10:21.130   16:54:13	-- accel/accel.sh@22 -- # case "$var" in
00:10:21.130   16:54:13	-- accel/accel.sh@20 -- # IFS=:
00:10:21.130   16:54:13	-- accel/accel.sh@20 -- # read -r var val
00:10:21.130   16:54:13	-- accel/accel.sh@21 -- # val=software
00:10:21.130   16:54:13	-- accel/accel.sh@22 -- # case "$var" in
00:10:21.130   16:54:13	-- accel/accel.sh@23 -- # accel_module=software
00:10:21.130   16:54:13	-- accel/accel.sh@20 -- # IFS=:
00:10:21.130   16:54:13	-- accel/accel.sh@20 -- # read -r var val
00:10:21.130   16:54:13	-- accel/accel.sh@21 -- # val=32
00:10:21.130   16:54:13	-- accel/accel.sh@22 -- # case "$var" in
00:10:21.130   16:54:13	-- accel/accel.sh@20 -- # IFS=:
00:10:21.130   16:54:13	-- accel/accel.sh@20 -- # read -r var val
00:10:21.130   16:54:13	-- accel/accel.sh@21 -- # val=32
00:10:21.130   16:54:13	-- accel/accel.sh@22 -- # case "$var" in
00:10:21.130   16:54:13	-- accel/accel.sh@20 -- # IFS=:
00:10:21.130   16:54:13	-- accel/accel.sh@20 -- # read -r var val
00:10:21.130   16:54:13	-- accel/accel.sh@21 -- # val=1
00:10:21.130   16:54:13	-- accel/accel.sh@22 -- # case "$var" in
00:10:21.130   16:54:13	-- accel/accel.sh@20 -- # IFS=:
00:10:21.130   16:54:13	-- accel/accel.sh@20 -- # read -r var val
00:10:21.130   16:54:13	-- accel/accel.sh@21 -- # val='1 seconds'
00:10:21.130   16:54:13	-- accel/accel.sh@22 -- # case "$var" in
00:10:21.130   16:54:13	-- accel/accel.sh@20 -- # IFS=:
00:10:21.130   16:54:13	-- accel/accel.sh@20 -- # read -r var val
00:10:21.130   16:54:13	-- accel/accel.sh@21 -- # val=Yes
00:10:21.130   16:54:13	-- accel/accel.sh@22 -- # case "$var" in
00:10:21.130   16:54:13	-- accel/accel.sh@20 -- # IFS=:
00:10:21.130   16:54:13	-- accel/accel.sh@20 -- # read -r var val
00:10:21.130   16:54:13	-- accel/accel.sh@21 -- # val=
00:10:21.130   16:54:13	-- accel/accel.sh@22 -- # case "$var" in
00:10:21.130   16:54:13	-- accel/accel.sh@20 -- # IFS=:
00:10:21.130   16:54:13	-- accel/accel.sh@20 -- # read -r var val
00:10:21.130   16:54:13	-- accel/accel.sh@21 -- # val=
00:10:21.130   16:54:13	-- accel/accel.sh@22 -- # case "$var" in
00:10:21.130   16:54:13	-- accel/accel.sh@20 -- # IFS=:
00:10:21.130   16:54:13	-- accel/accel.sh@20 -- # read -r var val
00:10:22.506   16:54:14	-- accel/accel.sh@21 -- # val=
00:10:22.506   16:54:14	-- accel/accel.sh@22 -- # case "$var" in
00:10:22.506   16:54:14	-- accel/accel.sh@20 -- # IFS=:
00:10:22.506   16:54:14	-- accel/accel.sh@20 -- # read -r var val
00:10:22.506   16:54:14	-- accel/accel.sh@21 -- # val=
00:10:22.506   16:54:14	-- accel/accel.sh@22 -- # case "$var" in
00:10:22.506   16:54:14	-- accel/accel.sh@20 -- # IFS=:
00:10:22.506   16:54:14	-- accel/accel.sh@20 -- # read -r var val
00:10:22.506   16:54:14	-- accel/accel.sh@21 -- # val=
00:10:22.506   16:54:14	-- accel/accel.sh@22 -- # case "$var" in
00:10:22.506   16:54:14	-- accel/accel.sh@20 -- # IFS=:
00:10:22.506   16:54:14	-- accel/accel.sh@20 -- # read -r var val
00:10:22.506   16:54:14	-- accel/accel.sh@21 -- # val=
00:10:22.506   16:54:14	-- accel/accel.sh@22 -- # case "$var" in
00:10:22.506   16:54:14	-- accel/accel.sh@20 -- # IFS=:
00:10:22.507   16:54:14	-- accel/accel.sh@20 -- # read -r var val
00:10:22.507   16:54:14	-- accel/accel.sh@21 -- # val=
00:10:22.507   16:54:14	-- accel/accel.sh@22 -- # case "$var" in
00:10:22.507   16:54:14	-- accel/accel.sh@20 -- # IFS=:
00:10:22.507   16:54:14	-- accel/accel.sh@20 -- # read -r var val
00:10:22.507   16:54:14	-- accel/accel.sh@21 -- # val=
00:10:22.507   16:54:14	-- accel/accel.sh@22 -- # case "$var" in
00:10:22.507   16:54:14	-- accel/accel.sh@20 -- # IFS=:
00:10:22.507   16:54:14	-- accel/accel.sh@20 -- # read -r var val
00:10:22.507   16:54:14	-- accel/accel.sh@28 -- # [[ -n software ]]
00:10:22.507   16:54:14	-- accel/accel.sh@28 -- # [[ -n copy_crc32c ]]
00:10:22.507   16:54:14	-- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:10:22.507  
00:10:22.507  real	0m3.050s
00:10:22.507  user	0m2.570s
00:10:22.507  sys	0m0.307s
00:10:22.507   16:54:14	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:10:22.507   16:54:14	-- common/autotest_common.sh@10 -- # set +x
00:10:22.507  ************************************
00:10:22.507  END TEST accel_copy_crc32c
00:10:22.507  ************************************
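
(The long runs of val=/case/IFS lines in each test are the harness stepping through
accel_perf's configuration dump one "Key: value" pair at a time. A minimal sketch of
that loop, assuming the same layout as the dumps above; the real parser in accel.sh
keys off more fields than these two:)

  # sketch: pull "Workload Type:" and "Module:" out of the captured $out
  while IFS=: read -r var val; do
      case "$var" in
          'Workload Type') accel_opc=${val//[[:space:]]/} ;;    # e.g. copy_crc32c
          'Module')        accel_module=${val//[[:space:]]/} ;; # e.g. software
      esac
  done <<< "$out"
  # afterwards the harness asserts on them, as in the @28 checks above:
  [[ -n $accel_module ]] && [[ -n $accel_opc ]] && [[ $accel_module == software ]]
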
00:10:22.507   16:54:14	-- accel/accel.sh@98 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2
00:10:22.507   16:54:14	-- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']'
00:10:22.507   16:54:14	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:10:22.507   16:54:14	-- common/autotest_common.sh@10 -- # set +x
00:10:22.507  ************************************
00:10:22.507  START TEST accel_copy_crc32c_C2
00:10:22.507  ************************************
00:10:22.507   16:54:14	-- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy_crc32c -y -C 2
00:10:22.507   16:54:14	-- accel/accel.sh@16 -- # local accel_opc
00:10:22.507   16:54:14	-- accel/accel.sh@17 -- # local accel_module
00:10:22.507    16:54:14	-- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y -C 2
00:10:22.507    16:54:14	-- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2
00:10:22.507     16:54:14	-- accel/accel.sh@12 -- # build_accel_config
00:10:22.507     16:54:14	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:10:22.507     16:54:14	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:10:22.507     16:54:15	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:10:22.507     16:54:15	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:10:22.507     16:54:15	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:10:22.507     16:54:15	-- accel/accel.sh@41 -- # local IFS=,
00:10:22.507     16:54:15	-- accel/accel.sh@42 -- # jq -r .
00:10:22.507  [2024-11-19 16:54:15.030741] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:10:22.507  [2024-11-19 16:54:15.030996] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118019 ]
00:10:22.507  [2024-11-19 16:54:15.182533] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:22.507  [2024-11-19 16:54:15.263473] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:10:23.906   16:54:16	-- accel/accel.sh@18 -- # out='
00:10:23.906  SPDK Configuration:
00:10:23.906  Core mask:      0x1
00:10:23.906  
00:10:23.906  Accel Perf Configuration:
00:10:23.906  Workload Type:  copy_crc32c
00:10:23.906  CRC-32C seed:   0
00:10:23.906  Vector size:    4096 bytes
00:10:23.906  Transfer size:  8192 bytes
00:10:23.906  Vector count    2
00:10:23.906  Module:         software
00:10:23.906  Queue depth:    32
00:10:23.906  Allocate depth: 32
00:10:23.906  # threads/core: 1
00:10:23.906  Run time:       1 seconds
00:10:23.906  Verify:         Yes
00:10:23.906  
00:10:23.906  Running for 1 seconds...
00:10:23.906  
00:10:23.906  Core,Thread             Transfers        Bandwidth           Failed      Miscompares
00:10:23.906  ------------------------------------------------------------------------------------
00:10:23.906  0,0                      160928/s       1257 MiB/s                0                0
00:10:23.906  ====================================================================================
00:10:23.906  Total                    160928/s       1257 MiB/s                0                0'
00:10:23.906   16:54:16	-- accel/accel.sh@20 -- # IFS=:
00:10:23.906   16:54:16	-- accel/accel.sh@20 -- # read -r var val
00:10:23.906    16:54:16	-- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2
00:10:23.906    16:54:16	-- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2
00:10:23.906     16:54:16	-- accel/accel.sh@12 -- # build_accel_config
00:10:23.906     16:54:16	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:10:23.906     16:54:16	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:10:23.906     16:54:16	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:10:23.906     16:54:16	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:10:23.906     16:54:16	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:10:23.906     16:54:16	-- accel/accel.sh@41 -- # local IFS=,
00:10:23.906     16:54:16	-- accel/accel.sh@42 -- # jq -r .
00:10:23.906  [2024-11-19 16:54:16.651699] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:10:23.906  [2024-11-19 16:54:16.651992] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118044 ]
00:10:24.164  [2024-11-19 16:54:16.809352] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:24.164  [2024-11-19 16:54:16.886758] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:10:24.164   16:54:16	-- accel/accel.sh@21 -- # val=
00:10:24.164   16:54:16	-- accel/accel.sh@22 -- # case "$var" in
00:10:24.164   16:54:16	-- accel/accel.sh@20 -- # IFS=:
00:10:24.164   16:54:16	-- accel/accel.sh@20 -- # read -r var val
00:10:24.164   16:54:16	-- accel/accel.sh@21 -- # val=
00:10:24.164   16:54:16	-- accel/accel.sh@22 -- # case "$var" in
00:10:24.164   16:54:16	-- accel/accel.sh@20 -- # IFS=:
00:10:24.164   16:54:16	-- accel/accel.sh@20 -- # read -r var val
00:10:24.164   16:54:16	-- accel/accel.sh@21 -- # val=0x1
00:10:24.164   16:54:16	-- accel/accel.sh@22 -- # case "$var" in
00:10:24.164   16:54:16	-- accel/accel.sh@20 -- # IFS=:
00:10:24.164   16:54:16	-- accel/accel.sh@20 -- # read -r var val
00:10:24.164   16:54:16	-- accel/accel.sh@21 -- # val=
00:10:24.164   16:54:16	-- accel/accel.sh@22 -- # case "$var" in
00:10:24.164   16:54:16	-- accel/accel.sh@20 -- # IFS=:
00:10:24.164   16:54:16	-- accel/accel.sh@20 -- # read -r var val
00:10:24.164   16:54:16	-- accel/accel.sh@21 -- # val=
00:10:24.164   16:54:16	-- accel/accel.sh@22 -- # case "$var" in
00:10:24.164   16:54:16	-- accel/accel.sh@20 -- # IFS=:
00:10:24.164   16:54:16	-- accel/accel.sh@20 -- # read -r var val
00:10:24.164   16:54:16	-- accel/accel.sh@21 -- # val=copy_crc32c
00:10:24.164   16:54:16	-- accel/accel.sh@22 -- # case "$var" in
00:10:24.164   16:54:16	-- accel/accel.sh@24 -- # accel_opc=copy_crc32c
00:10:24.164   16:54:16	-- accel/accel.sh@20 -- # IFS=:
00:10:24.164   16:54:16	-- accel/accel.sh@20 -- # read -r var val
00:10:24.164   16:54:16	-- accel/accel.sh@21 -- # val=0
00:10:24.164   16:54:16	-- accel/accel.sh@22 -- # case "$var" in
00:10:24.164   16:54:16	-- accel/accel.sh@20 -- # IFS=:
00:10:24.164   16:54:16	-- accel/accel.sh@20 -- # read -r var val
00:10:24.164   16:54:16	-- accel/accel.sh@21 -- # val='4096 bytes'
00:10:24.164   16:54:16	-- accel/accel.sh@22 -- # case "$var" in
00:10:24.165   16:54:16	-- accel/accel.sh@20 -- # IFS=:
00:10:24.165   16:54:16	-- accel/accel.sh@20 -- # read -r var val
00:10:24.165   16:54:16	-- accel/accel.sh@21 -- # val='8192 bytes'
00:10:24.165   16:54:16	-- accel/accel.sh@22 -- # case "$var" in
00:10:24.165   16:54:16	-- accel/accel.sh@20 -- # IFS=:
00:10:24.165   16:54:16	-- accel/accel.sh@20 -- # read -r var val
00:10:24.165   16:54:16	-- accel/accel.sh@21 -- # val=
00:10:24.165   16:54:16	-- accel/accel.sh@22 -- # case "$var" in
00:10:24.165   16:54:16	-- accel/accel.sh@20 -- # IFS=:
00:10:24.165   16:54:16	-- accel/accel.sh@20 -- # read -r var val
00:10:24.165   16:54:16	-- accel/accel.sh@21 -- # val=software
00:10:24.165   16:54:16	-- accel/accel.sh@22 -- # case "$var" in
00:10:24.165   16:54:16	-- accel/accel.sh@23 -- # accel_module=software
00:10:24.165   16:54:16	-- accel/accel.sh@20 -- # IFS=:
00:10:24.165   16:54:16	-- accel/accel.sh@20 -- # read -r var val
00:10:24.165   16:54:16	-- accel/accel.sh@21 -- # val=32
00:10:24.165   16:54:16	-- accel/accel.sh@22 -- # case "$var" in
00:10:24.165   16:54:16	-- accel/accel.sh@20 -- # IFS=:
00:10:24.165   16:54:16	-- accel/accel.sh@20 -- # read -r var val
00:10:24.165   16:54:16	-- accel/accel.sh@21 -- # val=32
00:10:24.165   16:54:16	-- accel/accel.sh@22 -- # case "$var" in
00:10:24.165   16:54:16	-- accel/accel.sh@20 -- # IFS=:
00:10:24.165   16:54:16	-- accel/accel.sh@20 -- # read -r var val
00:10:24.165   16:54:16	-- accel/accel.sh@21 -- # val=1
00:10:24.165   16:54:16	-- accel/accel.sh@22 -- # case "$var" in
00:10:24.165   16:54:16	-- accel/accel.sh@20 -- # IFS=:
00:10:24.165   16:54:16	-- accel/accel.sh@20 -- # read -r var val
00:10:24.165   16:54:16	-- accel/accel.sh@21 -- # val='1 seconds'
00:10:24.165   16:54:16	-- accel/accel.sh@22 -- # case "$var" in
00:10:24.165   16:54:16	-- accel/accel.sh@20 -- # IFS=:
00:10:24.165   16:54:16	-- accel/accel.sh@20 -- # read -r var val
00:10:24.165   16:54:16	-- accel/accel.sh@21 -- # val=Yes
00:10:24.165   16:54:16	-- accel/accel.sh@22 -- # case "$var" in
00:10:24.165   16:54:16	-- accel/accel.sh@20 -- # IFS=:
00:10:24.165   16:54:16	-- accel/accel.sh@20 -- # read -r var val
00:10:24.165   16:54:16	-- accel/accel.sh@21 -- # val=
00:10:24.165   16:54:16	-- accel/accel.sh@22 -- # case "$var" in
00:10:24.165   16:54:16	-- accel/accel.sh@20 -- # IFS=:
00:10:24.165   16:54:16	-- accel/accel.sh@20 -- # read -r var val
00:10:24.165   16:54:16	-- accel/accel.sh@21 -- # val=
00:10:24.165   16:54:16	-- accel/accel.sh@22 -- # case "$var" in
00:10:24.165   16:54:16	-- accel/accel.sh@20 -- # IFS=:
00:10:24.165   16:54:16	-- accel/accel.sh@20 -- # read -r var val
00:10:25.565   16:54:18	-- accel/accel.sh@21 -- # val=
00:10:25.565   16:54:18	-- accel/accel.sh@22 -- # case "$var" in
00:10:25.565   16:54:18	-- accel/accel.sh@20 -- # IFS=:
00:10:25.565   16:54:18	-- accel/accel.sh@20 -- # read -r var val
00:10:25.565   16:54:18	-- accel/accel.sh@21 -- # val=
00:10:25.565   16:54:18	-- accel/accel.sh@22 -- # case "$var" in
00:10:25.565   16:54:18	-- accel/accel.sh@20 -- # IFS=:
00:10:25.565   16:54:18	-- accel/accel.sh@20 -- # read -r var val
00:10:25.565   16:54:18	-- accel/accel.sh@21 -- # val=
00:10:25.565   16:54:18	-- accel/accel.sh@22 -- # case "$var" in
00:10:25.565   16:54:18	-- accel/accel.sh@20 -- # IFS=:
00:10:25.565   16:54:18	-- accel/accel.sh@20 -- # read -r var val
00:10:25.565   16:54:18	-- accel/accel.sh@21 -- # val=
00:10:25.565   16:54:18	-- accel/accel.sh@22 -- # case "$var" in
00:10:25.565   16:54:18	-- accel/accel.sh@20 -- # IFS=:
00:10:25.565   16:54:18	-- accel/accel.sh@20 -- # read -r var val
00:10:25.565   16:54:18	-- accel/accel.sh@21 -- # val=
00:10:25.565   16:54:18	-- accel/accel.sh@22 -- # case "$var" in
00:10:25.565   16:54:18	-- accel/accel.sh@20 -- # IFS=:
00:10:25.565   16:54:18	-- accel/accel.sh@20 -- # read -r var val
00:10:25.565   16:54:18	-- accel/accel.sh@21 -- # val=
00:10:25.565   16:54:18	-- accel/accel.sh@22 -- # case "$var" in
00:10:25.565   16:54:18	-- accel/accel.sh@20 -- # IFS=:
00:10:25.565   16:54:18	-- accel/accel.sh@20 -- # read -r var val
00:10:25.565   16:54:18	-- accel/accel.sh@28 -- # [[ -n software ]]
00:10:25.565   16:54:18	-- accel/accel.sh@28 -- # [[ -n copy_crc32c ]]
00:10:25.565  ************************************
00:10:25.565  END TEST accel_copy_crc32c_C2
00:10:25.565  ************************************
00:10:25.565   16:54:18	-- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:10:25.565  
00:10:25.565  real	0m3.208s
00:10:25.565  user	0m2.578s
00:10:25.565  sys	0m0.425s
00:10:25.565   16:54:18	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:10:25.565   16:54:18	-- common/autotest_common.sh@10 -- # set +x
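
(With -C 2 each copy_crc32c operation moves two 4096-byte vectors, so a transfer is
8192 bytes and the per-core and total rows should agree:)

  # 160928 transfers/s × 8192 bytes ≈ 1257 MiB/s, the value shown in both rows above
  echo "$(( 160928 * 8192 / 1024 / 1024 )) MiB/s"
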
00:10:25.565   16:54:18	-- accel/accel.sh@99 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y
00:10:25.565   16:54:18	-- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']'
00:10:25.565   16:54:18	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:10:25.565   16:54:18	-- common/autotest_common.sh@10 -- # set +x
00:10:25.565  ************************************
00:10:25.565  START TEST accel_dualcast
00:10:25.565  ************************************
00:10:25.565   16:54:18	-- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dualcast -y
00:10:25.565   16:54:18	-- accel/accel.sh@16 -- # local accel_opc
00:10:25.565   16:54:18	-- accel/accel.sh@17 -- # local accel_module
00:10:25.565    16:54:18	-- accel/accel.sh@18 -- # accel_perf -t 1 -w dualcast -y
00:10:25.565    16:54:18	-- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y
00:10:25.565     16:54:18	-- accel/accel.sh@12 -- # build_accel_config
00:10:25.565     16:54:18	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:10:25.565     16:54:18	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:10:25.565     16:54:18	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:10:25.565     16:54:18	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:10:25.565     16:54:18	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:10:25.565     16:54:18	-- accel/accel.sh@41 -- # local IFS=,
00:10:25.565     16:54:18	-- accel/accel.sh@42 -- # jq -r .
00:10:25.565  [2024-11-19 16:54:18.302952] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:10:25.565  [2024-11-19 16:54:18.303208] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118089 ]
00:10:25.878  [2024-11-19 16:54:18.454599] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:25.878  [2024-11-19 16:54:18.515668] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:10:27.255   16:54:19	-- accel/accel.sh@18 -- # out='
00:10:27.255  SPDK Configuration:
00:10:27.255  Core mask:      0x1
00:10:27.255  
00:10:27.255  Accel Perf Configuration:
00:10:27.255  Workload Type:  dualcast
00:10:27.255  Transfer size:  4096 bytes
00:10:27.255  Vector count    1
00:10:27.255  Module:         software
00:10:27.255  Queue depth:    32
00:10:27.255  Allocate depth: 32
00:10:27.255  # threads/core: 1
00:10:27.255  Run time:       1 seconds
00:10:27.255  Verify:         Yes
00:10:27.255  
00:10:27.255  Running for 1 seconds...
00:10:27.255  
00:10:27.255  Core,Thread             Transfers        Bandwidth           Failed      Miscompares
00:10:27.255  ------------------------------------------------------------------------------------
00:10:27.255  0,0                      341472/s       1333 MiB/s                0                0
00:10:27.255  ====================================================================================
00:10:27.255  Total                    341472/s       1333 MiB/s                0                0'
00:10:27.255   16:54:19	-- accel/accel.sh@20 -- # IFS=:
00:10:27.255   16:54:19	-- accel/accel.sh@20 -- # read -r var val
00:10:27.255    16:54:19	-- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y
00:10:27.255    16:54:19	-- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y
00:10:27.255     16:54:19	-- accel/accel.sh@12 -- # build_accel_config
00:10:27.255     16:54:19	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:10:27.255     16:54:19	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:10:27.255     16:54:19	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:10:27.255     16:54:19	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:10:27.255     16:54:19	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:10:27.255     16:54:19	-- accel/accel.sh@41 -- # local IFS=,
00:10:27.255     16:54:19	-- accel/accel.sh@42 -- # jq -r .
00:10:27.255  [2024-11-19 16:54:19.817446] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:10:27.255  [2024-11-19 16:54:19.817749] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118112 ]
00:10:27.255  [2024-11-19 16:54:19.976974] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:27.255  [2024-11-19 16:54:20.058270] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:10:27.514   16:54:20	-- accel/accel.sh@21 -- # val=
00:10:27.514   16:54:20	-- accel/accel.sh@22 -- # case "$var" in
00:10:27.514   16:54:20	-- accel/accel.sh@20 -- # IFS=:
00:10:27.514   16:54:20	-- accel/accel.sh@20 -- # read -r var val
00:10:27.514   16:54:20	-- accel/accel.sh@21 -- # val=
00:10:27.514   16:54:20	-- accel/accel.sh@22 -- # case "$var" in
00:10:27.514   16:54:20	-- accel/accel.sh@20 -- # IFS=:
00:10:27.514   16:54:20	-- accel/accel.sh@20 -- # read -r var val
00:10:27.514   16:54:20	-- accel/accel.sh@21 -- # val=0x1
00:10:27.514   16:54:20	-- accel/accel.sh@22 -- # case "$var" in
00:10:27.514   16:54:20	-- accel/accel.sh@20 -- # IFS=:
00:10:27.514   16:54:20	-- accel/accel.sh@20 -- # read -r var val
00:10:27.514   16:54:20	-- accel/accel.sh@21 -- # val=
00:10:27.514   16:54:20	-- accel/accel.sh@22 -- # case "$var" in
00:10:27.514   16:54:20	-- accel/accel.sh@20 -- # IFS=:
00:10:27.514   16:54:20	-- accel/accel.sh@20 -- # read -r var val
00:10:27.514   16:54:20	-- accel/accel.sh@21 -- # val=
00:10:27.514   16:54:20	-- accel/accel.sh@22 -- # case "$var" in
00:10:27.514   16:54:20	-- accel/accel.sh@20 -- # IFS=:
00:10:27.514   16:54:20	-- accel/accel.sh@20 -- # read -r var val
00:10:27.514   16:54:20	-- accel/accel.sh@21 -- # val=dualcast
00:10:27.514   16:54:20	-- accel/accel.sh@22 -- # case "$var" in
00:10:27.514   16:54:20	-- accel/accel.sh@24 -- # accel_opc=dualcast
00:10:27.514   16:54:20	-- accel/accel.sh@20 -- # IFS=:
00:10:27.514   16:54:20	-- accel/accel.sh@20 -- # read -r var val
00:10:27.514   16:54:20	-- accel/accel.sh@21 -- # val='4096 bytes'
00:10:27.514   16:54:20	-- accel/accel.sh@22 -- # case "$var" in
00:10:27.514   16:54:20	-- accel/accel.sh@20 -- # IFS=:
00:10:27.514   16:54:20	-- accel/accel.sh@20 -- # read -r var val
00:10:27.514   16:54:20	-- accel/accel.sh@21 -- # val=
00:10:27.514   16:54:20	-- accel/accel.sh@22 -- # case "$var" in
00:10:27.514   16:54:20	-- accel/accel.sh@20 -- # IFS=:
00:10:27.514   16:54:20	-- accel/accel.sh@20 -- # read -r var val
00:10:27.514   16:54:20	-- accel/accel.sh@21 -- # val=software
00:10:27.514   16:54:20	-- accel/accel.sh@22 -- # case "$var" in
00:10:27.514   16:54:20	-- accel/accel.sh@23 -- # accel_module=software
00:10:27.514   16:54:20	-- accel/accel.sh@20 -- # IFS=:
00:10:27.514   16:54:20	-- accel/accel.sh@20 -- # read -r var val
00:10:27.514   16:54:20	-- accel/accel.sh@21 -- # val=32
00:10:27.514   16:54:20	-- accel/accel.sh@22 -- # case "$var" in
00:10:27.514   16:54:20	-- accel/accel.sh@20 -- # IFS=:
00:10:27.514   16:54:20	-- accel/accel.sh@20 -- # read -r var val
00:10:27.514   16:54:20	-- accel/accel.sh@21 -- # val=32
00:10:27.514   16:54:20	-- accel/accel.sh@22 -- # case "$var" in
00:10:27.514   16:54:20	-- accel/accel.sh@20 -- # IFS=:
00:10:27.514   16:54:20	-- accel/accel.sh@20 -- # read -r var val
00:10:27.514   16:54:20	-- accel/accel.sh@21 -- # val=1
00:10:27.514   16:54:20	-- accel/accel.sh@22 -- # case "$var" in
00:10:27.514   16:54:20	-- accel/accel.sh@20 -- # IFS=:
00:10:27.514   16:54:20	-- accel/accel.sh@20 -- # read -r var val
00:10:27.514   16:54:20	-- accel/accel.sh@21 -- # val='1 seconds'
00:10:27.514   16:54:20	-- accel/accel.sh@22 -- # case "$var" in
00:10:27.514   16:54:20	-- accel/accel.sh@20 -- # IFS=:
00:10:27.514   16:54:20	-- accel/accel.sh@20 -- # read -r var val
00:10:27.514   16:54:20	-- accel/accel.sh@21 -- # val=Yes
00:10:27.514   16:54:20	-- accel/accel.sh@22 -- # case "$var" in
00:10:27.514   16:54:20	-- accel/accel.sh@20 -- # IFS=:
00:10:27.514   16:54:20	-- accel/accel.sh@20 -- # read -r var val
00:10:27.514   16:54:20	-- accel/accel.sh@21 -- # val=
00:10:27.514   16:54:20	-- accel/accel.sh@22 -- # case "$var" in
00:10:27.514   16:54:20	-- accel/accel.sh@20 -- # IFS=:
00:10:27.514   16:54:20	-- accel/accel.sh@20 -- # read -r var val
00:10:27.514   16:54:20	-- accel/accel.sh@21 -- # val=
00:10:27.514   16:54:20	-- accel/accel.sh@22 -- # case "$var" in
00:10:27.514   16:54:20	-- accel/accel.sh@20 -- # IFS=:
00:10:27.514   16:54:20	-- accel/accel.sh@20 -- # read -r var val
00:10:28.890   16:54:21	-- accel/accel.sh@21 -- # val=
00:10:28.890   16:54:21	-- accel/accel.sh@22 -- # case "$var" in
00:10:28.890   16:54:21	-- accel/accel.sh@20 -- # IFS=:
00:10:28.890   16:54:21	-- accel/accel.sh@20 -- # read -r var val
00:10:28.890   16:54:21	-- accel/accel.sh@21 -- # val=
00:10:28.890   16:54:21	-- accel/accel.sh@22 -- # case "$var" in
00:10:28.890   16:54:21	-- accel/accel.sh@20 -- # IFS=:
00:10:28.890   16:54:21	-- accel/accel.sh@20 -- # read -r var val
00:10:28.890   16:54:21	-- accel/accel.sh@21 -- # val=
00:10:28.890   16:54:21	-- accel/accel.sh@22 -- # case "$var" in
00:10:28.890   16:54:21	-- accel/accel.sh@20 -- # IFS=:
00:10:28.890   16:54:21	-- accel/accel.sh@20 -- # read -r var val
00:10:28.890   16:54:21	-- accel/accel.sh@21 -- # val=
00:10:28.890   16:54:21	-- accel/accel.sh@22 -- # case "$var" in
00:10:28.890   16:54:21	-- accel/accel.sh@20 -- # IFS=:
00:10:28.890   16:54:21	-- accel/accel.sh@20 -- # read -r var val
00:10:28.890   16:54:21	-- accel/accel.sh@21 -- # val=
00:10:28.890   16:54:21	-- accel/accel.sh@22 -- # case "$var" in
00:10:28.890   16:54:21	-- accel/accel.sh@20 -- # IFS=:
00:10:28.890   16:54:21	-- accel/accel.sh@20 -- # read -r var val
00:10:28.890   16:54:21	-- accel/accel.sh@21 -- # val=
00:10:28.890   16:54:21	-- accel/accel.sh@22 -- # case "$var" in
00:10:28.890   16:54:21	-- accel/accel.sh@20 -- # IFS=:
00:10:28.890   16:54:21	-- accel/accel.sh@20 -- # read -r var val
00:10:28.890   16:54:21	-- accel/accel.sh@28 -- # [[ -n software ]]
00:10:28.890   16:54:21	-- accel/accel.sh@28 -- # [[ -n dualcast ]]
00:10:28.890   16:54:21	-- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:10:28.890  
00:10:28.890  real	0m3.075s
00:10:28.890  user	0m2.559s
00:10:28.890  sys	0m0.314s
00:10:28.890  ************************************
00:10:28.890  END TEST accel_dualcast
00:10:28.890  ************************************
00:10:28.890   16:54:21	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:10:28.890   16:54:21	-- common/autotest_common.sh@10 -- # set +x
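
(Every run here goes through the accel_test wrapper, but accel_perf can also be
launched by hand. A hypothetical direct invocation of the dualcast case: the harness
only uses -c /dev/fd/62 to inject accel-module overrides, and none are set in these
runs, so the software module is selected either way.)

  # sketch: -t run time in seconds, -w workload, -y verify results (per the
  # "Verify: Yes" line in the dumps above); queue/allocate depths default to 32
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w dualcast -y
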
00:10:28.890   16:54:21	-- accel/accel.sh@100 -- # run_test accel_compare accel_test -t 1 -w compare -y
00:10:28.890   16:54:21	-- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']'
00:10:28.890   16:54:21	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:10:28.890   16:54:21	-- common/autotest_common.sh@10 -- # set +x
00:10:28.890  ************************************
00:10:28.890  START TEST accel_compare
00:10:28.890  ************************************
00:10:28.890   16:54:21	-- common/autotest_common.sh@1114 -- # accel_test -t 1 -w compare -y
00:10:28.890   16:54:21	-- accel/accel.sh@16 -- # local accel_opc
00:10:28.890   16:54:21	-- accel/accel.sh@17 -- # local accel_module
00:10:28.890    16:54:21	-- accel/accel.sh@18 -- # accel_perf -t 1 -w compare -y
00:10:28.890    16:54:21	-- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y
00:10:28.890     16:54:21	-- accel/accel.sh@12 -- # build_accel_config
00:10:28.890     16:54:21	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:10:28.890     16:54:21	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:10:28.890     16:54:21	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:10:28.890     16:54:21	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:10:28.890     16:54:21	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:10:28.890     16:54:21	-- accel/accel.sh@41 -- # local IFS=,
00:10:28.890     16:54:21	-- accel/accel.sh@42 -- # jq -r .
00:10:28.890  [2024-11-19 16:54:21.444366] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:10:28.890  [2024-11-19 16:54:21.444657] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118157 ]
00:10:28.890  [2024-11-19 16:54:21.605004] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:28.890  [2024-11-19 16:54:21.686372] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:10:30.265   16:54:23	-- accel/accel.sh@18 -- # out='
00:10:30.265  SPDK Configuration:
00:10:30.265  Core mask:      0x1
00:10:30.265  
00:10:30.265  Accel Perf Configuration:
00:10:30.265  Workload Type:  compare
00:10:30.265  Transfer size:  4096 bytes
00:10:30.265  Vector count    1
00:10:30.265  Module:         software
00:10:30.265  Queue depth:    32
00:10:30.265  Allocate depth: 32
00:10:30.265  # threads/core: 1
00:10:30.265  Run time:       1 seconds
00:10:30.265  Verify:         Yes
00:10:30.265  
00:10:30.265  Running for 1 seconds...
00:10:30.265  
00:10:30.265  Core,Thread             Transfers        Bandwidth           Failed      Miscompares
00:10:30.265  ------------------------------------------------------------------------------------
00:10:30.265  0,0                      434464/s       1697 MiB/s                0                0
00:10:30.265  ====================================================================================
00:10:30.265  Total                    434464/s       1697 MiB/s                0                0'
00:10:30.265   16:54:23	-- accel/accel.sh@20 -- # IFS=:
00:10:30.265   16:54:23	-- accel/accel.sh@20 -- # read -r var val
00:10:30.265    16:54:23	-- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y
00:10:30.265    16:54:23	-- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y
00:10:30.265     16:54:23	-- accel/accel.sh@12 -- # build_accel_config
00:10:30.265     16:54:23	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:10:30.265     16:54:23	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:10:30.265     16:54:23	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:10:30.265     16:54:23	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:10:30.265     16:54:23	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:10:30.265     16:54:23	-- accel/accel.sh@41 -- # local IFS=,
00:10:30.265     16:54:23	-- accel/accel.sh@42 -- # jq -r .
00:10:30.524  [2024-11-19 16:54:23.128006] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:10:30.524  [2024-11-19 16:54:23.128295] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118187 ]
00:10:30.524  [2024-11-19 16:54:23.287307] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:30.524  [2024-11-19 16:54:23.383197] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:10:30.782   16:54:23	-- accel/accel.sh@21 -- # val=
00:10:30.782   16:54:23	-- accel/accel.sh@22 -- # case "$var" in
00:10:30.782   16:54:23	-- accel/accel.sh@20 -- # IFS=:
00:10:30.782   16:54:23	-- accel/accel.sh@20 -- # read -r var val
00:10:30.782   16:54:23	-- accel/accel.sh@21 -- # val=
00:10:30.782   16:54:23	-- accel/accel.sh@22 -- # case "$var" in
00:10:30.782   16:54:23	-- accel/accel.sh@20 -- # IFS=:
00:10:30.782   16:54:23	-- accel/accel.sh@20 -- # read -r var val
00:10:30.782   16:54:23	-- accel/accel.sh@21 -- # val=0x1
00:10:30.782   16:54:23	-- accel/accel.sh@22 -- # case "$var" in
00:10:30.782   16:54:23	-- accel/accel.sh@20 -- # IFS=:
00:10:30.782   16:54:23	-- accel/accel.sh@20 -- # read -r var val
00:10:30.782   16:54:23	-- accel/accel.sh@21 -- # val=
00:10:30.782   16:54:23	-- accel/accel.sh@22 -- # case "$var" in
00:10:30.782   16:54:23	-- accel/accel.sh@20 -- # IFS=:
00:10:30.782   16:54:23	-- accel/accel.sh@20 -- # read -r var val
00:10:30.782   16:54:23	-- accel/accel.sh@21 -- # val=
00:10:30.782   16:54:23	-- accel/accel.sh@22 -- # case "$var" in
00:10:30.782   16:54:23	-- accel/accel.sh@20 -- # IFS=:
00:10:30.782   16:54:23	-- accel/accel.sh@20 -- # read -r var val
00:10:30.782   16:54:23	-- accel/accel.sh@21 -- # val=compare
00:10:30.782   16:54:23	-- accel/accel.sh@22 -- # case "$var" in
00:10:30.782   16:54:23	-- accel/accel.sh@24 -- # accel_opc=compare
00:10:30.782   16:54:23	-- accel/accel.sh@20 -- # IFS=:
00:10:30.782   16:54:23	-- accel/accel.sh@20 -- # read -r var val
00:10:30.782   16:54:23	-- accel/accel.sh@21 -- # val='4096 bytes'
00:10:30.782   16:54:23	-- accel/accel.sh@22 -- # case "$var" in
00:10:30.782   16:54:23	-- accel/accel.sh@20 -- # IFS=:
00:10:30.782   16:54:23	-- accel/accel.sh@20 -- # read -r var val
00:10:30.782   16:54:23	-- accel/accel.sh@21 -- # val=
00:10:30.782   16:54:23	-- accel/accel.sh@22 -- # case "$var" in
00:10:30.782   16:54:23	-- accel/accel.sh@20 -- # IFS=:
00:10:30.782   16:54:23	-- accel/accel.sh@20 -- # read -r var val
00:10:30.782   16:54:23	-- accel/accel.sh@21 -- # val=software
00:10:30.782   16:54:23	-- accel/accel.sh@22 -- # case "$var" in
00:10:30.782   16:54:23	-- accel/accel.sh@23 -- # accel_module=software
00:10:30.782   16:54:23	-- accel/accel.sh@20 -- # IFS=:
00:10:30.782   16:54:23	-- accel/accel.sh@20 -- # read -r var val
00:10:30.782   16:54:23	-- accel/accel.sh@21 -- # val=32
00:10:30.782   16:54:23	-- accel/accel.sh@22 -- # case "$var" in
00:10:30.782   16:54:23	-- accel/accel.sh@20 -- # IFS=:
00:10:30.782   16:54:23	-- accel/accel.sh@20 -- # read -r var val
00:10:30.782   16:54:23	-- accel/accel.sh@21 -- # val=32
00:10:30.782   16:54:23	-- accel/accel.sh@22 -- # case "$var" in
00:10:30.782   16:54:23	-- accel/accel.sh@20 -- # IFS=:
00:10:30.782   16:54:23	-- accel/accel.sh@20 -- # read -r var val
00:10:30.782   16:54:23	-- accel/accel.sh@21 -- # val=1
00:10:30.782   16:54:23	-- accel/accel.sh@22 -- # case "$var" in
00:10:30.782   16:54:23	-- accel/accel.sh@20 -- # IFS=:
00:10:30.782   16:54:23	-- accel/accel.sh@20 -- # read -r var val
00:10:30.782   16:54:23	-- accel/accel.sh@21 -- # val='1 seconds'
00:10:30.782   16:54:23	-- accel/accel.sh@22 -- # case "$var" in
00:10:30.782   16:54:23	-- accel/accel.sh@20 -- # IFS=:
00:10:30.782   16:54:23	-- accel/accel.sh@20 -- # read -r var val
00:10:30.782   16:54:23	-- accel/accel.sh@21 -- # val=Yes
00:10:30.782   16:54:23	-- accel/accel.sh@22 -- # case "$var" in
00:10:30.782   16:54:23	-- accel/accel.sh@20 -- # IFS=:
00:10:30.782   16:54:23	-- accel/accel.sh@20 -- # read -r var val
00:10:30.782   16:54:23	-- accel/accel.sh@21 -- # val=
00:10:30.782   16:54:23	-- accel/accel.sh@22 -- # case "$var" in
00:10:30.782   16:54:23	-- accel/accel.sh@20 -- # IFS=:
00:10:30.782   16:54:23	-- accel/accel.sh@20 -- # read -r var val
00:10:30.782   16:54:23	-- accel/accel.sh@21 -- # val=
00:10:30.782   16:54:23	-- accel/accel.sh@22 -- # case "$var" in
00:10:30.782   16:54:23	-- accel/accel.sh@20 -- # IFS=:
00:10:30.782   16:54:23	-- accel/accel.sh@20 -- # read -r var val
00:10:32.157   16:54:24	-- accel/accel.sh@21 -- # val=
00:10:32.157   16:54:24	-- accel/accel.sh@22 -- # case "$var" in
00:10:32.157   16:54:24	-- accel/accel.sh@20 -- # IFS=:
00:10:32.157   16:54:24	-- accel/accel.sh@20 -- # read -r var val
00:10:32.157   16:54:24	-- accel/accel.sh@21 -- # val=
00:10:32.157   16:54:24	-- accel/accel.sh@22 -- # case "$var" in
00:10:32.157   16:54:24	-- accel/accel.sh@20 -- # IFS=:
00:10:32.157   16:54:24	-- accel/accel.sh@20 -- # read -r var val
00:10:32.157   16:54:24	-- accel/accel.sh@21 -- # val=
00:10:32.157   16:54:24	-- accel/accel.sh@22 -- # case "$var" in
00:10:32.157   16:54:24	-- accel/accel.sh@20 -- # IFS=:
00:10:32.157   16:54:24	-- accel/accel.sh@20 -- # read -r var val
00:10:32.157   16:54:24	-- accel/accel.sh@21 -- # val=
00:10:32.157   16:54:24	-- accel/accel.sh@22 -- # case "$var" in
00:10:32.157   16:54:24	-- accel/accel.sh@20 -- # IFS=:
00:10:32.157   16:54:24	-- accel/accel.sh@20 -- # read -r var val
00:10:32.157   16:54:24	-- accel/accel.sh@21 -- # val=
00:10:32.157   16:54:24	-- accel/accel.sh@22 -- # case "$var" in
00:10:32.157   16:54:24	-- accel/accel.sh@20 -- # IFS=:
00:10:32.157   16:54:24	-- accel/accel.sh@20 -- # read -r var val
00:10:32.157   16:54:24	-- accel/accel.sh@21 -- # val=
00:10:32.157   16:54:24	-- accel/accel.sh@22 -- # case "$var" in
00:10:32.157   16:54:24	-- accel/accel.sh@20 -- # IFS=:
00:10:32.157   16:54:24	-- accel/accel.sh@20 -- # read -r var val
00:10:32.157   16:54:24	-- accel/accel.sh@28 -- # [[ -n software ]]
00:10:32.157   16:54:24	-- accel/accel.sh@28 -- # [[ -n compare ]]
00:10:32.157   16:54:24	-- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:10:32.157  ************************************
00:10:32.157  END TEST accel_compare
00:10:32.157  ************************************
00:10:32.157  
00:10:32.157  real	0m3.419s
00:10:32.157  user	0m2.758s
00:10:32.157  sys	0m0.462s
00:10:32.157   16:54:24	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:10:32.157   16:54:24	-- common/autotest_common.sh@10 -- # set +x
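
(The START TEST/END TEST banners and the real/user/sys triplets come from run_test in
common/autotest_common.sh. In rough shape it looks like the sketch below; the real
helper does more bookkeeping around xtrace and timing files:)

  run_test() {
      local name=$1; shift
      echo "************************************"
      echo "START TEST $name"
      echo "************************************"
      time "$@"    # emits the real/user/sys lines seen after each test
      local rc=$?
      echo "************************************"
      echo "END TEST $name"
      echo "************************************"
      return $rc
  }
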
00:10:32.157   16:54:24	-- accel/accel.sh@101 -- # run_test accel_xor accel_test -t 1 -w xor -y
00:10:32.157   16:54:24	-- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']'
00:10:32.157   16:54:24	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:10:32.157   16:54:24	-- common/autotest_common.sh@10 -- # set +x
00:10:32.157  ************************************
00:10:32.157  START TEST accel_xor
00:10:32.157  ************************************
00:10:32.157   16:54:24	-- common/autotest_common.sh@1114 -- # accel_test -t 1 -w xor -y
00:10:32.157   16:54:24	-- accel/accel.sh@16 -- # local accel_opc
00:10:32.157   16:54:24	-- accel/accel.sh@17 -- # local accel_module
00:10:32.157    16:54:24	-- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y
00:10:32.157    16:54:24	-- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y
00:10:32.157     16:54:24	-- accel/accel.sh@12 -- # build_accel_config
00:10:32.157     16:54:24	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:10:32.157     16:54:24	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:10:32.157     16:54:24	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:10:32.157     16:54:24	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:10:32.157     16:54:24	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:10:32.157     16:54:24	-- accel/accel.sh@41 -- # local IFS=,
00:10:32.157     16:54:24	-- accel/accel.sh@42 -- # jq -r .
00:10:32.157  [2024-11-19 16:54:24.928468] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:10:32.157  [2024-11-19 16:54:24.928861] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118225 ]
00:10:32.415  [2024-11-19 16:54:25.097894] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:32.415  [2024-11-19 16:54:25.174082] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:10:33.831   16:54:26	-- accel/accel.sh@18 -- # out='
00:10:33.831  SPDK Configuration:
00:10:33.831  Core mask:      0x1
00:10:33.831  
00:10:33.831  Accel Perf Configuration:
00:10:33.831  Workload Type:  xor
00:10:33.831  Source buffers: 2
00:10:33.831  Transfer size:  4096 bytes
00:10:33.831  Vector count    1
00:10:33.831  Module:         software
00:10:33.831  Queue depth:    32
00:10:33.831  Allocate depth: 32
00:10:33.831  # threads/core: 1
00:10:33.831  Run time:       1 seconds
00:10:33.831  Verify:         Yes
00:10:33.831  
00:10:33.831  Running for 1 seconds...
00:10:33.831  
00:10:33.831  Core,Thread             Transfers        Bandwidth           Failed      Miscompares
00:10:33.831  ------------------------------------------------------------------------------------
00:10:33.831  0,0                      289824/s       1132 MiB/s                0                0
00:10:33.831  ====================================================================================
00:10:33.831  Total                    289824/s       1132 MiB/s                0                0'
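
(xor was invoked with just -t 1 -w xor -y, yet the dump shows "Source buffers: 2", so
two sources appear to be the default for this workload; the bandwidth again checks out:)

  # 289824 transfers/s × 4096 bytes ≈ 1132 MiB/s, matching the table above
  echo "$(( 289824 * 4096 / 1024 / 1024 )) MiB/s"
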
00:10:33.831   16:54:26	-- accel/accel.sh@20 -- # IFS=:
00:10:33.831   16:54:26	-- accel/accel.sh@20 -- # read -r var val
00:10:33.831    16:54:26	-- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y
00:10:33.831    16:54:26	-- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y
00:10:33.831     16:54:26	-- accel/accel.sh@12 -- # build_accel_config
00:10:33.831     16:54:26	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:10:33.831     16:54:26	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:10:33.831     16:54:26	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:10:33.831     16:54:26	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:10:33.831     16:54:26	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:10:33.831     16:54:26	-- accel/accel.sh@41 -- # local IFS=,
00:10:33.831     16:54:26	-- accel/accel.sh@42 -- # jq -r .
00:10:33.831  [2024-11-19 16:54:26.619122] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:10:33.831  [2024-11-19 16:54:26.619381] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118255 ]
00:10:34.089  [2024-11-19 16:54:26.776896] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:34.089  [2024-11-19 16:54:26.869738] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:10:34.346   16:54:26	-- accel/accel.sh@21 -- # val=
00:10:34.346   16:54:26	-- accel/accel.sh@22 -- # case "$var" in
00:10:34.346   16:54:26	-- accel/accel.sh@20 -- # IFS=:
00:10:34.346   16:54:26	-- accel/accel.sh@20 -- # read -r var val
00:10:34.346   16:54:26	-- accel/accel.sh@21 -- # val=
00:10:34.346   16:54:26	-- accel/accel.sh@22 -- # case "$var" in
00:10:34.346   16:54:26	-- accel/accel.sh@20 -- # IFS=:
00:10:34.346   16:54:26	-- accel/accel.sh@20 -- # read -r var val
00:10:34.346   16:54:26	-- accel/accel.sh@21 -- # val=0x1
00:10:34.346   16:54:26	-- accel/accel.sh@22 -- # case "$var" in
00:10:34.346   16:54:26	-- accel/accel.sh@20 -- # IFS=:
00:10:34.346   16:54:26	-- accel/accel.sh@20 -- # read -r var val
00:10:34.346   16:54:26	-- accel/accel.sh@21 -- # val=
00:10:34.346   16:54:26	-- accel/accel.sh@22 -- # case "$var" in
00:10:34.346   16:54:26	-- accel/accel.sh@20 -- # IFS=:
00:10:34.346   16:54:26	-- accel/accel.sh@20 -- # read -r var val
00:10:34.346   16:54:26	-- accel/accel.sh@21 -- # val=
00:10:34.346   16:54:26	-- accel/accel.sh@22 -- # case "$var" in
00:10:34.346   16:54:26	-- accel/accel.sh@20 -- # IFS=:
00:10:34.346   16:54:26	-- accel/accel.sh@20 -- # read -r var val
00:10:34.346   16:54:26	-- accel/accel.sh@21 -- # val=xor
00:10:34.346   16:54:26	-- accel/accel.sh@22 -- # case "$var" in
00:10:34.346   16:54:26	-- accel/accel.sh@24 -- # accel_opc=xor
00:10:34.346   16:54:26	-- accel/accel.sh@20 -- # IFS=:
00:10:34.346   16:54:26	-- accel/accel.sh@20 -- # read -r var val
00:10:34.346   16:54:26	-- accel/accel.sh@21 -- # val=2
00:10:34.346   16:54:26	-- accel/accel.sh@22 -- # case "$var" in
00:10:34.346   16:54:26	-- accel/accel.sh@20 -- # IFS=:
00:10:34.346   16:54:26	-- accel/accel.sh@20 -- # read -r var val
00:10:34.346   16:54:26	-- accel/accel.sh@21 -- # val='4096 bytes'
00:10:34.346   16:54:26	-- accel/accel.sh@22 -- # case "$var" in
00:10:34.346   16:54:26	-- accel/accel.sh@20 -- # IFS=:
00:10:34.346   16:54:26	-- accel/accel.sh@20 -- # read -r var val
00:10:34.346   16:54:26	-- accel/accel.sh@21 -- # val=
00:10:34.346   16:54:26	-- accel/accel.sh@22 -- # case "$var" in
00:10:34.346   16:54:26	-- accel/accel.sh@20 -- # IFS=:
00:10:34.346   16:54:26	-- accel/accel.sh@20 -- # read -r var val
00:10:34.346   16:54:26	-- accel/accel.sh@21 -- # val=software
00:10:34.346   16:54:26	-- accel/accel.sh@22 -- # case "$var" in
00:10:34.346   16:54:26	-- accel/accel.sh@23 -- # accel_module=software
00:10:34.346   16:54:26	-- accel/accel.sh@20 -- # IFS=:
00:10:34.346   16:54:26	-- accel/accel.sh@20 -- # read -r var val
00:10:34.346   16:54:26	-- accel/accel.sh@21 -- # val=32
00:10:34.346   16:54:26	-- accel/accel.sh@22 -- # case "$var" in
00:10:34.346   16:54:26	-- accel/accel.sh@20 -- # IFS=:
00:10:34.346   16:54:26	-- accel/accel.sh@20 -- # read -r var val
00:10:34.346   16:54:26	-- accel/accel.sh@21 -- # val=32
00:10:34.346   16:54:26	-- accel/accel.sh@22 -- # case "$var" in
00:10:34.346   16:54:26	-- accel/accel.sh@20 -- # IFS=:
00:10:34.346   16:54:26	-- accel/accel.sh@20 -- # read -r var val
00:10:34.346   16:54:26	-- accel/accel.sh@21 -- # val=1
00:10:34.346   16:54:26	-- accel/accel.sh@22 -- # case "$var" in
00:10:34.346   16:54:26	-- accel/accel.sh@20 -- # IFS=:
00:10:34.346   16:54:26	-- accel/accel.sh@20 -- # read -r var val
00:10:34.346   16:54:26	-- accel/accel.sh@21 -- # val='1 seconds'
00:10:34.346   16:54:26	-- accel/accel.sh@22 -- # case "$var" in
00:10:34.346   16:54:26	-- accel/accel.sh@20 -- # IFS=:
00:10:34.346   16:54:26	-- accel/accel.sh@20 -- # read -r var val
00:10:34.346   16:54:26	-- accel/accel.sh@21 -- # val=Yes
00:10:34.346   16:54:26	-- accel/accel.sh@22 -- # case "$var" in
00:10:34.346   16:54:26	-- accel/accel.sh@20 -- # IFS=:
00:10:34.346   16:54:26	-- accel/accel.sh@20 -- # read -r var val
00:10:34.346   16:54:26	-- accel/accel.sh@21 -- # val=
00:10:34.346   16:54:26	-- accel/accel.sh@22 -- # case "$var" in
00:10:34.346   16:54:26	-- accel/accel.sh@20 -- # IFS=:
00:10:34.346   16:54:26	-- accel/accel.sh@20 -- # read -r var val
00:10:34.346   16:54:26	-- accel/accel.sh@21 -- # val=
00:10:34.346   16:54:26	-- accel/accel.sh@22 -- # case "$var" in
00:10:34.346   16:54:26	-- accel/accel.sh@20 -- # IFS=:
00:10:34.346   16:54:26	-- accel/accel.sh@20 -- # read -r var val
00:10:35.717   16:54:28	-- accel/accel.sh@21 -- # val=
00:10:35.717   16:54:28	-- accel/accel.sh@22 -- # case "$var" in
00:10:35.717   16:54:28	-- accel/accel.sh@20 -- # IFS=:
00:10:35.717   16:54:28	-- accel/accel.sh@20 -- # read -r var val
00:10:35.717   16:54:28	-- accel/accel.sh@21 -- # val=
00:10:35.717   16:54:28	-- accel/accel.sh@22 -- # case "$var" in
00:10:35.717   16:54:28	-- accel/accel.sh@20 -- # IFS=:
00:10:35.717   16:54:28	-- accel/accel.sh@20 -- # read -r var val
00:10:35.717   16:54:28	-- accel/accel.sh@21 -- # val=
00:10:35.717   16:54:28	-- accel/accel.sh@22 -- # case "$var" in
00:10:35.717   16:54:28	-- accel/accel.sh@20 -- # IFS=:
00:10:35.717   16:54:28	-- accel/accel.sh@20 -- # read -r var val
00:10:35.717   16:54:28	-- accel/accel.sh@21 -- # val=
00:10:35.717   16:54:28	-- accel/accel.sh@22 -- # case "$var" in
00:10:35.717   16:54:28	-- accel/accel.sh@20 -- # IFS=:
00:10:35.717   16:54:28	-- accel/accel.sh@20 -- # read -r var val
00:10:35.717   16:54:28	-- accel/accel.sh@21 -- # val=
00:10:35.717   16:54:28	-- accel/accel.sh@22 -- # case "$var" in
00:10:35.717   16:54:28	-- accel/accel.sh@20 -- # IFS=:
00:10:35.717   16:54:28	-- accel/accel.sh@20 -- # read -r var val
00:10:35.717   16:54:28	-- accel/accel.sh@21 -- # val=
00:10:35.717   16:54:28	-- accel/accel.sh@22 -- # case "$var" in
00:10:35.717   16:54:28	-- accel/accel.sh@20 -- # IFS=:
00:10:35.717   16:54:28	-- accel/accel.sh@20 -- # read -r var val
00:10:35.717   16:54:28	-- accel/accel.sh@28 -- # [[ -n software ]]
00:10:35.717   16:54:28	-- accel/accel.sh@28 -- # [[ -n xor ]]
00:10:35.717   16:54:28	-- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:10:35.717  
00:10:35.717  real	0m3.424s
00:10:35.717  user	0m2.782s
00:10:35.717  sys	0m0.461s
00:10:35.717   16:54:28	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:10:35.717   16:54:28	-- common/autotest_common.sh@10 -- # set +x
00:10:35.717  ************************************
00:10:35.717  END TEST accel_xor
00:10:35.717  ************************************
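
The run that just ended exercised the software module's xor path with two source buffers. As a rough illustration of the operation being timed (a minimal Python sketch, not SPDK's C implementation, which works over pre-allocated DMA-able buffers):

    import os

    def xor_buffers(sources):
        # XOR N equal-length source buffers into one destination buffer,
        # mirroring what the -w xor workload times on each 4096-byte transfer.
        dst = bytearray(sources[0])
        for src in sources[1:]:
            for i, b in enumerate(src):
                dst[i] ^= b
        return bytes(dst)

    srcs = [os.urandom(4096) for _ in range(2)]    # Source buffers: 2
    out = xor_buffers(srcs)
    assert xor_buffers([out, srcs[1]]) == srcs[0]  # XOR is self-inverse

With -y set, accel_perf verifies each destination buffer after the operation completes, which is what the Miscompares column counts; zero there means every XOR result checked out.
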
00:10:35.717   16:54:28	-- accel/accel.sh@102 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3
00:10:35.717   16:54:28	-- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']'
00:10:35.717   16:54:28	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:10:35.717   16:54:28	-- common/autotest_common.sh@10 -- # set +x
00:10:35.717  ************************************
00:10:35.717  START TEST accel_xor
00:10:35.717  ************************************
00:10:35.717   16:54:28	-- common/autotest_common.sh@1114 -- # accel_test -t 1 -w xor -y -x 3
00:10:35.717   16:54:28	-- accel/accel.sh@16 -- # local accel_opc
00:10:35.717   16:54:28	-- accel/accel.sh@17 -- # local accel_module
00:10:35.717    16:54:28	-- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y -x 3
00:10:35.717    16:54:28	-- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3
00:10:35.717     16:54:28	-- accel/accel.sh@12 -- # build_accel_config
00:10:35.717     16:54:28	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:10:35.717     16:54:28	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:10:35.717     16:54:28	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:10:35.717     16:54:28	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:10:35.717     16:54:28	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:10:35.717     16:54:28	-- accel/accel.sh@41 -- # local IFS=,
00:10:35.717     16:54:28	-- accel/accel.sh@42 -- # jq -r .
00:10:35.717  [2024-11-19 16:54:28.412113] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:10:35.717  [2024-11-19 16:54:28.412385] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118302 ]
00:10:35.717  [2024-11-19 16:54:28.565914] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:35.974  [2024-11-19 16:54:28.640187] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:10:37.345   16:54:30	-- accel/accel.sh@18 -- # out='
00:10:37.345  SPDK Configuration:
00:10:37.345  Core mask:      0x1
00:10:37.345  
00:10:37.345  Accel Perf Configuration:
00:10:37.345  Workload Type:  xor
00:10:37.345  Source buffers: 3
00:10:37.345  Transfer size:  4096 bytes
00:10:37.345  Vector count    1
00:10:37.345  Module:         software
00:10:37.345  Queue depth:    32
00:10:37.345  Allocate depth: 32
00:10:37.345  # threads/core: 1
00:10:37.345  Run time:       1 seconds
00:10:37.345  Verify:         Yes
00:10:37.345  
00:10:37.345  Running for 1 seconds...
00:10:37.345  
00:10:37.345  Core,Thread             Transfers        Bandwidth           Failed      Miscompares
00:10:37.345  ------------------------------------------------------------------------------------
00:10:37.345  0,0                      277216/s       1082 MiB/s                0                0
00:10:37.345  ====================================================================================
00:10:37.345  Total                    277216/s       1082 MiB/s                0                0'
00:10:37.345   16:54:30	-- accel/accel.sh@20 -- # IFS=:
00:10:37.345   16:54:30	-- accel/accel.sh@20 -- # read -r var val
00:10:37.345    16:54:30	-- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3
00:10:37.345    16:54:30	-- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3
00:10:37.345     16:54:30	-- accel/accel.sh@12 -- # build_accel_config
00:10:37.345     16:54:30	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:10:37.345     16:54:30	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:10:37.345     16:54:30	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:10:37.345     16:54:30	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:10:37.345     16:54:30	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:10:37.345     16:54:30	-- accel/accel.sh@41 -- # local IFS=,
00:10:37.345     16:54:30	-- accel/accel.sh@42 -- # jq -r .
00:10:37.345  [2024-11-19 16:54:30.093451] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:10:37.345  [2024-11-19 16:54:30.094321] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118324 ]
00:10:37.602  [2024-11-19 16:54:30.254729] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:37.602  [2024-11-19 16:54:30.365339] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:10:37.860   16:54:30	-- accel/accel.sh@21 -- # val=
00:10:37.860   16:54:30	-- accel/accel.sh@22 -- # case "$var" in
00:10:37.860   16:54:30	-- accel/accel.sh@20 -- # IFS=:
00:10:37.860   16:54:30	-- accel/accel.sh@20 -- # read -r var val
00:10:37.860   16:54:30	-- accel/accel.sh@21 -- # val=
00:10:37.860   16:54:30	-- accel/accel.sh@22 -- # case "$var" in
00:10:37.860   16:54:30	-- accel/accel.sh@20 -- # IFS=:
00:10:37.860   16:54:30	-- accel/accel.sh@20 -- # read -r var val
00:10:37.860   16:54:30	-- accel/accel.sh@21 -- # val=0x1
00:10:37.860   16:54:30	-- accel/accel.sh@22 -- # case "$var" in
00:10:37.860   16:54:30	-- accel/accel.sh@20 -- # IFS=:
00:10:37.860   16:54:30	-- accel/accel.sh@20 -- # read -r var val
00:10:37.860   16:54:30	-- accel/accel.sh@21 -- # val=
00:10:37.860   16:54:30	-- accel/accel.sh@22 -- # case "$var" in
00:10:37.860   16:54:30	-- accel/accel.sh@20 -- # IFS=:
00:10:37.860   16:54:30	-- accel/accel.sh@20 -- # read -r var val
00:10:37.860   16:54:30	-- accel/accel.sh@21 -- # val=
00:10:37.860   16:54:30	-- accel/accel.sh@22 -- # case "$var" in
00:10:37.860   16:54:30	-- accel/accel.sh@20 -- # IFS=:
00:10:37.860   16:54:30	-- accel/accel.sh@20 -- # read -r var val
00:10:37.860   16:54:30	-- accel/accel.sh@21 -- # val=xor
00:10:37.860   16:54:30	-- accel/accel.sh@22 -- # case "$var" in
00:10:37.860   16:54:30	-- accel/accel.sh@24 -- # accel_opc=xor
00:10:37.860   16:54:30	-- accel/accel.sh@20 -- # IFS=:
00:10:37.860   16:54:30	-- accel/accel.sh@20 -- # read -r var val
00:10:37.860   16:54:30	-- accel/accel.sh@21 -- # val=3
00:10:37.860   16:54:30	-- accel/accel.sh@22 -- # case "$var" in
00:10:37.860   16:54:30	-- accel/accel.sh@20 -- # IFS=:
00:10:37.860   16:54:30	-- accel/accel.sh@20 -- # read -r var val
00:10:37.860   16:54:30	-- accel/accel.sh@21 -- # val='4096 bytes'
00:10:37.860   16:54:30	-- accel/accel.sh@22 -- # case "$var" in
00:10:37.860   16:54:30	-- accel/accel.sh@20 -- # IFS=:
00:10:37.860   16:54:30	-- accel/accel.sh@20 -- # read -r var val
00:10:37.860   16:54:30	-- accel/accel.sh@21 -- # val=
00:10:37.860   16:54:30	-- accel/accel.sh@22 -- # case "$var" in
00:10:37.860   16:54:30	-- accel/accel.sh@20 -- # IFS=:
00:10:37.860   16:54:30	-- accel/accel.sh@20 -- # read -r var val
00:10:37.860   16:54:30	-- accel/accel.sh@21 -- # val=software
00:10:37.860   16:54:30	-- accel/accel.sh@22 -- # case "$var" in
00:10:37.860   16:54:30	-- accel/accel.sh@23 -- # accel_module=software
00:10:37.860   16:54:30	-- accel/accel.sh@20 -- # IFS=:
00:10:37.860   16:54:30	-- accel/accel.sh@20 -- # read -r var val
00:10:37.860   16:54:30	-- accel/accel.sh@21 -- # val=32
00:10:37.860   16:54:30	-- accel/accel.sh@22 -- # case "$var" in
00:10:37.860   16:54:30	-- accel/accel.sh@20 -- # IFS=:
00:10:37.860   16:54:30	-- accel/accel.sh@20 -- # read -r var val
00:10:37.860   16:54:30	-- accel/accel.sh@21 -- # val=32
00:10:37.860   16:54:30	-- accel/accel.sh@22 -- # case "$var" in
00:10:37.860   16:54:30	-- accel/accel.sh@20 -- # IFS=:
00:10:37.860   16:54:30	-- accel/accel.sh@20 -- # read -r var val
00:10:37.860   16:54:30	-- accel/accel.sh@21 -- # val=1
00:10:37.860   16:54:30	-- accel/accel.sh@22 -- # case "$var" in
00:10:37.860   16:54:30	-- accel/accel.sh@20 -- # IFS=:
00:10:37.860   16:54:30	-- accel/accel.sh@20 -- # read -r var val
00:10:37.860   16:54:30	-- accel/accel.sh@21 -- # val='1 seconds'
00:10:37.860   16:54:30	-- accel/accel.sh@22 -- # case "$var" in
00:10:37.860   16:54:30	-- accel/accel.sh@20 -- # IFS=:
00:10:37.860   16:54:30	-- accel/accel.sh@20 -- # read -r var val
00:10:37.860   16:54:30	-- accel/accel.sh@21 -- # val=Yes
00:10:37.860   16:54:30	-- accel/accel.sh@22 -- # case "$var" in
00:10:37.860   16:54:30	-- accel/accel.sh@20 -- # IFS=:
00:10:37.860   16:54:30	-- accel/accel.sh@20 -- # read -r var val
00:10:37.860   16:54:30	-- accel/accel.sh@21 -- # val=
00:10:37.860   16:54:30	-- accel/accel.sh@22 -- # case "$var" in
00:10:37.860   16:54:30	-- accel/accel.sh@20 -- # IFS=:
00:10:37.860   16:54:30	-- accel/accel.sh@20 -- # read -r var val
00:10:37.860   16:54:30	-- accel/accel.sh@21 -- # val=
00:10:37.860   16:54:30	-- accel/accel.sh@22 -- # case "$var" in
00:10:37.860   16:54:30	-- accel/accel.sh@20 -- # IFS=:
00:10:37.860   16:54:30	-- accel/accel.sh@20 -- # read -r var val
00:10:39.231   16:54:31	-- accel/accel.sh@21 -- # val=
00:10:39.231   16:54:31	-- accel/accel.sh@22 -- # case "$var" in
00:10:39.231   16:54:31	-- accel/accel.sh@20 -- # IFS=:
00:10:39.231   16:54:31	-- accel/accel.sh@20 -- # read -r var val
00:10:39.231   16:54:31	-- accel/accel.sh@21 -- # val=
00:10:39.231   16:54:31	-- accel/accel.sh@22 -- # case "$var" in
00:10:39.231   16:54:31	-- accel/accel.sh@20 -- # IFS=:
00:10:39.231   16:54:31	-- accel/accel.sh@20 -- # read -r var val
00:10:39.231   16:54:31	-- accel/accel.sh@21 -- # val=
00:10:39.231   16:54:31	-- accel/accel.sh@22 -- # case "$var" in
00:10:39.231   16:54:31	-- accel/accel.sh@20 -- # IFS=:
00:10:39.231   16:54:31	-- accel/accel.sh@20 -- # read -r var val
00:10:39.231   16:54:31	-- accel/accel.sh@21 -- # val=
00:10:39.231   16:54:31	-- accel/accel.sh@22 -- # case "$var" in
00:10:39.231   16:54:31	-- accel/accel.sh@20 -- # IFS=:
00:10:39.231   16:54:31	-- accel/accel.sh@20 -- # read -r var val
00:10:39.231   16:54:31	-- accel/accel.sh@21 -- # val=
00:10:39.231   16:54:31	-- accel/accel.sh@22 -- # case "$var" in
00:10:39.231   16:54:31	-- accel/accel.sh@20 -- # IFS=:
00:10:39.231   16:54:31	-- accel/accel.sh@20 -- # read -r var val
00:10:39.231   16:54:31	-- accel/accel.sh@21 -- # val=
00:10:39.231   16:54:31	-- accel/accel.sh@22 -- # case "$var" in
00:10:39.231   16:54:31	-- accel/accel.sh@20 -- # IFS=:
00:10:39.231   16:54:31	-- accel/accel.sh@20 -- # read -r var val
00:10:39.231   16:54:31	-- accel/accel.sh@28 -- # [[ -n software ]]
00:10:39.231   16:54:31	-- accel/accel.sh@28 -- # [[ -n xor ]]
00:10:39.231   16:54:31	-- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:10:39.231  
00:10:39.231  real	0m3.433s
00:10:39.231  user	0m2.816s
00:10:39.231  sys	0m0.430s
00:10:39.231   16:54:31	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:10:39.231   16:54:31	-- common/autotest_common.sh@10 -- # set +x
00:10:39.231  ************************************
00:10:39.231  END TEST accel_xor
00:10:39.231  ************************************
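
This second accel_xor run differed only in -x 3: three source buffers per XOR instead of two, visible above as "Source buffers: 3" and as a small throughput drop (277,216/s vs. 289,824/s). Reusing xor_buffers from the earlier sketch:

    srcs = [os.urandom(4096) for _ in range(3)]    # -x 3: three source buffers
    out = xor_buffers(srcs)                        # dst = s0 ^ s1 ^ s2
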
00:10:39.231   16:54:31	-- accel/accel.sh@103 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify
00:10:39.231   16:54:31	-- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']'
00:10:39.231   16:54:31	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:10:39.231   16:54:31	-- common/autotest_common.sh@10 -- # set +x
00:10:39.231  ************************************
00:10:39.231  START TEST accel_dif_verify
00:10:39.231  ************************************
00:10:39.231   16:54:31	-- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_verify
00:10:39.231   16:54:31	-- accel/accel.sh@16 -- # local accel_opc
00:10:39.231   16:54:31	-- accel/accel.sh@17 -- # local accel_module
00:10:39.231    16:54:31	-- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_verify
00:10:39.231    16:54:31	-- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify
00:10:39.231     16:54:31	-- accel/accel.sh@12 -- # build_accel_config
00:10:39.231     16:54:31	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:10:39.231     16:54:31	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:10:39.231     16:54:31	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:10:39.231     16:54:31	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:10:39.231     16:54:31	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:10:39.231     16:54:31	-- accel/accel.sh@41 -- # local IFS=,
00:10:39.231     16:54:31	-- accel/accel.sh@42 -- # jq -r .
00:10:39.231  [2024-11-19 16:54:31.900716] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:10:39.231  [2024-11-19 16:54:31.900915] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118368 ]
00:10:39.231  [2024-11-19 16:54:32.046919] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:39.490  [2024-11-19 16:54:32.123926] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:10:40.865   16:54:33	-- accel/accel.sh@18 -- # out='
00:10:40.865  SPDK Configuration:
00:10:40.865  Core mask:      0x1
00:10:40.865  
00:10:40.865  Accel Perf Configuration:
00:10:40.865  Workload Type:  dif_verify
00:10:40.865  Vector size:    4096 bytes
00:10:40.865  Transfer size:  4096 bytes
00:10:40.865  Block size:     512 bytes
00:10:40.865  Metadata size:  8 bytes
00:10:40.865  Vector count    1
00:10:40.865  Module:         software
00:10:40.865  Queue depth:    32
00:10:40.865  Allocate depth: 32
00:10:40.865  # threads/core: 1
00:10:40.865  Run time:       1 seconds
00:10:40.865  Verify:         No
00:10:40.865  
00:10:40.865  Running for 1 seconds...
00:10:40.865  
00:10:40.865  Core,Thread             Transfers        Bandwidth           Failed      Miscompares
00:10:40.865  ------------------------------------------------------------------------------------
00:10:40.865  0,0                      103424/s        410 MiB/s                0                0
00:10:40.865  ====================================================================================
00:10:40.865  Total                    103424/s        404 MiB/s                0                0'
00:10:40.865   16:54:33	-- accel/accel.sh@20 -- # IFS=:
00:10:40.865   16:54:33	-- accel/accel.sh@20 -- # read -r var val
00:10:40.865    16:54:33	-- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify
00:10:40.865    16:54:33	-- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify
00:10:40.865     16:54:33	-- accel/accel.sh@12 -- # build_accel_config
00:10:40.865     16:54:33	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:10:40.865     16:54:33	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:10:40.865     16:54:33	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:10:40.865     16:54:33	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:10:40.865     16:54:33	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:10:40.865     16:54:33	-- accel/accel.sh@41 -- # local IFS=,
00:10:40.865     16:54:33	-- accel/accel.sh@42 -- # jq -r .
00:10:40.865  [2024-11-19 16:54:33.430990] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:10:40.865  [2024-11-19 16:54:33.431574] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118402 ]
00:10:40.865  [2024-11-19 16:54:33.572916] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:40.865  [2024-11-19 16:54:33.635582] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:10:40.865   16:54:33	-- accel/accel.sh@21 -- # val=
00:10:40.865   16:54:33	-- accel/accel.sh@22 -- # case "$var" in
00:10:40.865   16:54:33	-- accel/accel.sh@20 -- # IFS=:
00:10:40.865   16:54:33	-- accel/accel.sh@20 -- # read -r var val
00:10:40.865   16:54:33	-- accel/accel.sh@21 -- # val=
00:10:40.865   16:54:33	-- accel/accel.sh@22 -- # case "$var" in
00:10:40.865   16:54:33	-- accel/accel.sh@20 -- # IFS=:
00:10:40.865   16:54:33	-- accel/accel.sh@20 -- # read -r var val
00:10:40.865   16:54:33	-- accel/accel.sh@21 -- # val=0x1
00:10:40.865   16:54:33	-- accel/accel.sh@22 -- # case "$var" in
00:10:40.865   16:54:33	-- accel/accel.sh@20 -- # IFS=:
00:10:40.865   16:54:33	-- accel/accel.sh@20 -- # read -r var val
00:10:40.865   16:54:33	-- accel/accel.sh@21 -- # val=
00:10:40.865   16:54:33	-- accel/accel.sh@22 -- # case "$var" in
00:10:40.865   16:54:33	-- accel/accel.sh@20 -- # IFS=:
00:10:40.865   16:54:33	-- accel/accel.sh@20 -- # read -r var val
00:10:40.865   16:54:33	-- accel/accel.sh@21 -- # val=
00:10:40.865   16:54:33	-- accel/accel.sh@22 -- # case "$var" in
00:10:40.865   16:54:33	-- accel/accel.sh@20 -- # IFS=:
00:10:40.865   16:54:33	-- accel/accel.sh@20 -- # read -r var val
00:10:40.865   16:54:33	-- accel/accel.sh@21 -- # val=dif_verify
00:10:40.865   16:54:33	-- accel/accel.sh@22 -- # case "$var" in
00:10:40.865   16:54:33	-- accel/accel.sh@24 -- # accel_opc=dif_verify
00:10:40.865   16:54:33	-- accel/accel.sh@20 -- # IFS=:
00:10:40.865   16:54:33	-- accel/accel.sh@20 -- # read -r var val
00:10:40.865   16:54:33	-- accel/accel.sh@21 -- # val='4096 bytes'
00:10:40.865   16:54:33	-- accel/accel.sh@22 -- # case "$var" in
00:10:40.865   16:54:33	-- accel/accel.sh@20 -- # IFS=:
00:10:40.865   16:54:33	-- accel/accel.sh@20 -- # read -r var val
00:10:40.865   16:54:33	-- accel/accel.sh@21 -- # val='4096 bytes'
00:10:40.865   16:54:33	-- accel/accel.sh@22 -- # case "$var" in
00:10:40.865   16:54:33	-- accel/accel.sh@20 -- # IFS=:
00:10:40.865   16:54:33	-- accel/accel.sh@20 -- # read -r var val
00:10:40.865   16:54:33	-- accel/accel.sh@21 -- # val='512 bytes'
00:10:40.865   16:54:33	-- accel/accel.sh@22 -- # case "$var" in
00:10:40.865   16:54:33	-- accel/accel.sh@20 -- # IFS=:
00:10:40.865   16:54:33	-- accel/accel.sh@20 -- # read -r var val
00:10:40.865   16:54:33	-- accel/accel.sh@21 -- # val='8 bytes'
00:10:40.865   16:54:33	-- accel/accel.sh@22 -- # case "$var" in
00:10:40.865   16:54:33	-- accel/accel.sh@20 -- # IFS=:
00:10:40.865   16:54:33	-- accel/accel.sh@20 -- # read -r var val
00:10:40.865   16:54:33	-- accel/accel.sh@21 -- # val=
00:10:40.865   16:54:33	-- accel/accel.sh@22 -- # case "$var" in
00:10:40.865   16:54:33	-- accel/accel.sh@20 -- # IFS=:
00:10:40.865   16:54:33	-- accel/accel.sh@20 -- # read -r var val
00:10:40.865   16:54:33	-- accel/accel.sh@21 -- # val=software
00:10:40.865   16:54:33	-- accel/accel.sh@22 -- # case "$var" in
00:10:40.865   16:54:33	-- accel/accel.sh@23 -- # accel_module=software
00:10:40.865   16:54:33	-- accel/accel.sh@20 -- # IFS=:
00:10:40.865   16:54:33	-- accel/accel.sh@20 -- # read -r var val
00:10:40.865   16:54:33	-- accel/accel.sh@21 -- # val=32
00:10:40.865   16:54:33	-- accel/accel.sh@22 -- # case "$var" in
00:10:40.865   16:54:33	-- accel/accel.sh@20 -- # IFS=:
00:10:40.865   16:54:33	-- accel/accel.sh@20 -- # read -r var val
00:10:40.865   16:54:33	-- accel/accel.sh@21 -- # val=32
00:10:40.865   16:54:33	-- accel/accel.sh@22 -- # case "$var" in
00:10:40.865   16:54:33	-- accel/accel.sh@20 -- # IFS=:
00:10:40.865   16:54:33	-- accel/accel.sh@20 -- # read -r var val
00:10:40.865   16:54:33	-- accel/accel.sh@21 -- # val=1
00:10:40.865   16:54:33	-- accel/accel.sh@22 -- # case "$var" in
00:10:40.865   16:54:33	-- accel/accel.sh@20 -- # IFS=:
00:10:40.865   16:54:33	-- accel/accel.sh@20 -- # read -r var val
00:10:40.865   16:54:33	-- accel/accel.sh@21 -- # val='1 seconds'
00:10:40.865   16:54:33	-- accel/accel.sh@22 -- # case "$var" in
00:10:40.865   16:54:33	-- accel/accel.sh@20 -- # IFS=:
00:10:40.865   16:54:33	-- accel/accel.sh@20 -- # read -r var val
00:10:40.865   16:54:33	-- accel/accel.sh@21 -- # val=No
00:10:40.865   16:54:33	-- accel/accel.sh@22 -- # case "$var" in
00:10:40.865   16:54:33	-- accel/accel.sh@20 -- # IFS=:
00:10:40.865   16:54:33	-- accel/accel.sh@20 -- # read -r var val
00:10:40.865   16:54:33	-- accel/accel.sh@21 -- # val=
00:10:40.865   16:54:33	-- accel/accel.sh@22 -- # case "$var" in
00:10:40.865   16:54:33	-- accel/accel.sh@20 -- # IFS=:
00:10:40.865   16:54:33	-- accel/accel.sh@20 -- # read -r var val
00:10:40.865   16:54:33	-- accel/accel.sh@21 -- # val=
00:10:40.865   16:54:33	-- accel/accel.sh@22 -- # case "$var" in
00:10:40.865   16:54:33	-- accel/accel.sh@20 -- # IFS=:
00:10:40.865   16:54:33	-- accel/accel.sh@20 -- # read -r var val
00:10:42.240   16:54:34	-- accel/accel.sh@21 -- # val=
00:10:42.240   16:54:34	-- accel/accel.sh@22 -- # case "$var" in
00:10:42.240   16:54:34	-- accel/accel.sh@20 -- # IFS=:
00:10:42.240   16:54:34	-- accel/accel.sh@20 -- # read -r var val
00:10:42.240   16:54:34	-- accel/accel.sh@21 -- # val=
00:10:42.240   16:54:34	-- accel/accel.sh@22 -- # case "$var" in
00:10:42.240   16:54:34	-- accel/accel.sh@20 -- # IFS=:
00:10:42.240   16:54:34	-- accel/accel.sh@20 -- # read -r var val
00:10:42.240   16:54:34	-- accel/accel.sh@21 -- # val=
00:10:42.240   16:54:34	-- accel/accel.sh@22 -- # case "$var" in
00:10:42.240   16:54:34	-- accel/accel.sh@20 -- # IFS=:
00:10:42.240   16:54:34	-- accel/accel.sh@20 -- # read -r var val
00:10:42.240   16:54:34	-- accel/accel.sh@21 -- # val=
00:10:42.240   16:54:34	-- accel/accel.sh@22 -- # case "$var" in
00:10:42.240   16:54:34	-- accel/accel.sh@20 -- # IFS=:
00:10:42.240   16:54:34	-- accel/accel.sh@20 -- # read -r var val
00:10:42.240   16:54:34	-- accel/accel.sh@21 -- # val=
00:10:42.240   16:54:34	-- accel/accel.sh@22 -- # case "$var" in
00:10:42.240   16:54:34	-- accel/accel.sh@20 -- # IFS=:
00:10:42.240   16:54:34	-- accel/accel.sh@20 -- # read -r var val
00:10:42.240   16:54:34	-- accel/accel.sh@21 -- # val=
00:10:42.240   16:54:34	-- accel/accel.sh@22 -- # case "$var" in
00:10:42.240   16:54:34	-- accel/accel.sh@20 -- # IFS=:
00:10:42.240   16:54:34	-- accel/accel.sh@20 -- # read -r var val
00:10:42.240   16:54:34	-- accel/accel.sh@28 -- # [[ -n software ]]
00:10:42.240   16:54:34	-- accel/accel.sh@28 -- # [[ -n dif_verify ]]
00:10:42.240   16:54:34	-- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:10:42.240  
00:10:42.240  real	0m3.040s
00:10:42.240  user	0m2.531s
00:10:42.240  sys	0m0.315s
00:10:42.240   16:54:34	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:10:42.240   16:54:34	-- common/autotest_common.sh@10 -- # set +x
00:10:42.240  ************************************
00:10:42.240  END TEST accel_dif_verify
00:10:42.240  ************************************
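
dif_verify checks T10 protection information: per the configuration above, every 512-byte block is followed by 8 bytes of metadata holding a 2-byte CRC16 guard tag, a 2-byte application tag, and a 4-byte reference tag. A sketch of the guard check alone, assuming the standard big-endian DIF layout (SPDK's real DIF code also validates the tags and supports separate-metadata layouts):

    import struct

    def crc16_t10dif(data: bytes) -> int:
        # Bit-by-bit CRC16 over the block with the T10-DIF polynomial 0x8BB7
        # (written for clarity, not speed).
        crc = 0
        for byte in data:
            crc ^= byte << 8
            for _ in range(8):
                crc = ((crc << 1) ^ 0x8BB7) & 0xFFFF if crc & 0x8000 else (crc << 1) & 0xFFFF
        return crc

    def verify_guards(buf: bytes, block: int = 512, md: int = 8) -> bool:
        # Walk (block + md)-byte strides: data, then its 8-byte DIF laid out
        # as guard (2 B), application tag (2 B), reference tag (4 B).
        stride = block + md
        for off in range(0, len(buf), stride):
            data = buf[off:off + block]
            guard, _app, _ref = struct.unpack_from(">HHI", buf, off + block)
            if guard != crc16_t10dif(data):
                return False
        return True
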
00:10:42.240   16:54:34	-- accel/accel.sh@104 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate
00:10:42.240   16:54:34	-- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']'
00:10:42.240   16:54:34	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:10:42.240   16:54:34	-- common/autotest_common.sh@10 -- # set +x
00:10:42.240  ************************************
00:10:42.240  START TEST accel_dif_generate
00:10:42.240  ************************************
00:10:42.240   16:54:34	-- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_generate
00:10:42.240   16:54:34	-- accel/accel.sh@16 -- # local accel_opc
00:10:42.240   16:54:34	-- accel/accel.sh@17 -- # local accel_module
00:10:42.240    16:54:34	-- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate
00:10:42.240    16:54:34	-- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate
00:10:42.240     16:54:34	-- accel/accel.sh@12 -- # build_accel_config
00:10:42.240     16:54:34	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:10:42.240     16:54:34	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:10:42.240     16:54:34	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:10:42.240     16:54:34	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:10:42.240     16:54:34	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:10:42.240     16:54:34	-- accel/accel.sh@41 -- # local IFS=,
00:10:42.240     16:54:34	-- accel/accel.sh@42 -- # jq -r .
00:10:42.240  [2024-11-19 16:54:35.017534] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:10:42.240  [2024-11-19 16:54:35.017804] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118436 ]
00:10:42.499  [2024-11-19 16:54:35.172683] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:42.499  [2024-11-19 16:54:35.219633] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:10:43.875   16:54:36	-- accel/accel.sh@18 -- # out='
00:10:43.875  SPDK Configuration:
00:10:43.875  Core mask:      0x1
00:10:43.875  
00:10:43.875  Accel Perf Configuration:
00:10:43.875  Workload Type:  dif_generate
00:10:43.875  Vector size:    4096 bytes
00:10:43.875  Transfer size:  4096 bytes
00:10:43.875  Block size:     512 bytes
00:10:43.875  Metadata size:  8 bytes
00:10:43.875  Vector count    1
00:10:43.875  Module:         software
00:10:43.875  Queue depth:    32
00:10:43.875  Allocate depth: 32
00:10:43.875  # threads/core: 1
00:10:43.875  Run time:       1 seconds
00:10:43.875  Verify:         No
00:10:43.875  
00:10:43.875  Running for 1 seconds...
00:10:43.875  
00:10:43.875  Core,Thread             Transfers        Bandwidth           Failed      Miscompares
00:10:43.875  ------------------------------------------------------------------------------------
00:10:43.875  0,0                      139168/s        552 MiB/s                0                0
00:10:43.875  ====================================================================================
00:10:43.875  Total                    139168/s        543 MiB/s                0                0'
00:10:43.875   16:54:36	-- accel/accel.sh@20 -- # IFS=:
00:10:43.875   16:54:36	-- accel/accel.sh@20 -- # read -r var val
00:10:43.875    16:54:36	-- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate
00:10:43.875    16:54:36	-- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate
00:10:43.875     16:54:36	-- accel/accel.sh@12 -- # build_accel_config
00:10:43.875     16:54:36	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:10:43.875     16:54:36	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:10:43.875     16:54:36	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:10:43.875     16:54:36	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:10:43.875     16:54:36	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:10:43.875     16:54:36	-- accel/accel.sh@41 -- # local IFS=,
00:10:43.875     16:54:36	-- accel/accel.sh@42 -- # jq -r .
00:10:43.875  [2024-11-19 16:54:36.505688] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:10:43.875  [2024-11-19 16:54:36.505965] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118466 ]
00:10:43.875  [2024-11-19 16:54:36.660750] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:43.875  [2024-11-19 16:54:36.725501] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:10:44.136   16:54:36	-- accel/accel.sh@21 -- # val=
00:10:44.136   16:54:36	-- accel/accel.sh@22 -- # case "$var" in
00:10:44.136   16:54:36	-- accel/accel.sh@20 -- # IFS=:
00:10:44.136   16:54:36	-- accel/accel.sh@20 -- # read -r var val
00:10:44.136   16:54:36	-- accel/accel.sh@21 -- # val=
00:10:44.136   16:54:36	-- accel/accel.sh@22 -- # case "$var" in
00:10:44.136   16:54:36	-- accel/accel.sh@20 -- # IFS=:
00:10:44.136   16:54:36	-- accel/accel.sh@20 -- # read -r var val
00:10:44.136   16:54:36	-- accel/accel.sh@21 -- # val=0x1
00:10:44.136   16:54:36	-- accel/accel.sh@22 -- # case "$var" in
00:10:44.136   16:54:36	-- accel/accel.sh@20 -- # IFS=:
00:10:44.136   16:54:36	-- accel/accel.sh@20 -- # read -r var val
00:10:44.136   16:54:36	-- accel/accel.sh@21 -- # val=
00:10:44.136   16:54:36	-- accel/accel.sh@22 -- # case "$var" in
00:10:44.136   16:54:36	-- accel/accel.sh@20 -- # IFS=:
00:10:44.136   16:54:36	-- accel/accel.sh@20 -- # read -r var val
00:10:44.136   16:54:36	-- accel/accel.sh@21 -- # val=
00:10:44.136   16:54:36	-- accel/accel.sh@22 -- # case "$var" in
00:10:44.136   16:54:36	-- accel/accel.sh@20 -- # IFS=:
00:10:44.136   16:54:36	-- accel/accel.sh@20 -- # read -r var val
00:10:44.136   16:54:36	-- accel/accel.sh@21 -- # val=dif_generate
00:10:44.136   16:54:36	-- accel/accel.sh@22 -- # case "$var" in
00:10:44.136   16:54:36	-- accel/accel.sh@24 -- # accel_opc=dif_generate
00:10:44.136   16:54:36	-- accel/accel.sh@20 -- # IFS=:
00:10:44.136   16:54:36	-- accel/accel.sh@20 -- # read -r var val
00:10:44.136   16:54:36	-- accel/accel.sh@21 -- # val='4096 bytes'
00:10:44.136   16:54:36	-- accel/accel.sh@22 -- # case "$var" in
00:10:44.136   16:54:36	-- accel/accel.sh@20 -- # IFS=:
00:10:44.136   16:54:36	-- accel/accel.sh@20 -- # read -r var val
00:10:44.136   16:54:36	-- accel/accel.sh@21 -- # val='4096 bytes'
00:10:44.136   16:54:36	-- accel/accel.sh@22 -- # case "$var" in
00:10:44.136   16:54:36	-- accel/accel.sh@20 -- # IFS=:
00:10:44.136   16:54:36	-- accel/accel.sh@20 -- # read -r var val
00:10:44.136   16:54:36	-- accel/accel.sh@21 -- # val='512 bytes'
00:10:44.136   16:54:36	-- accel/accel.sh@22 -- # case "$var" in
00:10:44.136   16:54:36	-- accel/accel.sh@20 -- # IFS=:
00:10:44.136   16:54:36	-- accel/accel.sh@20 -- # read -r var val
00:10:44.136   16:54:36	-- accel/accel.sh@21 -- # val='8 bytes'
00:10:44.136   16:54:36	-- accel/accel.sh@22 -- # case "$var" in
00:10:44.136   16:54:36	-- accel/accel.sh@20 -- # IFS=:
00:10:44.136   16:54:36	-- accel/accel.sh@20 -- # read -r var val
00:10:44.136   16:54:36	-- accel/accel.sh@21 -- # val=
00:10:44.136   16:54:36	-- accel/accel.sh@22 -- # case "$var" in
00:10:44.136   16:54:36	-- accel/accel.sh@20 -- # IFS=:
00:10:44.136   16:54:36	-- accel/accel.sh@20 -- # read -r var val
00:10:44.136   16:54:36	-- accel/accel.sh@21 -- # val=software
00:10:44.136   16:54:36	-- accel/accel.sh@22 -- # case "$var" in
00:10:44.136   16:54:36	-- accel/accel.sh@23 -- # accel_module=software
00:10:44.136   16:54:36	-- accel/accel.sh@20 -- # IFS=:
00:10:44.136   16:54:36	-- accel/accel.sh@20 -- # read -r var val
00:10:44.136   16:54:36	-- accel/accel.sh@21 -- # val=32
00:10:44.136   16:54:36	-- accel/accel.sh@22 -- # case "$var" in
00:10:44.136   16:54:36	-- accel/accel.sh@20 -- # IFS=:
00:10:44.136   16:54:36	-- accel/accel.sh@20 -- # read -r var val
00:10:44.136   16:54:36	-- accel/accel.sh@21 -- # val=32
00:10:44.136   16:54:36	-- accel/accel.sh@22 -- # case "$var" in
00:10:44.136   16:54:36	-- accel/accel.sh@20 -- # IFS=:
00:10:44.136   16:54:36	-- accel/accel.sh@20 -- # read -r var val
00:10:44.136   16:54:36	-- accel/accel.sh@21 -- # val=1
00:10:44.136   16:54:36	-- accel/accel.sh@22 -- # case "$var" in
00:10:44.136   16:54:36	-- accel/accel.sh@20 -- # IFS=:
00:10:44.136   16:54:36	-- accel/accel.sh@20 -- # read -r var val
00:10:44.136   16:54:36	-- accel/accel.sh@21 -- # val='1 seconds'
00:10:44.136   16:54:36	-- accel/accel.sh@22 -- # case "$var" in
00:10:44.136   16:54:36	-- accel/accel.sh@20 -- # IFS=:
00:10:44.136   16:54:36	-- accel/accel.sh@20 -- # read -r var val
00:10:44.136   16:54:36	-- accel/accel.sh@21 -- # val=No
00:10:44.136   16:54:36	-- accel/accel.sh@22 -- # case "$var" in
00:10:44.136   16:54:36	-- accel/accel.sh@20 -- # IFS=:
00:10:44.136   16:54:36	-- accel/accel.sh@20 -- # read -r var val
00:10:44.136   16:54:36	-- accel/accel.sh@21 -- # val=
00:10:44.136   16:54:36	-- accel/accel.sh@22 -- # case "$var" in
00:10:44.136   16:54:36	-- accel/accel.sh@20 -- # IFS=:
00:10:44.136   16:54:36	-- accel/accel.sh@20 -- # read -r var val
00:10:44.136   16:54:36	-- accel/accel.sh@21 -- # val=
00:10:44.136   16:54:36	-- accel/accel.sh@22 -- # case "$var" in
00:10:44.136   16:54:36	-- accel/accel.sh@20 -- # IFS=:
00:10:44.136   16:54:36	-- accel/accel.sh@20 -- # read -r var val
00:10:45.548   16:54:37	-- accel/accel.sh@21 -- # val=
00:10:45.548   16:54:37	-- accel/accel.sh@22 -- # case "$var" in
00:10:45.548   16:54:37	-- accel/accel.sh@20 -- # IFS=:
00:10:45.548   16:54:37	-- accel/accel.sh@20 -- # read -r var val
00:10:45.548   16:54:37	-- accel/accel.sh@21 -- # val=
00:10:45.548   16:54:37	-- accel/accel.sh@22 -- # case "$var" in
00:10:45.548   16:54:37	-- accel/accel.sh@20 -- # IFS=:
00:10:45.548   16:54:37	-- accel/accel.sh@20 -- # read -r var val
00:10:45.548   16:54:37	-- accel/accel.sh@21 -- # val=
00:10:45.548   16:54:37	-- accel/accel.sh@22 -- # case "$var" in
00:10:45.548   16:54:37	-- accel/accel.sh@20 -- # IFS=:
00:10:45.548   16:54:37	-- accel/accel.sh@20 -- # read -r var val
00:10:45.548   16:54:37	-- accel/accel.sh@21 -- # val=
00:10:45.548   16:54:37	-- accel/accel.sh@22 -- # case "$var" in
00:10:45.548   16:54:37	-- accel/accel.sh@20 -- # IFS=:
00:10:45.548   16:54:37	-- accel/accel.sh@20 -- # read -r var val
00:10:45.548   16:54:37	-- accel/accel.sh@21 -- # val=
00:10:45.548   16:54:37	-- accel/accel.sh@22 -- # case "$var" in
00:10:45.548   16:54:37	-- accel/accel.sh@20 -- # IFS=:
00:10:45.548   16:54:37	-- accel/accel.sh@20 -- # read -r var val
00:10:45.548   16:54:37	-- accel/accel.sh@21 -- # val=
00:10:45.548   16:54:37	-- accel/accel.sh@22 -- # case "$var" in
00:10:45.548   16:54:37	-- accel/accel.sh@20 -- # IFS=:
00:10:45.548   16:54:37	-- accel/accel.sh@20 -- # read -r var val
00:10:45.548   16:54:37	-- accel/accel.sh@28 -- # [[ -n software ]]
00:10:45.548   16:54:37	-- accel/accel.sh@28 -- # [[ -n dif_generate ]]
00:10:45.548  ************************************
00:10:45.548  END TEST accel_dif_generate
00:10:45.548  ************************************
00:10:45.548   16:54:37	-- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:10:45.548  
00:10:45.548  real	0m3.014s
00:10:45.548  user	0m2.517s
00:10:45.548  sys	0m0.320s
00:10:45.548   16:54:37	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:10:45.548   16:54:37	-- common/autotest_common.sh@10 -- # set +x
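
dif_generate is the producer side of the previous test: it computes and writes the 8-byte DIF rather than checking it. A sketch reusing the imports and helpers above; the per-block incrementing reference tag is a convention assumed here for illustration, not something this log shows:

    def generate_blocks(data: bytes, block: int = 512, app_tag: int = 0) -> bytes:
        # Emit each 512-byte block followed by its freshly computed 8-byte DIF.
        out = bytearray()
        for i in range(len(data) // block):
            blk = data[i * block:(i + 1) * block]
            out += blk + struct.pack(">HHI", crc16_t10dif(blk), app_tag, i)
        return bytes(out)

    buf = generate_blocks(os.urandom(4096))    # Transfer size: 4096 bytes
    assert verify_guards(buf)                  # round-trips through the check
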
00:10:45.548   16:54:38	-- accel/accel.sh@105 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy
00:10:45.548   16:54:38	-- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']'
00:10:45.548   16:54:38	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:10:45.548   16:54:38	-- common/autotest_common.sh@10 -- # set +x
00:10:45.548  ************************************
00:10:45.548  START TEST accel_dif_generate_copy
00:10:45.548  ************************************
00:10:45.548   16:54:38	-- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_generate_copy
00:10:45.548   16:54:38	-- accel/accel.sh@16 -- # local accel_opc
00:10:45.548   16:54:38	-- accel/accel.sh@17 -- # local accel_module
00:10:45.548    16:54:38	-- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate_copy
00:10:45.548    16:54:38	-- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy
00:10:45.548     16:54:38	-- accel/accel.sh@12 -- # build_accel_config
00:10:45.548     16:54:38	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:10:45.548     16:54:38	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:10:45.548     16:54:38	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:10:45.548     16:54:38	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:10:45.548     16:54:38	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:10:45.548     16:54:38	-- accel/accel.sh@41 -- # local IFS=,
00:10:45.548     16:54:38	-- accel/accel.sh@42 -- # jq -r .
00:10:45.548  [2024-11-19 16:54:38.102780] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:10:45.548  [2024-11-19 16:54:38.103387] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118506 ]
00:10:45.548  [2024-11-19 16:54:38.262029] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:45.548  [2024-11-19 16:54:38.312730] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:10:46.922   16:54:39	-- accel/accel.sh@18 -- # out='
00:10:46.922  SPDK Configuration:
00:10:46.922  Core mask:      0x1
00:10:46.922  
00:10:46.922  Accel Perf Configuration:
00:10:46.922  Workload Type:  dif_generate_copy
00:10:46.922  Vector size:    4096 bytes
00:10:46.922  Transfer size:  4096 bytes
00:10:46.922  Vector count    1
00:10:46.922  Module:         software
00:10:46.922  Queue depth:    32
00:10:46.922  Allocate depth: 32
00:10:46.922  # threads/core: 1
00:10:46.922  Run time:       1 seconds
00:10:46.922  Verify:         No
00:10:46.922  
00:10:46.922  Running for 1 seconds...
00:10:46.922  
00:10:46.922  Core,Thread             Transfers        Bandwidth           Failed      Miscompares
00:10:46.922  ------------------------------------------------------------------------------------
00:10:46.922  0,0                      105504/s        418 MiB/s                0                0
00:10:46.922  ====================================================================================
00:10:46.922  Total                    105504/s        412 MiB/s                0                0'
00:10:46.922   16:54:39	-- accel/accel.sh@20 -- # IFS=:
00:10:46.922   16:54:39	-- accel/accel.sh@20 -- # read -r var val
00:10:46.922    16:54:39	-- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy
00:10:46.922     16:54:39	-- accel/accel.sh@12 -- # build_accel_config
00:10:46.922    16:54:39	-- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy
00:10:46.922     16:54:39	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:10:46.922     16:54:39	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:10:46.922     16:54:39	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:10:46.922     16:54:39	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:10:46.922     16:54:39	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:10:46.922     16:54:39	-- accel/accel.sh@41 -- # local IFS=,
00:10:46.922     16:54:39	-- accel/accel.sh@42 -- # jq -r .
00:10:46.922  [2024-11-19 16:54:39.602702] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:10:46.922  [2024-11-19 16:54:39.603420] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118536 ]
00:10:46.922  [2024-11-19 16:54:39.756208] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:47.181  [2024-11-19 16:54:39.815309] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:10:47.181   16:54:39	-- accel/accel.sh@21 -- # val=
00:10:47.181   16:54:39	-- accel/accel.sh@22 -- # case "$var" in
00:10:47.181   16:54:39	-- accel/accel.sh@20 -- # IFS=:
00:10:47.181   16:54:39	-- accel/accel.sh@20 -- # read -r var val
00:10:47.181   16:54:39	-- accel/accel.sh@21 -- # val=
00:10:47.181   16:54:39	-- accel/accel.sh@22 -- # case "$var" in
00:10:47.181   16:54:39	-- accel/accel.sh@20 -- # IFS=:
00:10:47.181   16:54:39	-- accel/accel.sh@20 -- # read -r var val
00:10:47.181   16:54:39	-- accel/accel.sh@21 -- # val=0x1
00:10:47.181   16:54:39	-- accel/accel.sh@22 -- # case "$var" in
00:10:47.181   16:54:39	-- accel/accel.sh@20 -- # IFS=:
00:10:47.181   16:54:39	-- accel/accel.sh@20 -- # read -r var val
00:10:47.181   16:54:39	-- accel/accel.sh@21 -- # val=
00:10:47.181   16:54:39	-- accel/accel.sh@22 -- # case "$var" in
00:10:47.181   16:54:39	-- accel/accel.sh@20 -- # IFS=:
00:10:47.181   16:54:39	-- accel/accel.sh@20 -- # read -r var val
00:10:47.181   16:54:39	-- accel/accel.sh@21 -- # val=
00:10:47.181   16:54:39	-- accel/accel.sh@22 -- # case "$var" in
00:10:47.181   16:54:39	-- accel/accel.sh@20 -- # IFS=:
00:10:47.181   16:54:39	-- accel/accel.sh@20 -- # read -r var val
00:10:47.181   16:54:39	-- accel/accel.sh@21 -- # val=dif_generate_copy
00:10:47.181   16:54:39	-- accel/accel.sh@22 -- # case "$var" in
00:10:47.181   16:54:39	-- accel/accel.sh@24 -- # accel_opc=dif_generate_copy
00:10:47.181   16:54:39	-- accel/accel.sh@20 -- # IFS=:
00:10:47.181   16:54:39	-- accel/accel.sh@20 -- # read -r var val
00:10:47.181   16:54:39	-- accel/accel.sh@21 -- # val='4096 bytes'
00:10:47.181   16:54:39	-- accel/accel.sh@22 -- # case "$var" in
00:10:47.181   16:54:39	-- accel/accel.sh@20 -- # IFS=:
00:10:47.181   16:54:39	-- accel/accel.sh@20 -- # read -r var val
00:10:47.181   16:54:39	-- accel/accel.sh@21 -- # val='4096 bytes'
00:10:47.181   16:54:39	-- accel/accel.sh@22 -- # case "$var" in
00:10:47.181   16:54:39	-- accel/accel.sh@20 -- # IFS=:
00:10:47.181   16:54:39	-- accel/accel.sh@20 -- # read -r var val
00:10:47.181   16:54:39	-- accel/accel.sh@21 -- # val=
00:10:47.181   16:54:39	-- accel/accel.sh@22 -- # case "$var" in
00:10:47.181   16:54:39	-- accel/accel.sh@20 -- # IFS=:
00:10:47.181   16:54:39	-- accel/accel.sh@20 -- # read -r var val
00:10:47.181   16:54:39	-- accel/accel.sh@21 -- # val=software
00:10:47.181   16:54:39	-- accel/accel.sh@22 -- # case "$var" in
00:10:47.181   16:54:39	-- accel/accel.sh@23 -- # accel_module=software
00:10:47.181   16:54:39	-- accel/accel.sh@20 -- # IFS=:
00:10:47.181   16:54:39	-- accel/accel.sh@20 -- # read -r var val
00:10:47.181   16:54:39	-- accel/accel.sh@21 -- # val=32
00:10:47.181   16:54:39	-- accel/accel.sh@22 -- # case "$var" in
00:10:47.181   16:54:39	-- accel/accel.sh@20 -- # IFS=:
00:10:47.181   16:54:39	-- accel/accel.sh@20 -- # read -r var val
00:10:47.181   16:54:39	-- accel/accel.sh@21 -- # val=32
00:10:47.181   16:54:39	-- accel/accel.sh@22 -- # case "$var" in
00:10:47.181   16:54:39	-- accel/accel.sh@20 -- # IFS=:
00:10:47.181   16:54:39	-- accel/accel.sh@20 -- # read -r var val
00:10:47.181   16:54:39	-- accel/accel.sh@21 -- # val=1
00:10:47.181   16:54:39	-- accel/accel.sh@22 -- # case "$var" in
00:10:47.181   16:54:39	-- accel/accel.sh@20 -- # IFS=:
00:10:47.181   16:54:39	-- accel/accel.sh@20 -- # read -r var val
00:10:47.181   16:54:39	-- accel/accel.sh@21 -- # val='1 seconds'
00:10:47.181   16:54:39	-- accel/accel.sh@22 -- # case "$var" in
00:10:47.181   16:54:39	-- accel/accel.sh@20 -- # IFS=:
00:10:47.181   16:54:39	-- accel/accel.sh@20 -- # read -r var val
00:10:47.181   16:54:39	-- accel/accel.sh@21 -- # val=No
00:10:47.181   16:54:39	-- accel/accel.sh@22 -- # case "$var" in
00:10:47.181   16:54:39	-- accel/accel.sh@20 -- # IFS=:
00:10:47.181   16:54:39	-- accel/accel.sh@20 -- # read -r var val
00:10:47.181   16:54:39	-- accel/accel.sh@21 -- # val=
00:10:47.181   16:54:39	-- accel/accel.sh@22 -- # case "$var" in
00:10:47.181   16:54:39	-- accel/accel.sh@20 -- # IFS=:
00:10:47.181   16:54:39	-- accel/accel.sh@20 -- # read -r var val
00:10:47.181   16:54:39	-- accel/accel.sh@21 -- # val=
00:10:47.181   16:54:39	-- accel/accel.sh@22 -- # case "$var" in
00:10:47.181   16:54:39	-- accel/accel.sh@20 -- # IFS=:
00:10:47.181   16:54:39	-- accel/accel.sh@20 -- # read -r var val
00:10:48.559   16:54:41	-- accel/accel.sh@21 -- # val=
00:10:48.559   16:54:41	-- accel/accel.sh@22 -- # case "$var" in
00:10:48.559   16:54:41	-- accel/accel.sh@20 -- # IFS=:
00:10:48.559   16:54:41	-- accel/accel.sh@20 -- # read -r var val
00:10:48.559   16:54:41	-- accel/accel.sh@21 -- # val=
00:10:48.559   16:54:41	-- accel/accel.sh@22 -- # case "$var" in
00:10:48.559   16:54:41	-- accel/accel.sh@20 -- # IFS=:
00:10:48.559   16:54:41	-- accel/accel.sh@20 -- # read -r var val
00:10:48.559   16:54:41	-- accel/accel.sh@21 -- # val=
00:10:48.559   16:54:41	-- accel/accel.sh@22 -- # case "$var" in
00:10:48.559   16:54:41	-- accel/accel.sh@20 -- # IFS=:
00:10:48.559   16:54:41	-- accel/accel.sh@20 -- # read -r var val
00:10:48.559   16:54:41	-- accel/accel.sh@21 -- # val=
00:10:48.559   16:54:41	-- accel/accel.sh@22 -- # case "$var" in
00:10:48.559   16:54:41	-- accel/accel.sh@20 -- # IFS=:
00:10:48.559   16:54:41	-- accel/accel.sh@20 -- # read -r var val
00:10:48.559   16:54:41	-- accel/accel.sh@21 -- # val=
00:10:48.559   16:54:41	-- accel/accel.sh@22 -- # case "$var" in
00:10:48.559   16:54:41	-- accel/accel.sh@20 -- # IFS=:
00:10:48.559   16:54:41	-- accel/accel.sh@20 -- # read -r var val
00:10:48.559   16:54:41	-- accel/accel.sh@21 -- # val=
00:10:48.559   16:54:41	-- accel/accel.sh@22 -- # case "$var" in
00:10:48.559   16:54:41	-- accel/accel.sh@20 -- # IFS=:
00:10:48.559   16:54:41	-- accel/accel.sh@20 -- # read -r var val
00:10:48.559   16:54:41	-- accel/accel.sh@28 -- # [[ -n software ]]
00:10:48.559   16:54:41	-- accel/accel.sh@28 -- # [[ -n dif_generate_copy ]]
00:10:48.559   16:54:41	-- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:10:48.559  
00:10:48.559  real	0m3.018s
00:10:48.559  user	0m2.553s
00:10:48.559  sys	0m0.295s
00:10:48.559   16:54:41	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:10:48.559   16:54:41	-- common/autotest_common.sh@10 -- # set +x
00:10:48.559  ************************************
00:10:48.559  END TEST accel_dif_generate_copy
00:10:48.559  ************************************
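
dif_generate_copy fuses the copy with generation: data moves from a plain source buffer into a destination that interleaves data and metadata, written in one pass, and the extra data movement shows up as roughly a quarter lower throughput here than plain dif_generate (105,504/s vs. 139,168/s). The generate_blocks sketch above already has the fused shape (raw bytes in, interleaved bytes out); by contrast, the non-copy variant would rewrite metadata in a buffer that already holds the data, roughly:

    def insert_guards_inplace(buf: bytearray, block: int = 512, md: int = 8) -> None:
        # In-place flavor: the data already sits interleaved in the destination
        # and only the 8-byte DIF after each block is (re)written.
        stride = block + md
        for off in range(0, len(buf), stride):
            blk = bytes(buf[off:off + block])
            struct.pack_into(">HHI", buf, off + block, crc16_t10dif(blk), 0, off // stride)
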
00:10:48.559   16:54:41	-- accel/accel.sh@107 -- # [[ y == y ]]
00:10:48.559   16:54:41	-- accel/accel.sh@108 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib
00:10:48.559   16:54:41	-- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']'
00:10:48.559   16:54:41	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:10:48.559   16:54:41	-- common/autotest_common.sh@10 -- # set +x
00:10:48.559  ************************************
00:10:48.559  START TEST accel_comp
00:10:48.559  ************************************
00:10:48.559   16:54:41	-- common/autotest_common.sh@1114 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib
00:10:48.559   16:54:41	-- accel/accel.sh@16 -- # local accel_opc
00:10:48.559   16:54:41	-- accel/accel.sh@17 -- # local accel_module
00:10:48.559    16:54:41	-- accel/accel.sh@18 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib
00:10:48.559    16:54:41	-- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib
00:10:48.559     16:54:41	-- accel/accel.sh@12 -- # build_accel_config
00:10:48.559     16:54:41	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:10:48.559     16:54:41	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:10:48.559     16:54:41	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:10:48.559     16:54:41	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:10:48.559     16:54:41	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:10:48.559     16:54:41	-- accel/accel.sh@41 -- # local IFS=,
00:10:48.559     16:54:41	-- accel/accel.sh@42 -- # jq -r .
00:10:48.559  [2024-11-19 16:54:41.195125] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:10:48.559  [2024-11-19 16:54:41.195636] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118575 ]
00:10:48.559  [2024-11-19 16:54:41.357423] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:48.559  [2024-11-19 16:54:41.402910] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:10:49.935   16:54:42	-- accel/accel.sh@18 -- # out='Preparing input file...
00:10:49.935  
00:10:49.935  SPDK Configuration:
00:10:49.935  Core mask:      0x1
00:10:49.935  
00:10:49.935  Accel Perf Configuration:
00:10:49.935  Workload Type:  compress
00:10:49.935  Transfer size:  4096 bytes
00:10:49.935  Vector count    1
00:10:49.935  Module:         software
00:10:49.935  File Name:      /home/vagrant/spdk_repo/spdk/test/accel/bib
00:10:49.935  Queue depth:    32
00:10:49.935  Allocate depth: 32
00:10:49.935  # threads/core: 1
00:10:49.935  Run time:       1 seconds
00:10:49.935  Verify:         No
00:10:49.935  
00:10:49.935  Running for 1 seconds...
00:10:49.935  
00:10:49.935  Core,Thread             Transfers        Bandwidth           Failed      Miscompares
00:10:49.935  ------------------------------------------------------------------------------------
00:10:49.935  0,0                       55872/s        232 MiB/s                0                0
00:10:49.935  ====================================================================================
00:10:49.935  Total                     55872/s        218 MiB/s                0                0'
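
The compress workload is the first here to take a real input file: -l points it at test/accel/bib, which accel_perf deflates chunk by chunk (hence the "Preparing input file..." line above). A measurement-loop sketch, with zlib standing in for whatever deflate implementation the software module actually uses, so only the shape of the loop is meant to carry over:

    import time
    import zlib

    def compress_rate(path: str, chunk: int = 4096, seconds: float = 1.0) -> float:
        # Deflate fixed-size chunks of the input file for ~1 second and
        # report MiB/s of input consumed.
        data = open(path, "rb").read()
        chunks = [data[i:i + chunk] for i in range(0, max(len(data) - chunk, 0) + 1, chunk)]
        done = 0
        start = time.perf_counter()
        while time.perf_counter() - start < seconds:
            zlib.compress(chunks[done % len(chunks)], 1)
            done += 1
        return done * chunk / (time.perf_counter() - start) / (1 << 20)

    # e.g. compress_rate("/home/vagrant/spdk_repo/spdk/test/accel/bib")
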
00:10:49.935   16:54:42	-- accel/accel.sh@20 -- # IFS=:
00:10:49.935   16:54:42	-- accel/accel.sh@20 -- # read -r var val
00:10:49.935    16:54:42	-- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib
00:10:49.935    16:54:42	-- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib
00:10:49.935     16:54:42	-- accel/accel.sh@12 -- # build_accel_config
00:10:49.935     16:54:42	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:10:49.935     16:54:42	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:10:49.935     16:54:42	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:10:49.935     16:54:42	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:10:49.935     16:54:42	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:10:49.936     16:54:42	-- accel/accel.sh@41 -- # local IFS=,
00:10:49.936     16:54:42	-- accel/accel.sh@42 -- # jq -r .
00:10:49.936  [2024-11-19 16:54:42.688142] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:10:49.936  [2024-11-19 16:54:42.688610] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118604 ]
00:10:50.193  [2024-11-19 16:54:42.845087] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:50.193  [2024-11-19 16:54:42.907885] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:10:50.193   16:54:42	-- accel/accel.sh@21 -- # val=
00:10:50.193   16:54:42	-- accel/accel.sh@22 -- # case "$var" in
00:10:50.193   16:54:42	-- accel/accel.sh@20 -- # IFS=:
00:10:50.193   16:54:42	-- accel/accel.sh@20 -- # read -r var val
00:10:50.193   16:54:42	-- accel/accel.sh@21 -- # val=
00:10:50.193   16:54:42	-- accel/accel.sh@22 -- # case "$var" in
00:10:50.193   16:54:42	-- accel/accel.sh@20 -- # IFS=:
00:10:50.193   16:54:42	-- accel/accel.sh@20 -- # read -r var val
00:10:50.193   16:54:42	-- accel/accel.sh@21 -- # val=
00:10:50.193   16:54:42	-- accel/accel.sh@22 -- # case "$var" in
00:10:50.193   16:54:42	-- accel/accel.sh@20 -- # IFS=:
00:10:50.194   16:54:42	-- accel/accel.sh@20 -- # read -r var val
00:10:50.194   16:54:42	-- accel/accel.sh@21 -- # val=0x1
00:10:50.194   16:54:42	-- accel/accel.sh@22 -- # case "$var" in
00:10:50.194   16:54:42	-- accel/accel.sh@20 -- # IFS=:
00:10:50.194   16:54:42	-- accel/accel.sh@20 -- # read -r var val
00:10:50.194   16:54:42	-- accel/accel.sh@21 -- # val=
00:10:50.194   16:54:42	-- accel/accel.sh@22 -- # case "$var" in
00:10:50.194   16:54:42	-- accel/accel.sh@20 -- # IFS=:
00:10:50.194   16:54:42	-- accel/accel.sh@20 -- # read -r var val
00:10:50.194   16:54:42	-- accel/accel.sh@21 -- # val=
00:10:50.194   16:54:42	-- accel/accel.sh@22 -- # case "$var" in
00:10:50.194   16:54:42	-- accel/accel.sh@20 -- # IFS=:
00:10:50.194   16:54:42	-- accel/accel.sh@20 -- # read -r var val
00:10:50.194   16:54:42	-- accel/accel.sh@21 -- # val=compress
00:10:50.194   16:54:42	-- accel/accel.sh@22 -- # case "$var" in
00:10:50.194   16:54:42	-- accel/accel.sh@24 -- # accel_opc=compress
00:10:50.194   16:54:42	-- accel/accel.sh@20 -- # IFS=:
00:10:50.194   16:54:42	-- accel/accel.sh@20 -- # read -r var val
00:10:50.194   16:54:42	-- accel/accel.sh@21 -- # val='4096 bytes'
00:10:50.194   16:54:42	-- accel/accel.sh@22 -- # case "$var" in
00:10:50.194   16:54:42	-- accel/accel.sh@20 -- # IFS=:
00:10:50.194   16:54:42	-- accel/accel.sh@20 -- # read -r var val
00:10:50.194   16:54:42	-- accel/accel.sh@21 -- # val=
00:10:50.194   16:54:42	-- accel/accel.sh@22 -- # case "$var" in
00:10:50.194   16:54:42	-- accel/accel.sh@20 -- # IFS=:
00:10:50.194   16:54:42	-- accel/accel.sh@20 -- # read -r var val
00:10:50.194   16:54:42	-- accel/accel.sh@21 -- # val=software
00:10:50.194   16:54:42	-- accel/accel.sh@22 -- # case "$var" in
00:10:50.194   16:54:42	-- accel/accel.sh@23 -- # accel_module=software
00:10:50.194   16:54:42	-- accel/accel.sh@20 -- # IFS=:
00:10:50.194   16:54:42	-- accel/accel.sh@20 -- # read -r var val
00:10:50.194   16:54:42	-- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib
00:10:50.194   16:54:42	-- accel/accel.sh@22 -- # case "$var" in
00:10:50.194   16:54:42	-- accel/accel.sh@20 -- # IFS=:
00:10:50.194   16:54:42	-- accel/accel.sh@20 -- # read -r var val
00:10:50.194   16:54:42	-- accel/accel.sh@21 -- # val=32
00:10:50.194   16:54:42	-- accel/accel.sh@22 -- # case "$var" in
00:10:50.194   16:54:42	-- accel/accel.sh@20 -- # IFS=:
00:10:50.194   16:54:42	-- accel/accel.sh@20 -- # read -r var val
00:10:50.194   16:54:42	-- accel/accel.sh@21 -- # val=32
00:10:50.194   16:54:42	-- accel/accel.sh@22 -- # case "$var" in
00:10:50.194   16:54:42	-- accel/accel.sh@20 -- # IFS=:
00:10:50.194   16:54:42	-- accel/accel.sh@20 -- # read -r var val
00:10:50.194   16:54:42	-- accel/accel.sh@21 -- # val=1
00:10:50.194   16:54:42	-- accel/accel.sh@22 -- # case "$var" in
00:10:50.194   16:54:42	-- accel/accel.sh@20 -- # IFS=:
00:10:50.194   16:54:42	-- accel/accel.sh@20 -- # read -r var val
00:10:50.194   16:54:42	-- accel/accel.sh@21 -- # val='1 seconds'
00:10:50.194   16:54:42	-- accel/accel.sh@22 -- # case "$var" in
00:10:50.194   16:54:42	-- accel/accel.sh@20 -- # IFS=:
00:10:50.194   16:54:42	-- accel/accel.sh@20 -- # read -r var val
00:10:50.194   16:54:42	-- accel/accel.sh@21 -- # val=No
00:10:50.194   16:54:42	-- accel/accel.sh@22 -- # case "$var" in
00:10:50.194   16:54:42	-- accel/accel.sh@20 -- # IFS=:
00:10:50.194   16:54:42	-- accel/accel.sh@20 -- # read -r var val
00:10:50.194   16:54:42	-- accel/accel.sh@21 -- # val=
00:10:50.194   16:54:42	-- accel/accel.sh@22 -- # case "$var" in
00:10:50.194   16:54:42	-- accel/accel.sh@20 -- # IFS=:
00:10:50.194   16:54:42	-- accel/accel.sh@20 -- # read -r var val
00:10:50.194   16:54:42	-- accel/accel.sh@21 -- # val=
00:10:50.194   16:54:42	-- accel/accel.sh@22 -- # case "$var" in
00:10:50.194   16:54:42	-- accel/accel.sh@20 -- # IFS=:
00:10:50.194   16:54:42	-- accel/accel.sh@20 -- # read -r var val
00:10:51.570   16:54:44	-- accel/accel.sh@21 -- # val=
00:10:51.570   16:54:44	-- accel/accel.sh@22 -- # case "$var" in
00:10:51.570   16:54:44	-- accel/accel.sh@20 -- # IFS=:
00:10:51.571   16:54:44	-- accel/accel.sh@20 -- # read -r var val
00:10:51.571   16:54:44	-- accel/accel.sh@21 -- # val=
00:10:51.571   16:54:44	-- accel/accel.sh@22 -- # case "$var" in
00:10:51.571   16:54:44	-- accel/accel.sh@20 -- # IFS=:
00:10:51.571   16:54:44	-- accel/accel.sh@20 -- # read -r var val
00:10:51.571   16:54:44	-- accel/accel.sh@21 -- # val=
00:10:51.571   16:54:44	-- accel/accel.sh@22 -- # case "$var" in
00:10:51.571   16:54:44	-- accel/accel.sh@20 -- # IFS=:
00:10:51.571   16:54:44	-- accel/accel.sh@20 -- # read -r var val
00:10:51.571   16:54:44	-- accel/accel.sh@21 -- # val=
00:10:51.571   16:54:44	-- accel/accel.sh@22 -- # case "$var" in
00:10:51.571   16:54:44	-- accel/accel.sh@20 -- # IFS=:
00:10:51.571   16:54:44	-- accel/accel.sh@20 -- # read -r var val
00:10:51.571   16:54:44	-- accel/accel.sh@21 -- # val=
00:10:51.571   16:54:44	-- accel/accel.sh@22 -- # case "$var" in
00:10:51.571   16:54:44	-- accel/accel.sh@20 -- # IFS=:
00:10:51.571   16:54:44	-- accel/accel.sh@20 -- # read -r var val
00:10:51.571   16:54:44	-- accel/accel.sh@21 -- # val=
00:10:51.571   16:54:44	-- accel/accel.sh@22 -- # case "$var" in
00:10:51.571   16:54:44	-- accel/accel.sh@20 -- # IFS=:
00:10:51.571   16:54:44	-- accel/accel.sh@20 -- # read -r var val
00:10:51.571   16:54:44	-- accel/accel.sh@28 -- # [[ -n software ]]
00:10:51.571   16:54:44	-- accel/accel.sh@28 -- # [[ -n compress ]]
00:10:51.571   16:54:44	-- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:10:51.571  
00:10:51.571  real	0m3.022s
00:10:51.571  user	0m2.518s
00:10:51.571  sys	0m0.337s
00:10:51.571   16:54:44	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:10:51.571   16:54:44	-- common/autotest_common.sh@10 -- # set +x
00:10:51.571  ************************************
00:10:51.571  END TEST accel_comp
00:10:51.571  ************************************
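[Note: stripped of the harness, the accel_comp test above is a single accel_perf invocation. A minimal by-hand rerun, assuming the same repo layout as this run (the harness additionally feeds a JSON accel config via -c /dev/fd/62, omitted here), would be:]

    # Sketch: rerun the software-module compress benchmark directly
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf \
        -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib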
00:10:51.571   16:54:44	-- accel/accel.sh@109 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y
00:10:51.571   16:54:44	-- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']'
00:10:51.571   16:54:44	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:10:51.571   16:54:44	-- common/autotest_common.sh@10 -- # set +x
00:10:51.571  ************************************
00:10:51.571  START TEST accel_decomp
00:10:51.571  ************************************
00:10:51.571   16:54:44	-- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y
00:10:51.571   16:54:44	-- accel/accel.sh@16 -- # local accel_opc
00:10:51.571   16:54:44	-- accel/accel.sh@17 -- # local accel_module
00:10:51.571    16:54:44	-- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y
00:10:51.571    16:54:44	-- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y
00:10:51.571     16:54:44	-- accel/accel.sh@12 -- # build_accel_config
00:10:51.571     16:54:44	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:10:51.571     16:54:44	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:10:51.571     16:54:44	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:10:51.571     16:54:44	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:10:51.571     16:54:44	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:10:51.571     16:54:44	-- accel/accel.sh@41 -- # local IFS=,
00:10:51.571     16:54:44	-- accel/accel.sh@42 -- # jq -r .
00:10:51.571  [2024-11-19 16:54:44.277798] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:10:51.571  [2024-11-19 16:54:44.278244] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118637 ]
00:10:51.829  [2024-11-19 16:54:44.431043] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:51.829  [2024-11-19 16:54:44.476743] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:10:53.206   16:54:45	-- accel/accel.sh@18 -- # out='Preparing input file...
00:10:53.206  
00:10:53.206  SPDK Configuration:
00:10:53.206  Core mask:      0x1
00:10:53.206  
00:10:53.206  Accel Perf Configuration:
00:10:53.206  Workload Type:  decompress
00:10:53.206  Transfer size:  4096 bytes
00:10:53.206  Vector count    1
00:10:53.206  Module:         software
00:10:53.206  File Name:      /home/vagrant/spdk_repo/spdk/test/accel/bib
00:10:53.206  Queue depth:    32
00:10:53.206  Allocate depth: 32
00:10:53.207  # threads/core: 1
00:10:53.207  Run time:       1 seconds
00:10:53.207  Verify:         Yes
00:10:53.207  
00:10:53.207  Running for 1 seconds...
00:10:53.207  
00:10:53.207  Core,Thread             Transfers        Bandwidth           Failed      Miscompares
00:10:53.207  ------------------------------------------------------------------------------------
00:10:53.207  0,0                       62560/s        115 MiB/s                0                0
00:10:53.207  ====================================================================================
00:10:53.207  Total                     62560/s        244 MiB/s                0                0'
00:10:53.207   16:54:45	-- accel/accel.sh@20 -- # IFS=:
00:10:53.207   16:54:45	-- accel/accel.sh@20 -- # read -r var val
00:10:53.207    16:54:45	-- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y
00:10:53.207    16:54:45	-- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y
00:10:53.207     16:54:45	-- accel/accel.sh@12 -- # build_accel_config
00:10:53.207     16:54:45	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:10:53.207     16:54:45	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:10:53.207     16:54:45	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:10:53.207     16:54:45	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:10:53.207     16:54:45	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:10:53.207     16:54:45	-- accel/accel.sh@41 -- # local IFS=,
00:10:53.207     16:54:45	-- accel/accel.sh@42 -- # jq -r .
00:10:53.207  [2024-11-19 16:54:45.770353] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:10:53.207  [2024-11-19 16:54:45.770816] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118672 ]
00:10:53.207  [2024-11-19 16:54:45.928636] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:53.207  [2024-11-19 16:54:45.990827] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:10:53.207   16:54:46	-- accel/accel.sh@21 -- # val=
00:10:53.207   16:54:46	-- accel/accel.sh@22 -- # case "$var" in
00:10:53.207   16:54:46	-- accel/accel.sh@20 -- # IFS=:
00:10:53.207   16:54:46	-- accel/accel.sh@20 -- # read -r var val
00:10:53.207   16:54:46	-- accel/accel.sh@21 -- # val=
00:10:53.207   16:54:46	-- accel/accel.sh@22 -- # case "$var" in
00:10:53.207   16:54:46	-- accel/accel.sh@20 -- # IFS=:
00:10:53.207   16:54:46	-- accel/accel.sh@20 -- # read -r var val
00:10:53.207   16:54:46	-- accel/accel.sh@21 -- # val=
00:10:53.207   16:54:46	-- accel/accel.sh@22 -- # case "$var" in
00:10:53.207   16:54:46	-- accel/accel.sh@20 -- # IFS=:
00:10:53.207   16:54:46	-- accel/accel.sh@20 -- # read -r var val
00:10:53.207   16:54:46	-- accel/accel.sh@21 -- # val=0x1
00:10:53.207   16:54:46	-- accel/accel.sh@22 -- # case "$var" in
00:10:53.207   16:54:46	-- accel/accel.sh@20 -- # IFS=:
00:10:53.207   16:54:46	-- accel/accel.sh@20 -- # read -r var val
00:10:53.207   16:54:46	-- accel/accel.sh@21 -- # val=
00:10:53.207   16:54:46	-- accel/accel.sh@22 -- # case "$var" in
00:10:53.207   16:54:46	-- accel/accel.sh@20 -- # IFS=:
00:10:53.207   16:54:46	-- accel/accel.sh@20 -- # read -r var val
00:10:53.207   16:54:46	-- accel/accel.sh@21 -- # val=
00:10:53.207   16:54:46	-- accel/accel.sh@22 -- # case "$var" in
00:10:53.207   16:54:46	-- accel/accel.sh@20 -- # IFS=:
00:10:53.207   16:54:46	-- accel/accel.sh@20 -- # read -r var val
00:10:53.207   16:54:46	-- accel/accel.sh@21 -- # val=decompress
00:10:53.207   16:54:46	-- accel/accel.sh@22 -- # case "$var" in
00:10:53.207   16:54:46	-- accel/accel.sh@24 -- # accel_opc=decompress
00:10:53.207   16:54:46	-- accel/accel.sh@20 -- # IFS=:
00:10:53.207   16:54:46	-- accel/accel.sh@20 -- # read -r var val
00:10:53.207   16:54:46	-- accel/accel.sh@21 -- # val='4096 bytes'
00:10:53.207   16:54:46	-- accel/accel.sh@22 -- # case "$var" in
00:10:53.207   16:54:46	-- accel/accel.sh@20 -- # IFS=:
00:10:53.207   16:54:46	-- accel/accel.sh@20 -- # read -r var val
00:10:53.207   16:54:46	-- accel/accel.sh@21 -- # val=
00:10:53.207   16:54:46	-- accel/accel.sh@22 -- # case "$var" in
00:10:53.207   16:54:46	-- accel/accel.sh@20 -- # IFS=:
00:10:53.207   16:54:46	-- accel/accel.sh@20 -- # read -r var val
00:10:53.207   16:54:46	-- accel/accel.sh@21 -- # val=software
00:10:53.207   16:54:46	-- accel/accel.sh@22 -- # case "$var" in
00:10:53.207   16:54:46	-- accel/accel.sh@23 -- # accel_module=software
00:10:53.207   16:54:46	-- accel/accel.sh@20 -- # IFS=:
00:10:53.207   16:54:46	-- accel/accel.sh@20 -- # read -r var val
00:10:53.207   16:54:46	-- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib
00:10:53.207   16:54:46	-- accel/accel.sh@22 -- # case "$var" in
00:10:53.207   16:54:46	-- accel/accel.sh@20 -- # IFS=:
00:10:53.207   16:54:46	-- accel/accel.sh@20 -- # read -r var val
00:10:53.207   16:54:46	-- accel/accel.sh@21 -- # val=32
00:10:53.207   16:54:46	-- accel/accel.sh@22 -- # case "$var" in
00:10:53.207   16:54:46	-- accel/accel.sh@20 -- # IFS=:
00:10:53.207   16:54:46	-- accel/accel.sh@20 -- # read -r var val
00:10:53.207   16:54:46	-- accel/accel.sh@21 -- # val=32
00:10:53.207   16:54:46	-- accel/accel.sh@22 -- # case "$var" in
00:10:53.207   16:54:46	-- accel/accel.sh@20 -- # IFS=:
00:10:53.467   16:54:46	-- accel/accel.sh@20 -- # read -r var val
00:10:53.467   16:54:46	-- accel/accel.sh@21 -- # val=1
00:10:53.467   16:54:46	-- accel/accel.sh@22 -- # case "$var" in
00:10:53.467   16:54:46	-- accel/accel.sh@20 -- # IFS=:
00:10:53.467   16:54:46	-- accel/accel.sh@20 -- # read -r var val
00:10:53.467   16:54:46	-- accel/accel.sh@21 -- # val='1 seconds'
00:10:53.467   16:54:46	-- accel/accel.sh@22 -- # case "$var" in
00:10:53.467   16:54:46	-- accel/accel.sh@20 -- # IFS=:
00:10:53.467   16:54:46	-- accel/accel.sh@20 -- # read -r var val
00:10:53.467   16:54:46	-- accel/accel.sh@21 -- # val=Yes
00:10:53.467   16:54:46	-- accel/accel.sh@22 -- # case "$var" in
00:10:53.467   16:54:46	-- accel/accel.sh@20 -- # IFS=:
00:10:53.467   16:54:46	-- accel/accel.sh@20 -- # read -r var val
00:10:53.467   16:54:46	-- accel/accel.sh@21 -- # val=
00:10:53.467   16:54:46	-- accel/accel.sh@22 -- # case "$var" in
00:10:53.467   16:54:46	-- accel/accel.sh@20 -- # IFS=:
00:10:53.467   16:54:46	-- accel/accel.sh@20 -- # read -r var val
00:10:53.467   16:54:46	-- accel/accel.sh@21 -- # val=
00:10:53.467   16:54:46	-- accel/accel.sh@22 -- # case "$var" in
00:10:53.467   16:54:46	-- accel/accel.sh@20 -- # IFS=:
00:10:53.467   16:54:46	-- accel/accel.sh@20 -- # read -r var val
00:10:54.404   16:54:47	-- accel/accel.sh@21 -- # val=
00:10:54.404   16:54:47	-- accel/accel.sh@22 -- # case "$var" in
00:10:54.404   16:54:47	-- accel/accel.sh@20 -- # IFS=:
00:10:54.404   16:54:47	-- accel/accel.sh@20 -- # read -r var val
00:10:54.404   16:54:47	-- accel/accel.sh@21 -- # val=
00:10:54.404   16:54:47	-- accel/accel.sh@22 -- # case "$var" in
00:10:54.404   16:54:47	-- accel/accel.sh@20 -- # IFS=:
00:10:54.404   16:54:47	-- accel/accel.sh@20 -- # read -r var val
00:10:54.404   16:54:47	-- accel/accel.sh@21 -- # val=
00:10:54.404   16:54:47	-- accel/accel.sh@22 -- # case "$var" in
00:10:54.404   16:54:47	-- accel/accel.sh@20 -- # IFS=:
00:10:54.404   16:54:47	-- accel/accel.sh@20 -- # read -r var val
00:10:54.404   16:54:47	-- accel/accel.sh@21 -- # val=
00:10:54.404   16:54:47	-- accel/accel.sh@22 -- # case "$var" in
00:10:54.404   16:54:47	-- accel/accel.sh@20 -- # IFS=:
00:10:54.404   16:54:47	-- accel/accel.sh@20 -- # read -r var val
00:10:54.404   16:54:47	-- accel/accel.sh@21 -- # val=
00:10:54.404   16:54:47	-- accel/accel.sh@22 -- # case "$var" in
00:10:54.404   16:54:47	-- accel/accel.sh@20 -- # IFS=:
00:10:54.404   16:54:47	-- accel/accel.sh@20 -- # read -r var val
00:10:54.404   16:54:47	-- accel/accel.sh@21 -- # val=
00:10:54.404   16:54:47	-- accel/accel.sh@22 -- # case "$var" in
00:10:54.404   16:54:47	-- accel/accel.sh@20 -- # IFS=:
00:10:54.404   16:54:47	-- accel/accel.sh@20 -- # read -r var val
00:10:54.404   16:54:47	-- accel/accel.sh@28 -- # [[ -n software ]]
00:10:54.404   16:54:47	-- accel/accel.sh@28 -- # [[ -n decompress ]]
00:10:54.404   16:54:47	-- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:10:54.404  
00:10:54.404  real	0m3.018s
00:10:54.404  user	0m2.531s
00:10:54.404  sys	0m0.333s
00:10:54.404   16:54:47	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:10:54.404   16:54:47	-- common/autotest_common.sh@10 -- # set +x
00:10:54.404  ************************************
00:10:54.404  END TEST accel_decomp
00:10:54.404  ************************************
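[Note: comparing the two runs, the only new flag in accel_decomp is -y, which matches the configuration dump flipping from Verify: No to Verify: Yes; with verification on, the Failed and Miscompares columns become meaningful. A by-hand equivalent under the same layout assumption:]

    # Sketch: decompress with data verification enabled (-y)
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf \
        -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y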
00:10:54.663   16:54:47	-- accel/accel.sh@110 -- # run_test accel_decomp_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0
00:10:54.663   16:54:47	-- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']'
00:10:54.663   16:54:47	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:10:54.663   16:54:47	-- common/autotest_common.sh@10 -- # set +x
00:10:54.663  ************************************
00:10:54.663  START TEST accel_decomp_full
00:10:54.663  ************************************
00:10:54.663   16:54:47	-- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0
00:10:54.663   16:54:47	-- accel/accel.sh@16 -- # local accel_opc
00:10:54.663   16:54:47	-- accel/accel.sh@17 -- # local accel_module
00:10:54.663    16:54:47	-- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0
00:10:54.663    16:54:47	-- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0
00:10:54.663     16:54:47	-- accel/accel.sh@12 -- # build_accel_config
00:10:54.663     16:54:47	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:10:54.663     16:54:47	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:10:54.663     16:54:47	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:10:54.663     16:54:47	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:10:54.663     16:54:47	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:10:54.663     16:54:47	-- accel/accel.sh@41 -- # local IFS=,
00:10:54.663     16:54:47	-- accel/accel.sh@42 -- # jq -r .
00:10:54.663  [2024-11-19 16:54:47.368126] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:10:54.663  [2024-11-19 16:54:47.368557] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118705 ]
00:10:54.922  [2024-11-19 16:54:47.527960] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:54.922  [2024-11-19 16:54:47.617462] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:10:56.299   16:54:49	-- accel/accel.sh@18 -- # out='Preparing input file...
00:10:56.299  
00:10:56.299  SPDK Configuration:
00:10:56.299  Core mask:      0x1
00:10:56.299  
00:10:56.299  Accel Perf Configuration:
00:10:56.299  Workload Type:  decompress
00:10:56.299  Transfer size:  111250 bytes
00:10:56.299  Vector count    1
00:10:56.299  Module:         software
00:10:56.299  File Name:      /home/vagrant/spdk_repo/spdk/test/accel/bib
00:10:56.299  Queue depth:    32
00:10:56.299  Allocate depth: 32
00:10:56.299  # threads/core: 1
00:10:56.299  Run time:       1 seconds
00:10:56.299  Verify:         Yes
00:10:56.299  
00:10:56.299  Running for 1 seconds...
00:10:56.299  
00:10:56.299  Core,Thread             Transfers        Bandwidth           Failed      Miscompares
00:10:56.299  ------------------------------------------------------------------------------------
00:10:56.299  0,0                        3264/s        134 MiB/s                0                0
00:10:56.299  ====================================================================================
00:10:56.299  Total                      3264/s        346 MiB/s                0                0'
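[Note: at 111250 bytes per transfer the Total row again matches the transfers-times-size relation:]

    echo $((3264 * 111250 / 1024 / 1024))   # prints 346, the Total MiB/s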
00:10:56.299   16:54:49	-- accel/accel.sh@20 -- # IFS=:
00:10:56.299   16:54:49	-- accel/accel.sh@20 -- # read -r var val
00:10:56.299    16:54:49	-- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0
00:10:56.299    16:54:49	-- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0
00:10:56.299     16:54:49	-- accel/accel.sh@12 -- # build_accel_config
00:10:56.299     16:54:49	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:10:56.299     16:54:49	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:10:56.299     16:54:49	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:10:56.299     16:54:49	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:10:56.299     16:54:49	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:10:56.299     16:54:49	-- accel/accel.sh@41 -- # local IFS=,
00:10:56.299     16:54:49	-- accel/accel.sh@42 -- # jq -r .
00:10:56.299  [2024-11-19 16:54:49.103551] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:10:56.299  [2024-11-19 16:54:49.103993] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118742 ]
00:10:56.556  [2024-11-19 16:54:49.257658] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:56.556  [2024-11-19 16:54:49.350032] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:10:56.813   16:54:49	-- accel/accel.sh@21 -- # val=
00:10:56.813   16:54:49	-- accel/accel.sh@22 -- # case "$var" in
00:10:56.813   16:54:49	-- accel/accel.sh@20 -- # IFS=:
00:10:56.813   16:54:49	-- accel/accel.sh@20 -- # read -r var val
00:10:56.813   16:54:49	-- accel/accel.sh@21 -- # val=
00:10:56.813   16:54:49	-- accel/accel.sh@22 -- # case "$var" in
00:10:56.813   16:54:49	-- accel/accel.sh@20 -- # IFS=:
00:10:56.813   16:54:49	-- accel/accel.sh@20 -- # read -r var val
00:10:56.813   16:54:49	-- accel/accel.sh@21 -- # val=
00:10:56.813   16:54:49	-- accel/accel.sh@22 -- # case "$var" in
00:10:56.813   16:54:49	-- accel/accel.sh@20 -- # IFS=:
00:10:56.813   16:54:49	-- accel/accel.sh@20 -- # read -r var val
00:10:56.813   16:54:49	-- accel/accel.sh@21 -- # val=0x1
00:10:56.813   16:54:49	-- accel/accel.sh@22 -- # case "$var" in
00:10:56.813   16:54:49	-- accel/accel.sh@20 -- # IFS=:
00:10:56.813   16:54:49	-- accel/accel.sh@20 -- # read -r var val
00:10:56.813   16:54:49	-- accel/accel.sh@21 -- # val=
00:10:56.813   16:54:49	-- accel/accel.sh@22 -- # case "$var" in
00:10:56.813   16:54:49	-- accel/accel.sh@20 -- # IFS=:
00:10:56.813   16:54:49	-- accel/accel.sh@20 -- # read -r var val
00:10:56.813   16:54:49	-- accel/accel.sh@21 -- # val=
00:10:56.813   16:54:49	-- accel/accel.sh@22 -- # case "$var" in
00:10:56.813   16:54:49	-- accel/accel.sh@20 -- # IFS=:
00:10:56.813   16:54:49	-- accel/accel.sh@20 -- # read -r var val
00:10:56.813   16:54:49	-- accel/accel.sh@21 -- # val=decompress
00:10:56.813   16:54:49	-- accel/accel.sh@22 -- # case "$var" in
00:10:56.813   16:54:49	-- accel/accel.sh@24 -- # accel_opc=decompress
00:10:56.813   16:54:49	-- accel/accel.sh@20 -- # IFS=:
00:10:56.813   16:54:49	-- accel/accel.sh@20 -- # read -r var val
00:10:56.813   16:54:49	-- accel/accel.sh@21 -- # val='111250 bytes'
00:10:56.813   16:54:49	-- accel/accel.sh@22 -- # case "$var" in
00:10:56.813   16:54:49	-- accel/accel.sh@20 -- # IFS=:
00:10:56.814   16:54:49	-- accel/accel.sh@20 -- # read -r var val
00:10:56.814   16:54:49	-- accel/accel.sh@21 -- # val=
00:10:56.814   16:54:49	-- accel/accel.sh@22 -- # case "$var" in
00:10:56.814   16:54:49	-- accel/accel.sh@20 -- # IFS=:
00:10:56.814   16:54:49	-- accel/accel.sh@20 -- # read -r var val
00:10:56.814   16:54:49	-- accel/accel.sh@21 -- # val=software
00:10:56.814   16:54:49	-- accel/accel.sh@22 -- # case "$var" in
00:10:56.814   16:54:49	-- accel/accel.sh@23 -- # accel_module=software
00:10:56.814   16:54:49	-- accel/accel.sh@20 -- # IFS=:
00:10:56.814   16:54:49	-- accel/accel.sh@20 -- # read -r var val
00:10:56.814   16:54:49	-- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib
00:10:56.814   16:54:49	-- accel/accel.sh@22 -- # case "$var" in
00:10:56.814   16:54:49	-- accel/accel.sh@20 -- # IFS=:
00:10:56.814   16:54:49	-- accel/accel.sh@20 -- # read -r var val
00:10:56.814   16:54:49	-- accel/accel.sh@21 -- # val=32
00:10:56.814   16:54:49	-- accel/accel.sh@22 -- # case "$var" in
00:10:56.814   16:54:49	-- accel/accel.sh@20 -- # IFS=:
00:10:56.814   16:54:49	-- accel/accel.sh@20 -- # read -r var val
00:10:56.814   16:54:49	-- accel/accel.sh@21 -- # val=32
00:10:56.814   16:54:49	-- accel/accel.sh@22 -- # case "$var" in
00:10:56.814   16:54:49	-- accel/accel.sh@20 -- # IFS=:
00:10:56.814   16:54:49	-- accel/accel.sh@20 -- # read -r var val
00:10:56.814   16:54:49	-- accel/accel.sh@21 -- # val=1
00:10:56.814   16:54:49	-- accel/accel.sh@22 -- # case "$var" in
00:10:56.814   16:54:49	-- accel/accel.sh@20 -- # IFS=:
00:10:56.814   16:54:49	-- accel/accel.sh@20 -- # read -r var val
00:10:56.814   16:54:49	-- accel/accel.sh@21 -- # val='1 seconds'
00:10:56.814   16:54:49	-- accel/accel.sh@22 -- # case "$var" in
00:10:56.814   16:54:49	-- accel/accel.sh@20 -- # IFS=:
00:10:56.814   16:54:49	-- accel/accel.sh@20 -- # read -r var val
00:10:56.814   16:54:49	-- accel/accel.sh@21 -- # val=Yes
00:10:56.814   16:54:49	-- accel/accel.sh@22 -- # case "$var" in
00:10:56.814   16:54:49	-- accel/accel.sh@20 -- # IFS=:
00:10:56.814   16:54:49	-- accel/accel.sh@20 -- # read -r var val
00:10:56.814   16:54:49	-- accel/accel.sh@21 -- # val=
00:10:56.814   16:54:49	-- accel/accel.sh@22 -- # case "$var" in
00:10:56.814   16:54:49	-- accel/accel.sh@20 -- # IFS=:
00:10:56.814   16:54:49	-- accel/accel.sh@20 -- # read -r var val
00:10:56.814   16:54:49	-- accel/accel.sh@21 -- # val=
00:10:56.814   16:54:49	-- accel/accel.sh@22 -- # case "$var" in
00:10:56.814   16:54:49	-- accel/accel.sh@20 -- # IFS=:
00:10:56.814   16:54:49	-- accel/accel.sh@20 -- # read -r var val
00:10:58.192   16:54:50	-- accel/accel.sh@21 -- # val=
00:10:58.192   16:54:50	-- accel/accel.sh@22 -- # case "$var" in
00:10:58.192   16:54:50	-- accel/accel.sh@20 -- # IFS=:
00:10:58.192   16:54:50	-- accel/accel.sh@20 -- # read -r var val
00:10:58.192   16:54:50	-- accel/accel.sh@21 -- # val=
00:10:58.192   16:54:50	-- accel/accel.sh@22 -- # case "$var" in
00:10:58.192   16:54:50	-- accel/accel.sh@20 -- # IFS=:
00:10:58.192   16:54:50	-- accel/accel.sh@20 -- # read -r var val
00:10:58.192   16:54:50	-- accel/accel.sh@21 -- # val=
00:10:58.192   16:54:50	-- accel/accel.sh@22 -- # case "$var" in
00:10:58.192   16:54:50	-- accel/accel.sh@20 -- # IFS=:
00:10:58.192   16:54:50	-- accel/accel.sh@20 -- # read -r var val
00:10:58.192   16:54:50	-- accel/accel.sh@21 -- # val=
00:10:58.192   16:54:50	-- accel/accel.sh@22 -- # case "$var" in
00:10:58.192   16:54:50	-- accel/accel.sh@20 -- # IFS=:
00:10:58.192   16:54:50	-- accel/accel.sh@20 -- # read -r var val
00:10:58.192   16:54:50	-- accel/accel.sh@21 -- # val=
00:10:58.192   16:54:50	-- accel/accel.sh@22 -- # case "$var" in
00:10:58.192   16:54:50	-- accel/accel.sh@20 -- # IFS=:
00:10:58.192   16:54:50	-- accel/accel.sh@20 -- # read -r var val
00:10:58.192   16:54:50	-- accel/accel.sh@21 -- # val=
00:10:58.192   16:54:50	-- accel/accel.sh@22 -- # case "$var" in
00:10:58.192   16:54:50	-- accel/accel.sh@20 -- # IFS=:
00:10:58.192   16:54:50	-- accel/accel.sh@20 -- # read -r var val
00:10:58.192   16:54:50	-- accel/accel.sh@28 -- # [[ -n software ]]
00:10:58.192   16:54:50	-- accel/accel.sh@28 -- # [[ -n decompress ]]
00:10:58.192   16:54:50	-- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:10:58.192  
00:10:58.192  real	0m3.471s
00:10:58.192  user	0m2.871s
00:10:58.192  sys	0m0.427s
00:10:58.192   16:54:50	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:10:58.192   16:54:50	-- common/autotest_common.sh@10 -- # set +x
00:10:58.192  ************************************
00:10:58.192  END TEST accel_decomp_full
00:10:58.192  ************************************
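[Note: the _full variant adds -o 0, and the configuration dump correspondingly switches from 4096-byte to 111250-byte transfers; judging by these two runs, -o appears to control the output buffer size, with 0 selecting full-sized chunks. A by-hand sketch under that assumption:]

    # Sketch: full-buffer decompress (-o 0 gave 111250-byte transfers above)
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf \
        -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0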
00:10:58.192   16:54:50	-- accel/accel.sh@111 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf
00:10:58.192   16:54:50	-- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']'
00:10:58.192   16:54:50	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:10:58.192   16:54:50	-- common/autotest_common.sh@10 -- # set +x
00:10:58.192  ************************************
00:10:58.192  START TEST accel_decomp_mcore
00:10:58.192  ************************************
00:10:58.192   16:54:50	-- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf
00:10:58.192   16:54:50	-- accel/accel.sh@16 -- # local accel_opc
00:10:58.192   16:54:50	-- accel/accel.sh@17 -- # local accel_module
00:10:58.192    16:54:50	-- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf
00:10:58.192    16:54:50	-- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf
00:10:58.192     16:54:50	-- accel/accel.sh@12 -- # build_accel_config
00:10:58.192     16:54:50	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:10:58.192     16:54:50	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:10:58.192     16:54:50	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:10:58.192     16:54:50	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:10:58.192     16:54:50	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:10:58.192     16:54:50	-- accel/accel.sh@41 -- # local IFS=,
00:10:58.192     16:54:50	-- accel/accel.sh@42 -- # jq -r .
00:10:58.192  [2024-11-19 16:54:50.899198] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:10:58.192  [2024-11-19 16:54:50.899453] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118782 ]
00:10:58.452  [2024-11-19 16:54:51.072670] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4
00:10:58.452  [2024-11-19 16:54:51.159699] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:10:58.452  [2024-11-19 16:54:51.159788] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:10:58.452  [2024-11-19 16:54:51.159974] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:10:58.452  [2024-11-19 16:54:51.159971] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:10:59.830   16:54:52	-- accel/accel.sh@18 -- # out='Preparing input file...
00:10:59.830  
00:10:59.830  SPDK Configuration:
00:10:59.830  Core mask:      0xf
00:10:59.830  
00:10:59.830  Accel Perf Configuration:
00:10:59.830  Workload Type:  decompress
00:10:59.830  Transfer size:  4096 bytes
00:10:59.830  Vector count    1
00:10:59.830  Module:         software
00:10:59.830  File Name:      /home/vagrant/spdk_repo/spdk/test/accel/bib
00:10:59.830  Queue depth:    32
00:10:59.830  Allocate depth: 32
00:10:59.830  # threads/core: 1
00:10:59.830  Run time:       1 seconds
00:10:59.830  Verify:         Yes
00:10:59.830  
00:10:59.830  Running for 1 seconds...
00:10:59.830  
00:10:59.830  Core,Thread             Transfers        Bandwidth           Failed      Miscompares
00:10:59.830  ------------------------------------------------------------------------------------
00:10:59.830  0,0                       36960/s         68 MiB/s                0                0
00:10:59.830  3,0                       45760/s         84 MiB/s                0                0
00:10:59.830  2,0                       47328/s         87 MiB/s                0                0
00:10:59.830  1,0                       49152/s         90 MiB/s                0                0
00:10:59.830  ====================================================================================
00:10:59.830  Total                    179200/s        700 MiB/s                0                0'
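[Note: the per-core rows add up to the Total row, and the same Transfers x Transfer size relation holds; a quick check:]

    echo $((36960 + 45760 + 47328 + 49152))   # prints 179200, the Total transfer rate
    echo $((179200 * 4096 / 1024 / 1024))     # prints 700, the Total MiB/s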
00:10:59.830   16:54:52	-- accel/accel.sh@20 -- # IFS=:
00:10:59.830   16:54:52	-- accel/accel.sh@20 -- # read -r var val
00:10:59.830    16:54:52	-- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf
00:10:59.830    16:54:52	-- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf
00:10:59.830     16:54:52	-- accel/accel.sh@12 -- # build_accel_config
00:10:59.830     16:54:52	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:10:59.830     16:54:52	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:10:59.830     16:54:52	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:10:59.830     16:54:52	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:10:59.830     16:54:52	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:10:59.830     16:54:52	-- accel/accel.sh@41 -- # local IFS=,
00:10:59.830     16:54:52	-- accel/accel.sh@42 -- # jq -r .
00:10:59.830  [2024-11-19 16:54:52.631544] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:10:59.830  [2024-11-19 16:54:52.631812] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118813 ]
00:11:00.089  [2024-11-19 16:54:52.805190] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4
00:11:00.089  [2024-11-19 16:54:52.927782] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:11:00.089  [2024-11-19 16:54:52.927997] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:11:00.089  [2024-11-19 16:54:52.928176] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:11:00.089  [2024-11-19 16:54:52.928181] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:11:00.349   16:54:53	-- accel/accel.sh@21 -- # val=
00:11:00.349   16:54:53	-- accel/accel.sh@22 -- # case "$var" in
00:11:00.349   16:54:53	-- accel/accel.sh@20 -- # IFS=:
00:11:00.349   16:54:53	-- accel/accel.sh@20 -- # read -r var val
00:11:00.349   16:54:53	-- accel/accel.sh@21 -- # val=
00:11:00.349   16:54:53	-- accel/accel.sh@22 -- # case "$var" in
00:11:00.349   16:54:53	-- accel/accel.sh@20 -- # IFS=:
00:11:00.349   16:54:53	-- accel/accel.sh@20 -- # read -r var val
00:11:00.349   16:54:53	-- accel/accel.sh@21 -- # val=
00:11:00.349   16:54:53	-- accel/accel.sh@22 -- # case "$var" in
00:11:00.349   16:54:53	-- accel/accel.sh@20 -- # IFS=:
00:11:00.349   16:54:53	-- accel/accel.sh@20 -- # read -r var val
00:11:00.349   16:54:53	-- accel/accel.sh@21 -- # val=0xf
00:11:00.349   16:54:53	-- accel/accel.sh@22 -- # case "$var" in
00:11:00.349   16:54:53	-- accel/accel.sh@20 -- # IFS=:
00:11:00.349   16:54:53	-- accel/accel.sh@20 -- # read -r var val
00:11:00.349   16:54:53	-- accel/accel.sh@21 -- # val=
00:11:00.349   16:54:53	-- accel/accel.sh@22 -- # case "$var" in
00:11:00.349   16:54:53	-- accel/accel.sh@20 -- # IFS=:
00:11:00.349   16:54:53	-- accel/accel.sh@20 -- # read -r var val
00:11:00.349   16:54:53	-- accel/accel.sh@21 -- # val=
00:11:00.349   16:54:53	-- accel/accel.sh@22 -- # case "$var" in
00:11:00.349   16:54:53	-- accel/accel.sh@20 -- # IFS=:
00:11:00.349   16:54:53	-- accel/accel.sh@20 -- # read -r var val
00:11:00.349   16:54:53	-- accel/accel.sh@21 -- # val=decompress
00:11:00.349   16:54:53	-- accel/accel.sh@22 -- # case "$var" in
00:11:00.349   16:54:53	-- accel/accel.sh@24 -- # accel_opc=decompress
00:11:00.349   16:54:53	-- accel/accel.sh@20 -- # IFS=:
00:11:00.349   16:54:53	-- accel/accel.sh@20 -- # read -r var val
00:11:00.349   16:54:53	-- accel/accel.sh@21 -- # val='4096 bytes'
00:11:00.349   16:54:53	-- accel/accel.sh@22 -- # case "$var" in
00:11:00.349   16:54:53	-- accel/accel.sh@20 -- # IFS=:
00:11:00.349   16:54:53	-- accel/accel.sh@20 -- # read -r var val
00:11:00.349   16:54:53	-- accel/accel.sh@21 -- # val=
00:11:00.349   16:54:53	-- accel/accel.sh@22 -- # case "$var" in
00:11:00.349   16:54:53	-- accel/accel.sh@20 -- # IFS=:
00:11:00.349   16:54:53	-- accel/accel.sh@20 -- # read -r var val
00:11:00.349   16:54:53	-- accel/accel.sh@21 -- # val=software
00:11:00.349   16:54:53	-- accel/accel.sh@22 -- # case "$var" in
00:11:00.349   16:54:53	-- accel/accel.sh@23 -- # accel_module=software
00:11:00.349   16:54:53	-- accel/accel.sh@20 -- # IFS=:
00:11:00.349   16:54:53	-- accel/accel.sh@20 -- # read -r var val
00:11:00.349   16:54:53	-- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib
00:11:00.349   16:54:53	-- accel/accel.sh@22 -- # case "$var" in
00:11:00.349   16:54:53	-- accel/accel.sh@20 -- # IFS=:
00:11:00.349   16:54:53	-- accel/accel.sh@20 -- # read -r var val
00:11:00.349   16:54:53	-- accel/accel.sh@21 -- # val=32
00:11:00.349   16:54:53	-- accel/accel.sh@22 -- # case "$var" in
00:11:00.349   16:54:53	-- accel/accel.sh@20 -- # IFS=:
00:11:00.349   16:54:53	-- accel/accel.sh@20 -- # read -r var val
00:11:00.349   16:54:53	-- accel/accel.sh@21 -- # val=32
00:11:00.349   16:54:53	-- accel/accel.sh@22 -- # case "$var" in
00:11:00.349   16:54:53	-- accel/accel.sh@20 -- # IFS=:
00:11:00.349   16:54:53	-- accel/accel.sh@20 -- # read -r var val
00:11:00.349   16:54:53	-- accel/accel.sh@21 -- # val=1
00:11:00.349   16:54:53	-- accel/accel.sh@22 -- # case "$var" in
00:11:00.349   16:54:53	-- accel/accel.sh@20 -- # IFS=:
00:11:00.349   16:54:53	-- accel/accel.sh@20 -- # read -r var val
00:11:00.349   16:54:53	-- accel/accel.sh@21 -- # val='1 seconds'
00:11:00.349   16:54:53	-- accel/accel.sh@22 -- # case "$var" in
00:11:00.349   16:54:53	-- accel/accel.sh@20 -- # IFS=:
00:11:00.349   16:54:53	-- accel/accel.sh@20 -- # read -r var val
00:11:00.349   16:54:53	-- accel/accel.sh@21 -- # val=Yes
00:11:00.349   16:54:53	-- accel/accel.sh@22 -- # case "$var" in
00:11:00.349   16:54:53	-- accel/accel.sh@20 -- # IFS=:
00:11:00.349   16:54:53	-- accel/accel.sh@20 -- # read -r var val
00:11:00.349   16:54:53	-- accel/accel.sh@21 -- # val=
00:11:00.349   16:54:53	-- accel/accel.sh@22 -- # case "$var" in
00:11:00.349   16:54:53	-- accel/accel.sh@20 -- # IFS=:
00:11:00.349   16:54:53	-- accel/accel.sh@20 -- # read -r var val
00:11:00.349   16:54:53	-- accel/accel.sh@21 -- # val=
00:11:00.349   16:54:53	-- accel/accel.sh@22 -- # case "$var" in
00:11:00.349   16:54:53	-- accel/accel.sh@20 -- # IFS=:
00:11:00.349   16:54:53	-- accel/accel.sh@20 -- # read -r var val
00:11:01.726   16:54:54	-- accel/accel.sh@21 -- # val=
00:11:01.726   16:54:54	-- accel/accel.sh@22 -- # case "$var" in
00:11:01.726   16:54:54	-- accel/accel.sh@20 -- # IFS=:
00:11:01.726   16:54:54	-- accel/accel.sh@20 -- # read -r var val
00:11:01.726   16:54:54	-- accel/accel.sh@21 -- # val=
00:11:01.726   16:54:54	-- accel/accel.sh@22 -- # case "$var" in
00:11:01.726   16:54:54	-- accel/accel.sh@20 -- # IFS=:
00:11:01.726   16:54:54	-- accel/accel.sh@20 -- # read -r var val
00:11:01.726   16:54:54	-- accel/accel.sh@21 -- # val=
00:11:01.726   16:54:54	-- accel/accel.sh@22 -- # case "$var" in
00:11:01.726   16:54:54	-- accel/accel.sh@20 -- # IFS=:
00:11:01.726   16:54:54	-- accel/accel.sh@20 -- # read -r var val
00:11:01.726   16:54:54	-- accel/accel.sh@21 -- # val=
00:11:01.726   16:54:54	-- accel/accel.sh@22 -- # case "$var" in
00:11:01.726   16:54:54	-- accel/accel.sh@20 -- # IFS=:
00:11:01.726   16:54:54	-- accel/accel.sh@20 -- # read -r var val
00:11:01.726   16:54:54	-- accel/accel.sh@21 -- # val=
00:11:01.726   16:54:54	-- accel/accel.sh@22 -- # case "$var" in
00:11:01.726   16:54:54	-- accel/accel.sh@20 -- # IFS=:
00:11:01.726   16:54:54	-- accel/accel.sh@20 -- # read -r var val
00:11:01.726   16:54:54	-- accel/accel.sh@21 -- # val=
00:11:01.726   16:54:54	-- accel/accel.sh@22 -- # case "$var" in
00:11:01.726   16:54:54	-- accel/accel.sh@20 -- # IFS=:
00:11:01.726   16:54:54	-- accel/accel.sh@20 -- # read -r var val
00:11:01.726   16:54:54	-- accel/accel.sh@21 -- # val=
00:11:01.726   16:54:54	-- accel/accel.sh@22 -- # case "$var" in
00:11:01.726   16:54:54	-- accel/accel.sh@20 -- # IFS=:
00:11:01.726   16:54:54	-- accel/accel.sh@20 -- # read -r var val
00:11:01.726   16:54:54	-- accel/accel.sh@21 -- # val=
00:11:01.726   16:54:54	-- accel/accel.sh@22 -- # case "$var" in
00:11:01.726   16:54:54	-- accel/accel.sh@20 -- # IFS=:
00:11:01.726   16:54:54	-- accel/accel.sh@20 -- # read -r var val
00:11:01.726   16:54:54	-- accel/accel.sh@21 -- # val=
00:11:01.726   16:54:54	-- accel/accel.sh@22 -- # case "$var" in
00:11:01.726   16:54:54	-- accel/accel.sh@20 -- # IFS=:
00:11:01.726   16:54:54	-- accel/accel.sh@20 -- # read -r var val
00:11:01.726   16:54:54	-- accel/accel.sh@28 -- # [[ -n software ]]
00:11:01.726   16:54:54	-- accel/accel.sh@28 -- # [[ -n decompress ]]
00:11:01.726   16:54:54	-- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:11:01.726  
00:11:01.726  real	0m3.514s
00:11:01.726  user	0m10.284s
00:11:01.726  sys	0m0.499s
00:11:01.726   16:54:54	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:11:01.726   16:54:54	-- common/autotest_common.sh@10 -- # set +x
00:11:01.726  ************************************
00:11:01.726  END TEST accel_decomp_mcore
00:11:01.726  ************************************
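[Note: the mcore variant only adds a core mask: the -m 0xf passed to accel_perf shows up as -c 0xf in the EAL parameters, which lines up with the four "Reactor started" notices and "Total cores available: 4" above. A by-hand sketch:]

    # Sketch: same decompress benchmark spread across cores 0-3 (mask 0xf)
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf \
        -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf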
00:11:01.726   16:54:54	-- accel/accel.sh@112 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf
00:11:01.726   16:54:54	-- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']'
00:11:01.726   16:54:54	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:11:01.726   16:54:54	-- common/autotest_common.sh@10 -- # set +x
00:11:01.726  ************************************
00:11:01.726  START TEST accel_decomp_full_mcore
00:11:01.726  ************************************
00:11:01.726   16:54:54	-- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf
00:11:01.726   16:54:54	-- accel/accel.sh@16 -- # local accel_opc
00:11:01.726   16:54:54	-- accel/accel.sh@17 -- # local accel_module
00:11:01.726    16:54:54	-- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf
00:11:01.726    16:54:54	-- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf
00:11:01.726     16:54:54	-- accel/accel.sh@12 -- # build_accel_config
00:11:01.726     16:54:54	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:11:01.726     16:54:54	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:11:01.726     16:54:54	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:11:01.726     16:54:54	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:11:01.726     16:54:54	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:11:01.726     16:54:54	-- accel/accel.sh@41 -- # local IFS=,
00:11:01.726     16:54:54	-- accel/accel.sh@42 -- # jq -r .
00:11:01.726  [2024-11-19 16:54:54.479474] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:11:01.726  [2024-11-19 16:54:54.479739] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118861 ]
00:11:01.985  [2024-11-19 16:54:54.653891] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4
00:11:01.985  [2024-11-19 16:54:54.741621] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:11:01.985  [2024-11-19 16:54:54.741794] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:11:01.985  [2024-11-19 16:54:54.741978] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:11:01.985  [2024-11-19 16:54:54.742077] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:11:03.388   16:54:56	-- accel/accel.sh@18 -- # out='Preparing input file...
00:11:03.388  
00:11:03.388  SPDK Configuration:
00:11:03.388  Core mask:      0xf
00:11:03.388  
00:11:03.388  Accel Perf Configuration:
00:11:03.388  Workload Type:  decompress
00:11:03.388  Transfer size:  111250 bytes
00:11:03.388  Vector count    1
00:11:03.388  Module:         software
00:11:03.388  File Name:      /home/vagrant/spdk_repo/spdk/test/accel/bib
00:11:03.388  Queue depth:    32
00:11:03.388  Allocate depth: 32
00:11:03.388  # threads/core: 1
00:11:03.388  Run time:       1 seconds
00:11:03.388  Verify:         Yes
00:11:03.388  
00:11:03.388  Running for 1 seconds...
00:11:03.388  
00:11:03.388  Core,Thread             Transfers        Bandwidth           Failed      Miscompares
00:11:03.388  ------------------------------------------------------------------------------------
00:11:03.388  0,0                        3424/s        141 MiB/s                0                0
00:11:03.388  3,0                        4192/s        173 MiB/s                0                0
00:11:03.388  2,0                        4416/s        182 MiB/s                0                0
00:11:03.388  1,0                        4640/s        191 MiB/s                0                0
00:11:03.388  ====================================================================================
00:11:03.388  Total                     16672/s       1768 MiB/s                0                0'
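[Note: again the four per-core rates sum to the total, and at 111250 bytes per transfer the Total row's bandwidth checks out:]

    echo $((3424 + 4192 + 4416 + 4640))       # prints 16672, the Total transfer rate
    echo $((16672 * 111250 / 1024 / 1024))    # prints 1768, the Total MiB/s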
00:11:03.388   16:54:56	-- accel/accel.sh@20 -- # IFS=:
00:11:03.389   16:54:56	-- accel/accel.sh@20 -- # read -r var val
00:11:03.389    16:54:56	-- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf
00:11:03.389    16:54:56	-- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf
00:11:03.389     16:54:56	-- accel/accel.sh@12 -- # build_accel_config
00:11:03.389     16:54:56	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:11:03.389     16:54:56	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:11:03.389     16:54:56	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:11:03.389     16:54:56	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:11:03.389     16:54:56	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:11:03.389     16:54:56	-- accel/accel.sh@41 -- # local IFS=,
00:11:03.389     16:54:56	-- accel/accel.sh@42 -- # jq -r .
00:11:03.389  [2024-11-19 16:54:56.229857] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:11:03.389  [2024-11-19 16:54:56.230760] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118894 ]
00:11:03.647  [2024-11-19 16:54:56.406692] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4
00:11:03.907  [2024-11-19 16:54:56.514772] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:11:03.907  [2024-11-19 16:54:56.514948] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:11:03.907  [2024-11-19 16:54:56.515979] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:11:03.907  [2024-11-19 16:54:56.515984] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:11:03.907   16:54:56	-- accel/accel.sh@21 -- # val=
00:11:03.907   16:54:56	-- accel/accel.sh@22 -- # case "$var" in
00:11:03.907   16:54:56	-- accel/accel.sh@20 -- # IFS=:
00:11:03.907   16:54:56	-- accel/accel.sh@20 -- # read -r var val
00:11:03.907   16:54:56	-- accel/accel.sh@21 -- # val=
00:11:03.907   16:54:56	-- accel/accel.sh@22 -- # case "$var" in
00:11:03.907   16:54:56	-- accel/accel.sh@20 -- # IFS=:
00:11:03.907   16:54:56	-- accel/accel.sh@20 -- # read -r var val
00:11:03.907   16:54:56	-- accel/accel.sh@21 -- # val=
00:11:03.907   16:54:56	-- accel/accel.sh@22 -- # case "$var" in
00:11:03.907   16:54:56	-- accel/accel.sh@20 -- # IFS=:
00:11:03.907   16:54:56	-- accel/accel.sh@20 -- # read -r var val
00:11:03.907   16:54:56	-- accel/accel.sh@21 -- # val=0xf
00:11:03.907   16:54:56	-- accel/accel.sh@22 -- # case "$var" in
00:11:03.907   16:54:56	-- accel/accel.sh@20 -- # IFS=:
00:11:03.907   16:54:56	-- accel/accel.sh@20 -- # read -r var val
00:11:03.907   16:54:56	-- accel/accel.sh@21 -- # val=
00:11:03.907   16:54:56	-- accel/accel.sh@22 -- # case "$var" in
00:11:03.907   16:54:56	-- accel/accel.sh@20 -- # IFS=:
00:11:03.907   16:54:56	-- accel/accel.sh@20 -- # read -r var val
00:11:03.907   16:54:56	-- accel/accel.sh@21 -- # val=
00:11:03.907   16:54:56	-- accel/accel.sh@22 -- # case "$var" in
00:11:03.907   16:54:56	-- accel/accel.sh@20 -- # IFS=:
00:11:03.907   16:54:56	-- accel/accel.sh@20 -- # read -r var val
00:11:03.907   16:54:56	-- accel/accel.sh@21 -- # val=decompress
00:11:03.907   16:54:56	-- accel/accel.sh@22 -- # case "$var" in
00:11:03.907   16:54:56	-- accel/accel.sh@24 -- # accel_opc=decompress
00:11:03.907   16:54:56	-- accel/accel.sh@20 -- # IFS=:
00:11:03.907   16:54:56	-- accel/accel.sh@20 -- # read -r var val
00:11:03.907   16:54:56	-- accel/accel.sh@21 -- # val='111250 bytes'
00:11:03.907   16:54:56	-- accel/accel.sh@22 -- # case "$var" in
00:11:03.907   16:54:56	-- accel/accel.sh@20 -- # IFS=:
00:11:03.907   16:54:56	-- accel/accel.sh@20 -- # read -r var val
00:11:03.907   16:54:56	-- accel/accel.sh@21 -- # val=
00:11:03.907   16:54:56	-- accel/accel.sh@22 -- # case "$var" in
00:11:03.907   16:54:56	-- accel/accel.sh@20 -- # IFS=:
00:11:03.907   16:54:56	-- accel/accel.sh@20 -- # read -r var val
00:11:03.907   16:54:56	-- accel/accel.sh@21 -- # val=software
00:11:03.907   16:54:56	-- accel/accel.sh@22 -- # case "$var" in
00:11:03.907   16:54:56	-- accel/accel.sh@23 -- # accel_module=software
00:11:03.907   16:54:56	-- accel/accel.sh@20 -- # IFS=:
00:11:03.907   16:54:56	-- accel/accel.sh@20 -- # read -r var val
00:11:03.907   16:54:56	-- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib
00:11:03.907   16:54:56	-- accel/accel.sh@22 -- # case "$var" in
00:11:03.907   16:54:56	-- accel/accel.sh@20 -- # IFS=:
00:11:03.907   16:54:56	-- accel/accel.sh@20 -- # read -r var val
00:11:03.907   16:54:56	-- accel/accel.sh@21 -- # val=32
00:11:03.907   16:54:56	-- accel/accel.sh@22 -- # case "$var" in
00:11:03.907   16:54:56	-- accel/accel.sh@20 -- # IFS=:
00:11:03.907   16:54:56	-- accel/accel.sh@20 -- # read -r var val
00:11:03.907   16:54:56	-- accel/accel.sh@21 -- # val=32
00:11:03.907   16:54:56	-- accel/accel.sh@22 -- # case "$var" in
00:11:03.907   16:54:56	-- accel/accel.sh@20 -- # IFS=:
00:11:03.907   16:54:56	-- accel/accel.sh@20 -- # read -r var val
00:11:03.907   16:54:56	-- accel/accel.sh@21 -- # val=1
00:11:03.907   16:54:56	-- accel/accel.sh@22 -- # case "$var" in
00:11:03.907   16:54:56	-- accel/accel.sh@20 -- # IFS=:
00:11:03.907   16:54:56	-- accel/accel.sh@20 -- # read -r var val
00:11:03.907   16:54:56	-- accel/accel.sh@21 -- # val='1 seconds'
00:11:03.907   16:54:56	-- accel/accel.sh@22 -- # case "$var" in
00:11:03.907   16:54:56	-- accel/accel.sh@20 -- # IFS=:
00:11:03.907   16:54:56	-- accel/accel.sh@20 -- # read -r var val
00:11:03.907   16:54:56	-- accel/accel.sh@21 -- # val=Yes
00:11:03.907   16:54:56	-- accel/accel.sh@22 -- # case "$var" in
00:11:03.907   16:54:56	-- accel/accel.sh@20 -- # IFS=:
00:11:03.907   16:54:56	-- accel/accel.sh@20 -- # read -r var val
00:11:03.907   16:54:56	-- accel/accel.sh@21 -- # val=
00:11:03.907   16:54:56	-- accel/accel.sh@22 -- # case "$var" in
00:11:03.907   16:54:56	-- accel/accel.sh@20 -- # IFS=:
00:11:03.907   16:54:56	-- accel/accel.sh@20 -- # read -r var val
00:11:03.907   16:54:56	-- accel/accel.sh@21 -- # val=
00:11:03.907   16:54:56	-- accel/accel.sh@22 -- # case "$var" in
00:11:03.907   16:54:56	-- accel/accel.sh@20 -- # IFS=:
00:11:03.907   16:54:56	-- accel/accel.sh@20 -- # read -r var val
00:11:05.287   16:54:57	-- accel/accel.sh@21 -- # val=
00:11:05.287   16:54:57	-- accel/accel.sh@22 -- # case "$var" in
00:11:05.287   16:54:57	-- accel/accel.sh@20 -- # IFS=:
00:11:05.287   16:54:57	-- accel/accel.sh@20 -- # read -r var val
00:11:05.287   16:54:57	-- accel/accel.sh@21 -- # val=
00:11:05.287   16:54:57	-- accel/accel.sh@22 -- # case "$var" in
00:11:05.287   16:54:57	-- accel/accel.sh@20 -- # IFS=:
00:11:05.287   16:54:57	-- accel/accel.sh@20 -- # read -r var val
00:11:05.287   16:54:57	-- accel/accel.sh@21 -- # val=
00:11:05.287   16:54:57	-- accel/accel.sh@22 -- # case "$var" in
00:11:05.287   16:54:57	-- accel/accel.sh@20 -- # IFS=:
00:11:05.287   16:54:57	-- accel/accel.sh@20 -- # read -r var val
00:11:05.287   16:54:57	-- accel/accel.sh@21 -- # val=
00:11:05.287   16:54:57	-- accel/accel.sh@22 -- # case "$var" in
00:11:05.287   16:54:57	-- accel/accel.sh@20 -- # IFS=:
00:11:05.287   16:54:57	-- accel/accel.sh@20 -- # read -r var val
00:11:05.287   16:54:57	-- accel/accel.sh@21 -- # val=
00:11:05.287   16:54:57	-- accel/accel.sh@22 -- # case "$var" in
00:11:05.287   16:54:57	-- accel/accel.sh@20 -- # IFS=:
00:11:05.287   16:54:57	-- accel/accel.sh@20 -- # read -r var val
00:11:05.287   16:54:57	-- accel/accel.sh@21 -- # val=
00:11:05.287   16:54:57	-- accel/accel.sh@22 -- # case "$var" in
00:11:05.287   16:54:57	-- accel/accel.sh@20 -- # IFS=:
00:11:05.287   16:54:57	-- accel/accel.sh@20 -- # read -r var val
00:11:05.287   16:54:57	-- accel/accel.sh@21 -- # val=
00:11:05.287   16:54:57	-- accel/accel.sh@22 -- # case "$var" in
00:11:05.287   16:54:57	-- accel/accel.sh@20 -- # IFS=:
00:11:05.287   16:54:57	-- accel/accel.sh@20 -- # read -r var val
00:11:05.287   16:54:57	-- accel/accel.sh@21 -- # val=
00:11:05.287   16:54:57	-- accel/accel.sh@22 -- # case "$var" in
00:11:05.287   16:54:57	-- accel/accel.sh@20 -- # IFS=:
00:11:05.287   16:54:57	-- accel/accel.sh@20 -- # read -r var val
00:11:05.287   16:54:57	-- accel/accel.sh@21 -- # val=
00:11:05.287   16:54:57	-- accel/accel.sh@22 -- # case "$var" in
00:11:05.287   16:54:57	-- accel/accel.sh@20 -- # IFS=:
00:11:05.287   16:54:57	-- accel/accel.sh@20 -- # read -r var val
00:11:05.287   16:54:57	-- accel/accel.sh@28 -- # [[ -n software ]]
00:11:05.287   16:54:57	-- accel/accel.sh@28 -- # [[ -n decompress ]]
00:11:05.287   16:54:57	-- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:11:05.287  
00:11:05.287  real	0m3.530s
00:11:05.287  user	0m10.293s
00:11:05.287  sys	0m0.573s
00:11:05.287   16:54:57	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:11:05.287   16:54:57	-- common/autotest_common.sh@10 -- # set +x
00:11:05.287  ************************************
00:11:05.287  END TEST accel_decomp_full_mcore
00:11:05.287  ************************************
00:11:05.287   16:54:58	-- accel/accel.sh@113 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2
00:11:05.287   16:54:58	-- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']'
00:11:05.287   16:54:58	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:11:05.287   16:54:58	-- common/autotest_common.sh@10 -- # set +x
00:11:05.287  ************************************
00:11:05.287  START TEST accel_decomp_mthread
00:11:05.287  ************************************
00:11:05.287   16:54:58	-- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2
00:11:05.287   16:54:58	-- accel/accel.sh@16 -- # local accel_opc
00:11:05.287   16:54:58	-- accel/accel.sh@17 -- # local accel_module
00:11:05.287    16:54:58	-- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2
00:11:05.287    16:54:58	-- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2
00:11:05.287     16:54:58	-- accel/accel.sh@12 -- # build_accel_config
00:11:05.287     16:54:58	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:11:05.287     16:54:58	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:11:05.287     16:54:58	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:11:05.287     16:54:58	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:11:05.287     16:54:58	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:11:05.287     16:54:58	-- accel/accel.sh@41 -- # local IFS=,
00:11:05.287     16:54:58	-- accel/accel.sh@42 -- # jq -r .
00:11:05.287  [2024-11-19 16:54:58.077775] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:11:05.287  [2024-11-19 16:54:58.078028] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118944 ]
00:11:05.547  [2024-11-19 16:54:58.231338] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:11:05.547  [2024-11-19 16:54:58.312087] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:11:06.930   16:54:59	-- accel/accel.sh@18 -- # out='Preparing input file...
00:11:06.930  
00:11:06.930  SPDK Configuration:
00:11:06.930  Core mask:      0x1
00:11:06.930  
00:11:06.930  Accel Perf Configuration:
00:11:06.930  Workload Type:  decompress
00:11:06.930  Transfer size:  4096 bytes
00:11:06.930  Vector count    1
00:11:06.930  Module:         software
00:11:06.930  File Name:      /home/vagrant/spdk_repo/spdk/test/accel/bib
00:11:06.930  Queue depth:    32
00:11:06.930  Allocate depth: 32
00:11:06.930  # threads/core: 2
00:11:06.930  Run time:       1 seconds
00:11:06.930  Verify:         Yes
00:11:06.930  
00:11:06.930  Running for 1 seconds...
00:11:06.930  
00:11:06.930  Core,Thread             Transfers        Bandwidth           Failed      Miscompares
00:11:06.930  ------------------------------------------------------------------------------------
00:11:06.930  0,1                       23712/s         43 MiB/s                0                0
00:11:06.930  0,0                       23584/s         43 MiB/s                0                0
00:11:06.930  ====================================================================================
00:11:06.930  Total                     47296/s        184 MiB/s                0                0'
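
A quick cross-check of the Total row: the two per-thread rates sum to 47296/s, and at the 4096-byte transfer size that is exactly the 184 MiB/s reported:

    echo $(( (23712 + 23584) * 4096 / 1024 / 1024 ))   # -> 184 (MiB/s)
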
00:11:06.930   16:54:59	-- accel/accel.sh@20 -- # IFS=:
00:11:06.930    16:54:59	-- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2
00:11:06.930   16:54:59	-- accel/accel.sh@20 -- # read -r var val
00:11:06.930    16:54:59	-- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2
00:11:06.930     16:54:59	-- accel/accel.sh@12 -- # build_accel_config
00:11:06.930     16:54:59	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:11:06.930     16:54:59	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:11:06.930     16:54:59	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:11:06.930     16:54:59	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:11:06.930     16:54:59	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:11:06.930     16:54:59	-- accel/accel.sh@41 -- # local IFS=,
00:11:06.930     16:54:59	-- accel/accel.sh@42 -- # jq -r .
00:11:06.930  [2024-11-19 16:54:59.775415] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:11:06.930  [2024-11-19 16:54:59.776257] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118967 ]
00:11:07.190  [2024-11-19 16:54:59.934404] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:11:07.190  [2024-11-19 16:55:00.034563] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:11:07.449   16:55:00	-- accel/accel.sh@21 -- # val=
00:11:07.449   16:55:00	-- accel/accel.sh@22 -- # case "$var" in
00:11:07.449   16:55:00	-- accel/accel.sh@20 -- # IFS=:
00:11:07.449   16:55:00	-- accel/accel.sh@20 -- # read -r var val
00:11:07.449   16:55:00	-- accel/accel.sh@21 -- # val=
00:11:07.449   16:55:00	-- accel/accel.sh@22 -- # case "$var" in
00:11:07.449   16:55:00	-- accel/accel.sh@20 -- # IFS=:
00:11:07.449   16:55:00	-- accel/accel.sh@20 -- # read -r var val
00:11:07.449   16:55:00	-- accel/accel.sh@21 -- # val=
00:11:07.449   16:55:00	-- accel/accel.sh@22 -- # case "$var" in
00:11:07.449   16:55:00	-- accel/accel.sh@20 -- # IFS=:
00:11:07.449   16:55:00	-- accel/accel.sh@20 -- # read -r var val
00:11:07.449   16:55:00	-- accel/accel.sh@21 -- # val=0x1
00:11:07.449   16:55:00	-- accel/accel.sh@22 -- # case "$var" in
00:11:07.449   16:55:00	-- accel/accel.sh@20 -- # IFS=:
00:11:07.449   16:55:00	-- accel/accel.sh@20 -- # read -r var val
00:11:07.449   16:55:00	-- accel/accel.sh@21 -- # val=
00:11:07.449   16:55:00	-- accel/accel.sh@22 -- # case "$var" in
00:11:07.449   16:55:00	-- accel/accel.sh@20 -- # IFS=:
00:11:07.449   16:55:00	-- accel/accel.sh@20 -- # read -r var val
00:11:07.449   16:55:00	-- accel/accel.sh@21 -- # val=
00:11:07.449   16:55:00	-- accel/accel.sh@22 -- # case "$var" in
00:11:07.449   16:55:00	-- accel/accel.sh@20 -- # IFS=:
00:11:07.449   16:55:00	-- accel/accel.sh@20 -- # read -r var val
00:11:07.449   16:55:00	-- accel/accel.sh@21 -- # val=decompress
00:11:07.449   16:55:00	-- accel/accel.sh@22 -- # case "$var" in
00:11:07.449   16:55:00	-- accel/accel.sh@24 -- # accel_opc=decompress
00:11:07.449   16:55:00	-- accel/accel.sh@20 -- # IFS=:
00:11:07.449   16:55:00	-- accel/accel.sh@20 -- # read -r var val
00:11:07.449   16:55:00	-- accel/accel.sh@21 -- # val='4096 bytes'
00:11:07.449   16:55:00	-- accel/accel.sh@22 -- # case "$var" in
00:11:07.449   16:55:00	-- accel/accel.sh@20 -- # IFS=:
00:11:07.449   16:55:00	-- accel/accel.sh@20 -- # read -r var val
00:11:07.449   16:55:00	-- accel/accel.sh@21 -- # val=
00:11:07.449   16:55:00	-- accel/accel.sh@22 -- # case "$var" in
00:11:07.449   16:55:00	-- accel/accel.sh@20 -- # IFS=:
00:11:07.449   16:55:00	-- accel/accel.sh@20 -- # read -r var val
00:11:07.449   16:55:00	-- accel/accel.sh@21 -- # val=software
00:11:07.449   16:55:00	-- accel/accel.sh@22 -- # case "$var" in
00:11:07.449   16:55:00	-- accel/accel.sh@23 -- # accel_module=software
00:11:07.449   16:55:00	-- accel/accel.sh@20 -- # IFS=:
00:11:07.449   16:55:00	-- accel/accel.sh@20 -- # read -r var val
00:11:07.449   16:55:00	-- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib
00:11:07.449   16:55:00	-- accel/accel.sh@22 -- # case "$var" in
00:11:07.449   16:55:00	-- accel/accel.sh@20 -- # IFS=:
00:11:07.450   16:55:00	-- accel/accel.sh@20 -- # read -r var val
00:11:07.450   16:55:00	-- accel/accel.sh@21 -- # val=32
00:11:07.450   16:55:00	-- accel/accel.sh@22 -- # case "$var" in
00:11:07.450   16:55:00	-- accel/accel.sh@20 -- # IFS=:
00:11:07.450   16:55:00	-- accel/accel.sh@20 -- # read -r var val
00:11:07.450   16:55:00	-- accel/accel.sh@21 -- # val=32
00:11:07.450   16:55:00	-- accel/accel.sh@22 -- # case "$var" in
00:11:07.450   16:55:00	-- accel/accel.sh@20 -- # IFS=:
00:11:07.450   16:55:00	-- accel/accel.sh@20 -- # read -r var val
00:11:07.450   16:55:00	-- accel/accel.sh@21 -- # val=2
00:11:07.450   16:55:00	-- accel/accel.sh@22 -- # case "$var" in
00:11:07.450   16:55:00	-- accel/accel.sh@20 -- # IFS=:
00:11:07.450   16:55:00	-- accel/accel.sh@20 -- # read -r var val
00:11:07.450   16:55:00	-- accel/accel.sh@21 -- # val='1 seconds'
00:11:07.450   16:55:00	-- accel/accel.sh@22 -- # case "$var" in
00:11:07.450   16:55:00	-- accel/accel.sh@20 -- # IFS=:
00:11:07.450   16:55:00	-- accel/accel.sh@20 -- # read -r var val
00:11:07.450   16:55:00	-- accel/accel.sh@21 -- # val=Yes
00:11:07.450   16:55:00	-- accel/accel.sh@22 -- # case "$var" in
00:11:07.450   16:55:00	-- accel/accel.sh@20 -- # IFS=:
00:11:07.450   16:55:00	-- accel/accel.sh@20 -- # read -r var val
00:11:07.450   16:55:00	-- accel/accel.sh@21 -- # val=
00:11:07.450   16:55:00	-- accel/accel.sh@22 -- # case "$var" in
00:11:07.450   16:55:00	-- accel/accel.sh@20 -- # IFS=:
00:11:07.450   16:55:00	-- accel/accel.sh@20 -- # read -r var val
00:11:07.450   16:55:00	-- accel/accel.sh@21 -- # val=
00:11:07.450   16:55:00	-- accel/accel.sh@22 -- # case "$var" in
00:11:07.450   16:55:00	-- accel/accel.sh@20 -- # IFS=:
00:11:07.450   16:55:00	-- accel/accel.sh@20 -- # read -r var val
00:11:08.828   16:55:01	-- accel/accel.sh@21 -- # val=
00:11:08.828   16:55:01	-- accel/accel.sh@22 -- # case "$var" in
00:11:08.828   16:55:01	-- accel/accel.sh@20 -- # IFS=:
00:11:08.828   16:55:01	-- accel/accel.sh@20 -- # read -r var val
00:11:08.828   16:55:01	-- accel/accel.sh@28 -- # [[ -n software ]]
00:11:08.828   16:55:01	-- accel/accel.sh@28 -- # [[ -n decompress ]]
00:11:08.828   16:55:01	-- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:11:08.828  
00:11:08.828  real	0m3.421s
00:11:08.828  user	0m2.833s
00:11:08.828  sys	0m0.424s
00:11:08.828   16:55:01	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:11:08.828  ************************************
00:11:08.828   16:55:01	-- common/autotest_common.sh@10 -- # set +x
00:11:08.828  END TEST accel_decomp_mthread
00:11:08.828  ************************************
00:11:08.828   16:55:01	-- accel/accel.sh@114 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2
00:11:08.828   16:55:01	-- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']'
00:11:08.828   16:55:01	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:11:08.828   16:55:01	-- common/autotest_common.sh@10 -- # set +x
00:11:08.828  ************************************
00:11:08.828  START TEST accel_decomp_full_mthread
00:11:08.828  ************************************
00:11:08.828   16:55:01	-- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2
00:11:08.828   16:55:01	-- accel/accel.sh@16 -- # local accel_opc
00:11:08.828   16:55:01	-- accel/accel.sh@17 -- # local accel_module
00:11:08.828    16:55:01	-- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2
00:11:08.828    16:55:01	-- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2
00:11:08.828     16:55:01	-- accel/accel.sh@12 -- # build_accel_config
00:11:08.828     16:55:01	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:11:08.828     16:55:01	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:11:08.828     16:55:01	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:11:08.828     16:55:01	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:11:08.828     16:55:01	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:11:08.829     16:55:01	-- accel/accel.sh@41 -- # local IFS=,
00:11:08.829     16:55:01	-- accel/accel.sh@42 -- # jq -r .
00:11:08.829  [2024-11-19 16:55:01.563429] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:11:08.829  [2024-11-19 16:55:01.563722] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119012 ]
00:11:09.087  [2024-11-19 16:55:01.721754] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:11:09.087  [2024-11-19 16:55:01.800168] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:11:10.464   16:55:03	-- accel/accel.sh@18 -- # out='Preparing input file...
00:11:10.464  
00:11:10.464  SPDK Configuration:
00:11:10.464  Core mask:      0x1
00:11:10.464  
00:11:10.464  Accel Perf Configuration:
00:11:10.464  Workload Type:  decompress
00:11:10.464  Transfer size:  111250 bytes
00:11:10.464  Vector count    1
00:11:10.464  Module:         software
00:11:10.464  File Name:      /home/vagrant/spdk_repo/spdk/test/accel/bib
00:11:10.464  Queue depth:    32
00:11:10.464  Allocate depth: 32
00:11:10.464  # threads/core: 2
00:11:10.464  Run time:       1 seconds
00:11:10.464  Verify:         Yes
00:11:10.464  
00:11:10.464  Running for 1 seconds...
00:11:10.464  
00:11:10.464  Core,Thread             Transfers        Bandwidth           Failed      Miscompares
00:11:10.464  ------------------------------------------------------------------------------------
00:11:10.464  0,1                        1760/s         72 MiB/s                0                0
00:11:10.464  0,0                        1760/s         72 MiB/s                0                0
00:11:10.464  ====================================================================================
00:11:10.464  Total                      3520/s        373 MiB/s                0                0'
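
The same cross-check holds for the full-buffer variant: two threads at 1760/s each give 3520/s, and 3520 transfers/s at 111250 bytes per transfer matches the 373 MiB/s Total:

    echo $(( (1760 + 1760) * 111250 / 1024 / 1024 ))   # -> 373 (MiB/s)
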
00:11:10.464   16:55:03	-- accel/accel.sh@20 -- # IFS=:
00:11:10.464   16:55:03	-- accel/accel.sh@20 -- # read -r var val
00:11:10.464    16:55:03	-- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2
00:11:10.464    16:55:03	-- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2
00:11:10.464     16:55:03	-- accel/accel.sh@12 -- # build_accel_config
00:11:10.464     16:55:03	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:11:10.464     16:55:03	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:11:10.464     16:55:03	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:11:10.464     16:55:03	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:11:10.464     16:55:03	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:11:10.464     16:55:03	-- accel/accel.sh@41 -- # local IFS=,
00:11:10.464     16:55:03	-- accel/accel.sh@42 -- # jq -r .
00:11:10.464  [2024-11-19 16:55:03.281074] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:11:10.464  [2024-11-19 16:55:03.281908] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119047 ]
00:11:10.723  [2024-11-19 16:55:03.437736] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:11:10.723  [2024-11-19 16:55:03.524949] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:11:10.983   16:55:03	-- accel/accel.sh@21 -- # val=
00:11:10.983   16:55:03	-- accel/accel.sh@22 -- # case "$var" in
00:11:10.983   16:55:03	-- accel/accel.sh@20 -- # IFS=:
00:11:10.983   16:55:03	-- accel/accel.sh@20 -- # read -r var val
00:11:10.983   16:55:03	-- accel/accel.sh@21 -- # val=
00:11:10.983   16:55:03	-- accel/accel.sh@22 -- # case "$var" in
00:11:10.983   16:55:03	-- accel/accel.sh@20 -- # IFS=:
00:11:10.983   16:55:03	-- accel/accel.sh@20 -- # read -r var val
00:11:10.983   16:55:03	-- accel/accel.sh@21 -- # val=
00:11:10.983   16:55:03	-- accel/accel.sh@22 -- # case "$var" in
00:11:10.983   16:55:03	-- accel/accel.sh@20 -- # IFS=:
00:11:10.983   16:55:03	-- accel/accel.sh@20 -- # read -r var val
00:11:10.983   16:55:03	-- accel/accel.sh@21 -- # val=0x1
00:11:10.983   16:55:03	-- accel/accel.sh@22 -- # case "$var" in
00:11:10.983   16:55:03	-- accel/accel.sh@20 -- # IFS=:
00:11:10.983   16:55:03	-- accel/accel.sh@20 -- # read -r var val
00:11:10.983   16:55:03	-- accel/accel.sh@21 -- # val=
00:11:10.983   16:55:03	-- accel/accel.sh@22 -- # case "$var" in
00:11:10.983   16:55:03	-- accel/accel.sh@20 -- # IFS=:
00:11:10.983   16:55:03	-- accel/accel.sh@20 -- # read -r var val
00:11:10.983   16:55:03	-- accel/accel.sh@21 -- # val=
00:11:10.983   16:55:03	-- accel/accel.sh@22 -- # case "$var" in
00:11:10.983   16:55:03	-- accel/accel.sh@20 -- # IFS=:
00:11:10.983   16:55:03	-- accel/accel.sh@20 -- # read -r var val
00:11:10.983   16:55:03	-- accel/accel.sh@21 -- # val=decompress
00:11:10.983   16:55:03	-- accel/accel.sh@22 -- # case "$var" in
00:11:10.983   16:55:03	-- accel/accel.sh@24 -- # accel_opc=decompress
00:11:10.983   16:55:03	-- accel/accel.sh@20 -- # IFS=:
00:11:10.983   16:55:03	-- accel/accel.sh@20 -- # read -r var val
00:11:10.983   16:55:03	-- accel/accel.sh@21 -- # val='111250 bytes'
00:11:10.983   16:55:03	-- accel/accel.sh@22 -- # case "$var" in
00:11:10.983   16:55:03	-- accel/accel.sh@20 -- # IFS=:
00:11:10.983   16:55:03	-- accel/accel.sh@20 -- # read -r var val
00:11:10.983   16:55:03	-- accel/accel.sh@21 -- # val=
00:11:10.983   16:55:03	-- accel/accel.sh@22 -- # case "$var" in
00:11:10.983   16:55:03	-- accel/accel.sh@20 -- # IFS=:
00:11:10.983   16:55:03	-- accel/accel.sh@20 -- # read -r var val
00:11:10.983   16:55:03	-- accel/accel.sh@21 -- # val=software
00:11:10.983   16:55:03	-- accel/accel.sh@22 -- # case "$var" in
00:11:10.983   16:55:03	-- accel/accel.sh@23 -- # accel_module=software
00:11:10.983   16:55:03	-- accel/accel.sh@20 -- # IFS=:
00:11:10.983   16:55:03	-- accel/accel.sh@20 -- # read -r var val
00:11:10.983   16:55:03	-- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib
00:11:10.983   16:55:03	-- accel/accel.sh@22 -- # case "$var" in
00:11:10.983   16:55:03	-- accel/accel.sh@20 -- # IFS=:
00:11:10.983   16:55:03	-- accel/accel.sh@20 -- # read -r var val
00:11:10.983   16:55:03	-- accel/accel.sh@21 -- # val=32
00:11:10.983   16:55:03	-- accel/accel.sh@22 -- # case "$var" in
00:11:10.983   16:55:03	-- accel/accel.sh@20 -- # IFS=:
00:11:10.983   16:55:03	-- accel/accel.sh@20 -- # read -r var val
00:11:10.983   16:55:03	-- accel/accel.sh@21 -- # val=32
00:11:10.983   16:55:03	-- accel/accel.sh@22 -- # case "$var" in
00:11:10.983   16:55:03	-- accel/accel.sh@20 -- # IFS=:
00:11:10.983   16:55:03	-- accel/accel.sh@20 -- # read -r var val
00:11:10.983   16:55:03	-- accel/accel.sh@21 -- # val=2
00:11:10.983   16:55:03	-- accel/accel.sh@22 -- # case "$var" in
00:11:10.983   16:55:03	-- accel/accel.sh@20 -- # IFS=:
00:11:10.983   16:55:03	-- accel/accel.sh@20 -- # read -r var val
00:11:10.983   16:55:03	-- accel/accel.sh@21 -- # val='1 seconds'
00:11:10.983   16:55:03	-- accel/accel.sh@22 -- # case "$var" in
00:11:10.983   16:55:03	-- accel/accel.sh@20 -- # IFS=:
00:11:10.983   16:55:03	-- accel/accel.sh@20 -- # read -r var val
00:11:10.983   16:55:03	-- accel/accel.sh@21 -- # val=Yes
00:11:10.983   16:55:03	-- accel/accel.sh@22 -- # case "$var" in
00:11:10.983   16:55:03	-- accel/accel.sh@20 -- # IFS=:
00:11:10.983   16:55:03	-- accel/accel.sh@20 -- # read -r var val
00:11:10.983   16:55:03	-- accel/accel.sh@21 -- # val=
00:11:10.983   16:55:03	-- accel/accel.sh@22 -- # case "$var" in
00:11:10.983   16:55:03	-- accel/accel.sh@20 -- # IFS=:
00:11:10.983   16:55:03	-- accel/accel.sh@20 -- # read -r var val
00:11:10.983   16:55:03	-- accel/accel.sh@21 -- # val=
00:11:10.983   16:55:03	-- accel/accel.sh@22 -- # case "$var" in
00:11:10.983   16:55:03	-- accel/accel.sh@20 -- # IFS=:
00:11:10.983   16:55:03	-- accel/accel.sh@20 -- # read -r var val
00:11:12.363   16:55:04	-- accel/accel.sh@21 -- # val=
00:11:12.363   16:55:04	-- accel/accel.sh@22 -- # case "$var" in
00:11:12.363   16:55:04	-- accel/accel.sh@20 -- # IFS=:
00:11:12.363   16:55:04	-- accel/accel.sh@20 -- # read -r var val
00:11:12.363   16:55:04	-- accel/accel.sh@28 -- # [[ -n software ]]
00:11:12.363   16:55:04	-- accel/accel.sh@28 -- # [[ -n decompress ]]
00:11:12.363   16:55:04	-- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:11:12.363  
00:11:12.363  real	0m3.479s
00:11:12.363  user	0m2.829s
00:11:12.363  sys	0m0.476s
00:11:12.363   16:55:04	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:11:12.363   16:55:04	-- common/autotest_common.sh@10 -- # set +x
00:11:12.363  ************************************
00:11:12.363  END TEST accel_decomp_full_mthread
00:11:12.363  ************************************
00:11:12.363   16:55:05	-- accel/accel.sh@116 -- # [[ n == y ]]
00:11:12.363   16:55:05	-- accel/accel.sh@129 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62
00:11:12.363    16:55:05	-- accel/accel.sh@129 -- # build_accel_config
00:11:12.363    16:55:05	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:11:12.363   16:55:05	-- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']'
00:11:12.363    16:55:05	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:11:12.363   16:55:05	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:11:12.363    16:55:05	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:11:12.363   16:55:05	-- common/autotest_common.sh@10 -- # set +x
00:11:12.363    16:55:05	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:11:12.363    16:55:05	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:11:12.363    16:55:05	-- accel/accel.sh@41 -- # local IFS=,
00:11:12.363    16:55:05	-- accel/accel.sh@42 -- # jq -r .
00:11:12.363  ************************************
00:11:12.363  START TEST accel_dif_functional_tests
00:11:12.363  ************************************
00:11:12.363   16:55:05	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62
00:11:12.363  [2024-11-19 16:55:05.153078] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:11:12.363  [2024-11-19 16:55:05.153310] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119082 ]
00:11:12.623  [2024-11-19 16:55:05.317975] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3
00:11:12.623  [2024-11-19 16:55:05.396772] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:11:12.623  [2024-11-19 16:55:05.396860] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:11:12.623  [2024-11-19 16:55:05.396865] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:11:12.882  
00:11:12.882  
00:11:12.882       CUnit - A unit testing framework for C - Version 2.1-3
00:11:12.882       http://cunit.sourceforge.net/
00:11:12.882  
00:11:12.882  
00:11:12.882  Suite: accel_dif
00:11:12.882    Test: verify: DIF generated, GUARD check ...passed
00:11:12.882    Test: verify: DIF generated, APPTAG check ...passed
00:11:12.882    Test: verify: DIF generated, REFTAG check ...passed
00:11:12.882    Test: verify: DIF not generated, GUARD check ...[2024-11-19 16:55:05.524783] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10,  Expected=5a5a, Actual=7867
00:11:12.882  [2024-11-19 16:55:05.524944] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10,  Expected=5a5a, Actual=7867
00:11:12.882  passed
00:11:12.882  
00:11:12.882    Test: verify: DIF not generated, APPTAG check ...[2024-11-19 16:55:05.525189] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10,  Expected=14, Actual=5a5a
00:11:12.882  [2024-11-19 16:55:05.525266] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10,  Expected=14, Actual=5a5a
00:11:12.882  passed
00:11:12.882    Test: verify: DIF not generated, REFTAG check ...[2024-11-19 16:55:05.525566] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a
00:11:12.882  [2024-11-19 16:55:05.525663] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a
00:11:12.882  passed
00:11:12.882    Test: verify: APPTAG correct, APPTAG check ...passed
00:11:12.882    Test: verify: APPTAG incorrect, APPTAG check ...[2024-11-19 16:55:05.526273] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30,  Expected=28, Actual=14
00:11:12.882  passed
00:11:12.882    Test: verify: APPTAG incorrect, no APPTAG check ...passed
00:11:12.882    Test: verify: REFTAG incorrect, REFTAG ignore ...passed
00:11:12.882    Test: verify: REFTAG_INIT correct, REFTAG check ...passed
00:11:12.882    Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-11-19 16:55:05.526946] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10
00:11:12.882  passed
00:11:12.882    Test: generate copy: DIF generated, GUARD check ...passed
00:11:12.882    Test: generate copy: DIF generated, APPTAG check ...passed
00:11:12.882    Test: generate copy: DIF generated, REFTAG check ...passed
00:11:12.882    Test: generate copy: DIF generated, no GUARD check flag set ...passed
00:11:12.882    Test: generate copy: DIF generated, no APPTAG check flag set ...passed
00:11:12.882    Test: generate copy: DIF generated, no REFTAG check flag set ...passed
00:11:12.882    Test: generate copy: iovecs-len validate ...[2024-11-19 16:55:05.528156] dif.c:1167:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size.
00:11:12.882  passed
00:11:12.882    Test: generate copy: buffer alignment validate ...passed
00:11:12.882  
00:11:12.882  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:11:12.882                suites      1      1    n/a      0        0
00:11:12.882                 tests     20     20     20      0        0
00:11:12.882               asserts    204    204    204      0      n/a
00:11:12.882  
00:11:12.882  Elapsed time =    0.009 seconds
00:11:13.141  
00:11:13.141  real	0m0.849s
00:11:13.141  user	0m1.124s
00:11:13.141  sys	0m0.293s
00:11:13.141   16:55:05	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:11:13.141   16:55:05	-- common/autotest_common.sh@10 -- # set +x
00:11:13.141  ************************************
00:11:13.141  END TEST accel_dif_functional_tests
00:11:13.141  ************************************
00:11:13.141  ************************************
00:11:13.142  END TEST accel
00:11:13.142  ************************************
00:11:13.142  
00:11:13.142  real	1m9.978s
00:11:13.142  user	1m12.439s
00:11:13.142  sys	0m9.893s
00:11:13.142   16:55:05	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:11:13.142   16:55:05	-- common/autotest_common.sh@10 -- # set +x
00:11:13.401   16:55:06	-- spdk/autotest.sh@177 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh
00:11:13.401   16:55:06	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:11:13.401   16:55:06	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:11:13.401   16:55:06	-- common/autotest_common.sh@10 -- # set +x
00:11:13.401  ************************************
00:11:13.401  START TEST accel_rpc
00:11:13.401  ************************************
00:11:13.401   16:55:06	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh
00:11:13.401  * Looking for test storage...
00:11:13.401  * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel
00:11:13.401    16:55:06	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:11:13.401     16:55:06	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:11:13.401     16:55:06	-- common/autotest_common.sh@1690 -- # lcov --version
00:11:13.401    16:55:06	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:11:13.401    16:55:06	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:11:13.401    16:55:06	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:11:13.401    16:55:06	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:11:13.401    16:55:06	-- scripts/common.sh@335 -- # IFS=.-:
00:11:13.401    16:55:06	-- scripts/common.sh@335 -- # read -ra ver1
00:11:13.401    16:55:06	-- scripts/common.sh@336 -- # IFS=.-:
00:11:13.401    16:55:06	-- scripts/common.sh@336 -- # read -ra ver2
00:11:13.401    16:55:06	-- scripts/common.sh@337 -- # local 'op=<'
00:11:13.401    16:55:06	-- scripts/common.sh@339 -- # ver1_l=2
00:11:13.401    16:55:06	-- scripts/common.sh@340 -- # ver2_l=1
00:11:13.401    16:55:06	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:11:13.401    16:55:06	-- scripts/common.sh@343 -- # case "$op" in
00:11:13.401    16:55:06	-- scripts/common.sh@344 -- # : 1
00:11:13.401    16:55:06	-- scripts/common.sh@363 -- # (( v = 0 ))
00:11:13.401    16:55:06	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:11:13.401     16:55:06	-- scripts/common.sh@364 -- # decimal 1
00:11:13.401     16:55:06	-- scripts/common.sh@352 -- # local d=1
00:11:13.401     16:55:06	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:11:13.401     16:55:06	-- scripts/common.sh@354 -- # echo 1
00:11:13.401    16:55:06	-- scripts/common.sh@364 -- # ver1[v]=1
00:11:13.401     16:55:06	-- scripts/common.sh@365 -- # decimal 2
00:11:13.401     16:55:06	-- scripts/common.sh@352 -- # local d=2
00:11:13.401     16:55:06	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:11:13.401     16:55:06	-- scripts/common.sh@354 -- # echo 2
00:11:13.401    16:55:06	-- scripts/common.sh@365 -- # ver2[v]=2
00:11:13.401    16:55:06	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:11:13.401    16:55:06	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:11:13.401    16:55:06	-- scripts/common.sh@367 -- # return 0
00:11:13.401    16:55:06	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:11:13.401    16:55:06	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:11:13.401  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:13.401  		--rc genhtml_branch_coverage=1
00:11:13.401  		--rc genhtml_function_coverage=1
00:11:13.401  		--rc genhtml_legend=1
00:11:13.401  		--rc geninfo_all_blocks=1
00:11:13.401  		--rc geninfo_unexecuted_blocks=1
00:11:13.401  		
00:11:13.401  		'
00:11:13.401    16:55:06	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:11:13.401  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:13.401  		--rc genhtml_branch_coverage=1
00:11:13.401  		--rc genhtml_function_coverage=1
00:11:13.401  		--rc genhtml_legend=1
00:11:13.401  		--rc geninfo_all_blocks=1
00:11:13.401  		--rc geninfo_unexecuted_blocks=1
00:11:13.401  		
00:11:13.401  		'
00:11:13.401    16:55:06	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:11:13.401  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:13.401  		--rc genhtml_branch_coverage=1
00:11:13.401  		--rc genhtml_function_coverage=1
00:11:13.401  		--rc genhtml_legend=1
00:11:13.401  		--rc geninfo_all_blocks=1
00:11:13.401  		--rc geninfo_unexecuted_blocks=1
00:11:13.401  		
00:11:13.401  		'
00:11:13.401    16:55:06	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:11:13.401  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:13.401  		--rc genhtml_branch_coverage=1
00:11:13.401  		--rc genhtml_function_coverage=1
00:11:13.401  		--rc genhtml_legend=1
00:11:13.401  		--rc geninfo_all_blocks=1
00:11:13.401  		--rc geninfo_unexecuted_blocks=1
00:11:13.401  		
00:11:13.401  		'
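
The lt 1.15 2 trace above splits both version strings on '.', '-' and ':' and compares the fields numerically; a condensed re-derivation of that logic (a sketch of the idea, not the scripts/common.sh source, and numeric fields only):

    # Hedged sketch of the cmp_versions comparison traced above.
    version_lt() {
        local IFS=.-:
        local -a a b
        read -ra a <<< "$1"
        read -ra b <<< "$2"
        for (( v = 0; v < ${#a[@]} || v < ${#b[@]}; v++ )); do
            (( ${a[v]:-0} < ${b[v]:-0} )) && return 0   # strictly smaller field
            (( ${a[v]:-0} > ${b[v]:-0} )) && return 1   # strictly larger field
        done
        return 1   # all fields equal
    }
    version_lt 1.15 2 && echo '1.15 < 2'   # prints: 1.15 < 2
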
00:11:13.401   16:55:06	-- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR
00:11:13.401   16:55:06	-- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=119176
00:11:13.401   16:55:06	-- accel/accel_rpc.sh@15 -- # waitforlisten 119176
00:11:13.401   16:55:06	-- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc
00:11:13.401   16:55:06	-- common/autotest_common.sh@829 -- # '[' -z 119176 ']'
00:11:13.401   16:55:06	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:11:13.401   16:55:06	-- common/autotest_common.sh@834 -- # local max_retries=100
00:11:13.401   16:55:06	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:11:13.401  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:11:13.401   16:55:06	-- common/autotest_common.sh@838 -- # xtrace_disable
00:11:13.401   16:55:06	-- common/autotest_common.sh@10 -- # set +x
00:11:13.660  [2024-11-19 16:55:06.336907] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:11:13.660  [2024-11-19 16:55:06.337170] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119176 ]
00:11:13.661  [2024-11-19 16:55:06.495020] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:11:13.920  [2024-11-19 16:55:06.569641] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:11:13.920  [2024-11-19 16:55:06.570071] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:11:13.920   16:55:06	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:11:13.920   16:55:06	-- common/autotest_common.sh@862 -- # return 0
00:11:13.920   16:55:06	-- accel/accel_rpc.sh@45 -- # [[ y == y ]]
00:11:13.920   16:55:06	-- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]]
00:11:13.920   16:55:06	-- accel/accel_rpc.sh@49 -- # [[ y == y ]]
00:11:13.920   16:55:06	-- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]]
00:11:13.920   16:55:06	-- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite
00:11:13.920   16:55:06	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:11:13.920   16:55:06	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:11:13.920   16:55:06	-- common/autotest_common.sh@10 -- # set +x
00:11:13.920  ************************************
00:11:13.920  START TEST accel_assign_opcode
00:11:13.920  ************************************
00:11:13.920   16:55:06	-- common/autotest_common.sh@1114 -- # accel_assign_opcode_test_suite
00:11:13.920   16:55:06	-- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect
00:11:13.920   16:55:06	-- common/autotest_common.sh@561 -- # xtrace_disable
00:11:13.920   16:55:06	-- common/autotest_common.sh@10 -- # set +x
00:11:13.920  [2024-11-19 16:55:06.655171] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect
00:11:13.920   16:55:06	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:13.920   16:55:06	-- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software
00:11:13.920   16:55:06	-- common/autotest_common.sh@561 -- # xtrace_disable
00:11:13.920   16:55:06	-- common/autotest_common.sh@10 -- # set +x
00:11:13.920  [2024-11-19 16:55:06.663144] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software
00:11:13.920   16:55:06	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:13.920   16:55:06	-- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init
00:11:13.920   16:55:06	-- common/autotest_common.sh@561 -- # xtrace_disable
00:11:13.920   16:55:06	-- common/autotest_common.sh@10 -- # set +x
00:11:14.179   16:55:06	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:14.179   16:55:06	-- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments
00:11:14.179   16:55:06	-- common/autotest_common.sh@561 -- # xtrace_disable
00:11:14.179   16:55:06	-- common/autotest_common.sh@10 -- # set +x
00:11:14.179   16:55:06	-- accel/accel_rpc.sh@42 -- # jq -r .copy
00:11:14.179   16:55:06	-- accel/accel_rpc.sh@42 -- # grep software
00:11:14.179   16:55:06	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:14.179  software
00:11:14.179  
00:11:14.179  real	0m0.387s
00:11:14.179  user	0m0.055s
00:11:14.179  sys	0m0.004s
00:11:14.179   16:55:07	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:11:14.179   16:55:07	-- common/autotest_common.sh@10 -- # set +x
00:11:14.179  ************************************
00:11:14.179  END TEST accel_assign_opcode
00:11:14.179  ************************************
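
The opcode test above exercises the plain RPC surface end to end; the same steps by hand against a --wait-for-rpc target, with paths and method names taken from the trace:

    # Hedged sketch: pin the copy opcode to the software module before
    # framework init, then read the assignment back, as the test does.
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$RPC" accel_assign_opc -o copy -m software
    "$RPC" framework_start_init
    "$RPC" accel_get_opc_assignments | jq -r .copy   # -> software
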
00:11:14.438   16:55:07	-- accel/accel_rpc.sh@55 -- # killprocess 119176
00:11:14.438   16:55:07	-- common/autotest_common.sh@936 -- # '[' -z 119176 ']'
00:11:14.438   16:55:07	-- common/autotest_common.sh@940 -- # kill -0 119176
00:11:14.438    16:55:07	-- common/autotest_common.sh@941 -- # uname
00:11:14.438   16:55:07	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:11:14.438    16:55:07	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 119176
00:11:14.438   16:55:07	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:11:14.438   16:55:07	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:11:14.438   16:55:07	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 119176'
00:11:14.438  killing process with pid 119176
00:11:14.438   16:55:07	-- common/autotest_common.sh@955 -- # kill 119176
00:11:14.438   16:55:07	-- common/autotest_common.sh@960 -- # wait 119176
00:11:15.006  ************************************
00:11:15.006  END TEST accel_rpc
00:11:15.006  ************************************
00:11:15.006  
00:11:15.006  real	0m1.759s
00:11:15.006  user	0m1.508s
00:11:15.006  sys	0m0.675s
00:11:15.006   16:55:07	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:11:15.006   16:55:07	-- common/autotest_common.sh@10 -- # set +x
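
killprocess, as traced above, probes the pid with kill -0 and checks the command name before signalling; the same pattern in isolation:

    # Hedged sketch of the kill/validate sequence from the trace.
    pid=119176   # pid taken from the run above; stale outside this job
    if kill -0 "$pid" 2>/dev/null; then            # does the pid still exist?
        name=$(ps --no-headers -o comm= "$pid")    # recover its command name
        echo "killing process with pid $pid ($name)"
        kill "$pid" && wait "$pid" 2>/dev/null     # wait only works for children
    fi
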
00:11:15.006   16:55:07	-- spdk/autotest.sh@178 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh
00:11:15.006   16:55:07	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:11:15.006   16:55:07	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:11:15.006   16:55:07	-- common/autotest_common.sh@10 -- # set +x
00:11:15.266  ************************************
00:11:15.266  START TEST app_cmdline
00:11:15.266  ************************************
00:11:15.266   16:55:07	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh
00:11:15.266  * Looking for test storage...
00:11:15.266  * Found test storage at /home/vagrant/spdk_repo/spdk/test/app
00:11:15.266    16:55:07	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:11:15.266     16:55:07	-- common/autotest_common.sh@1690 -- # lcov --version
00:11:15.266     16:55:07	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:11:15.266    16:55:08	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:11:15.266    16:55:08	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:11:15.266    16:55:08	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:11:15.266    16:55:08	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:11:15.266    16:55:08	-- scripts/common.sh@335 -- # IFS=.-:
00:11:15.266    16:55:08	-- scripts/common.sh@335 -- # read -ra ver1
00:11:15.266    16:55:08	-- scripts/common.sh@336 -- # IFS=.-:
00:11:15.266    16:55:08	-- scripts/common.sh@336 -- # read -ra ver2
00:11:15.266    16:55:08	-- scripts/common.sh@337 -- # local 'op=<'
00:11:15.266    16:55:08	-- scripts/common.sh@339 -- # ver1_l=2
00:11:15.266    16:55:08	-- scripts/common.sh@340 -- # ver2_l=1
00:11:15.266    16:55:08	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:11:15.266    16:55:08	-- scripts/common.sh@343 -- # case "$op" in
00:11:15.266    16:55:08	-- scripts/common.sh@344 -- # : 1
00:11:15.266    16:55:08	-- scripts/common.sh@363 -- # (( v = 0 ))
00:11:15.266    16:55:08	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:11:15.266     16:55:08	-- scripts/common.sh@364 -- # decimal 1
00:11:15.266     16:55:08	-- scripts/common.sh@352 -- # local d=1
00:11:15.266     16:55:08	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:11:15.266     16:55:08	-- scripts/common.sh@354 -- # echo 1
00:11:15.266    16:55:08	-- scripts/common.sh@364 -- # ver1[v]=1
00:11:15.266     16:55:08	-- scripts/common.sh@365 -- # decimal 2
00:11:15.266     16:55:08	-- scripts/common.sh@352 -- # local d=2
00:11:15.266     16:55:08	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:11:15.266     16:55:08	-- scripts/common.sh@354 -- # echo 2
00:11:15.266    16:55:08	-- scripts/common.sh@365 -- # ver2[v]=2
00:11:15.266    16:55:08	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:11:15.266    16:55:08	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:11:15.266    16:55:08	-- scripts/common.sh@367 -- # return 0
00:11:15.266    16:55:08	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:11:15.266    16:55:08	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:11:15.266  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:15.266  		--rc genhtml_branch_coverage=1
00:11:15.266  		--rc genhtml_function_coverage=1
00:11:15.267  		--rc genhtml_legend=1
00:11:15.267  		--rc geninfo_all_blocks=1
00:11:15.267  		--rc geninfo_unexecuted_blocks=1
00:11:15.267  		
00:11:15.267  		'
00:11:15.267    16:55:08	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:11:15.267  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:15.267  		--rc genhtml_branch_coverage=1
00:11:15.267  		--rc genhtml_function_coverage=1
00:11:15.267  		--rc genhtml_legend=1
00:11:15.267  		--rc geninfo_all_blocks=1
00:11:15.267  		--rc geninfo_unexecuted_blocks=1
00:11:15.267  		
00:11:15.267  		'
00:11:15.267    16:55:08	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:11:15.267  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:15.267  		--rc genhtml_branch_coverage=1
00:11:15.267  		--rc genhtml_function_coverage=1
00:11:15.267  		--rc genhtml_legend=1
00:11:15.267  		--rc geninfo_all_blocks=1
00:11:15.267  		--rc geninfo_unexecuted_blocks=1
00:11:15.267  		
00:11:15.267  		'
00:11:15.267    16:55:08	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:11:15.267  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:15.267  		--rc genhtml_branch_coverage=1
00:11:15.267  		--rc genhtml_function_coverage=1
00:11:15.267  		--rc genhtml_legend=1
00:11:15.267  		--rc geninfo_all_blocks=1
00:11:15.267  		--rc geninfo_unexecuted_blocks=1
00:11:15.267  		
00:11:15.267  		'
00:11:15.267   16:55:08	-- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT
00:11:15.267   16:55:08	-- app/cmdline.sh@17 -- # spdk_tgt_pid=119285
00:11:15.267   16:55:08	-- app/cmdline.sh@18 -- # waitforlisten 119285
00:11:15.267   16:55:08	-- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods
00:11:15.267   16:55:08	-- common/autotest_common.sh@829 -- # '[' -z 119285 ']'
00:11:15.267   16:55:08	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:11:15.267   16:55:08	-- common/autotest_common.sh@834 -- # local max_retries=100
00:11:15.267   16:55:08	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:11:15.267  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:11:15.267   16:55:08	-- common/autotest_common.sh@838 -- # xtrace_disable
00:11:15.267   16:55:08	-- common/autotest_common.sh@10 -- # set +x
00:11:15.526  [2024-11-19 16:55:08.140302] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:11:15.526  [2024-11-19 16:55:08.140503] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119285 ]
00:11:15.526  [2024-11-19 16:55:08.287339] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:11:15.526  [2024-11-19 16:55:08.369715] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:11:15.526  [2024-11-19 16:55:08.369969] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:11:16.463   16:55:09	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:11:16.463   16:55:09	-- common/autotest_common.sh@862 -- # return 0
00:11:16.463   16:55:09	-- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version
00:11:16.722  {
00:11:16.722    "version": "SPDK v24.01.1-pre git sha1 c13c99a5e",
00:11:16.722    "fields": {
00:11:16.722      "major": 24,
00:11:16.722      "minor": 1,
00:11:16.722      "patch": 1,
00:11:16.722      "suffix": "-pre",
00:11:16.722      "commit": "c13c99a5e"
00:11:16.722    }
00:11:16.722  }
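
That version blob is the raw reply of the spdk_get_version RPC; extracting a single field is a one-liner (the jq filter is my own choice, the RPC name and path are from the trace):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$RPC" spdk_get_version | jq -r .version   # -> SPDK v24.01.1-pre git sha1 c13c99a5e
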
00:11:16.722   16:55:09	-- app/cmdline.sh@22 -- # expected_methods=()
00:11:16.722   16:55:09	-- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods")
00:11:16.722   16:55:09	-- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version")
00:11:16.722   16:55:09	-- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort))
00:11:16.722    16:55:09	-- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods
00:11:16.722    16:55:09	-- common/autotest_common.sh@561 -- # xtrace_disable
00:11:16.722    16:55:09	-- app/cmdline.sh@26 -- # jq -r '.[]'
00:11:16.722    16:55:09	-- common/autotest_common.sh@10 -- # set +x
00:11:16.722    16:55:09	-- app/cmdline.sh@26 -- # sort
00:11:16.722    16:55:09	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:16.722   16:55:09	-- app/cmdline.sh@27 -- # (( 2 == 2 ))
00:11:16.722   16:55:09	-- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]]
00:11:16.722   16:55:09	-- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats
00:11:16.722   16:55:09	-- common/autotest_common.sh@650 -- # local es=0
00:11:16.722   16:55:09	-- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats
00:11:16.722   16:55:09	-- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:11:16.722   16:55:09	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:11:16.722    16:55:09	-- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:11:16.722   16:55:09	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:11:16.722    16:55:09	-- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:11:16.722   16:55:09	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:11:16.722   16:55:09	-- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:11:16.722   16:55:09	-- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]]
00:11:16.722   16:55:09	-- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats
00:11:16.982  request:
00:11:16.982  {
00:11:16.982    "method": "env_dpdk_get_mem_stats",
00:11:16.982    "req_id": 1
00:11:16.982  }
00:11:16.982  Got JSON-RPC error response
00:11:16.982  response:
00:11:16.982  {
00:11:16.982    "code": -32601,
00:11:16.982    "message": "Method not found"
00:11:16.982  }
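
The -32601 reply is the expected outcome: the target was started with --rpcs-allowed spdk_get_version,rpc_get_methods (the @16 line above), so every other method is rejected. A sketch of provoking the same denial; the backgrounding and sleep are my own crude stand-ins for the harness's socket polling:

    SPDK=/home/vagrant/spdk_repo/spdk
    "$SPDK/build/bin/spdk_tgt" --rpcs-allowed spdk_get_version,rpc_get_methods &
    tgt=$!
    sleep 1                                          # crude readiness wait
    "$SPDK/scripts/rpc.py" env_dpdk_get_mem_stats \
        || echo 'blocked as expected (-32601)'
    kill "$tgt"
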
00:11:16.982   16:55:09	-- common/autotest_common.sh@653 -- # es=1
00:11:16.982   16:55:09	-- common/autotest_common.sh@661 -- # (( es > 128 ))
00:11:16.982   16:55:09	-- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:11:16.982   16:55:09	-- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:11:16.982   16:55:09	-- app/cmdline.sh@1 -- # killprocess 119285
00:11:16.982   16:55:09	-- common/autotest_common.sh@936 -- # '[' -z 119285 ']'
00:11:16.982   16:55:09	-- common/autotest_common.sh@940 -- # kill -0 119285
00:11:16.982    16:55:09	-- common/autotest_common.sh@941 -- # uname
00:11:16.982   16:55:09	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:11:16.982    16:55:09	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 119285
00:11:16.982   16:55:09	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:11:16.982   16:55:09	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:11:16.982  killing process with pid 119285
00:11:16.982   16:55:09	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 119285'
00:11:16.982   16:55:09	-- common/autotest_common.sh@955 -- # kill 119285
00:11:16.982   16:55:09	-- common/autotest_common.sh@960 -- # wait 119285
00:11:17.550  ************************************
00:11:17.550  END TEST app_cmdline
00:11:17.550  ************************************
00:11:17.550  
00:11:17.550  real	0m2.508s
00:11:17.550  user	0m2.789s
00:11:17.550  sys	0m0.742s
00:11:17.550   16:55:10	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:11:17.550   16:55:10	-- common/autotest_common.sh@10 -- # set +x
00:11:17.810   16:55:10	-- spdk/autotest.sh@179 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh
00:11:17.810   16:55:10	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:11:17.810   16:55:10	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:11:17.810   16:55:10	-- common/autotest_common.sh@10 -- # set +x
00:11:17.810  ************************************
00:11:17.810  START TEST version
00:11:17.810  ************************************
00:11:17.810   16:55:10	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh
00:11:17.810  * Looking for test storage...
00:11:17.810  * Found test storage at /home/vagrant/spdk_repo/spdk/test/app
00:11:17.810    16:55:10	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:11:17.810     16:55:10	-- common/autotest_common.sh@1690 -- # lcov --version
00:11:17.810     16:55:10	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:11:17.810    16:55:10	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:11:17.810    16:55:10	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:11:17.810    16:55:10	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:11:17.810    16:55:10	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:11:17.810    16:55:10	-- scripts/common.sh@335 -- # IFS=.-:
00:11:17.810    16:55:10	-- scripts/common.sh@335 -- # read -ra ver1
00:11:17.810    16:55:10	-- scripts/common.sh@336 -- # IFS=.-:
00:11:17.810    16:55:10	-- scripts/common.sh@336 -- # read -ra ver2
00:11:17.810    16:55:10	-- scripts/common.sh@337 -- # local 'op=<'
00:11:17.810    16:55:10	-- scripts/common.sh@339 -- # ver1_l=2
00:11:17.810    16:55:10	-- scripts/common.sh@340 -- # ver2_l=1
00:11:17.810    16:55:10	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:11:17.810    16:55:10	-- scripts/common.sh@343 -- # case "$op" in
00:11:17.810    16:55:10	-- scripts/common.sh@344 -- # : 1
00:11:17.810    16:55:10	-- scripts/common.sh@363 -- # (( v = 0 ))
00:11:17.810    16:55:10	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:11:17.810     16:55:10	-- scripts/common.sh@364 -- # decimal 1
00:11:17.810     16:55:10	-- scripts/common.sh@352 -- # local d=1
00:11:17.810     16:55:10	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:11:17.810     16:55:10	-- scripts/common.sh@354 -- # echo 1
00:11:17.810    16:55:10	-- scripts/common.sh@364 -- # ver1[v]=1
00:11:17.810     16:55:10	-- scripts/common.sh@365 -- # decimal 2
00:11:17.810     16:55:10	-- scripts/common.sh@352 -- # local d=2
00:11:17.810     16:55:10	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:11:17.810     16:55:10	-- scripts/common.sh@354 -- # echo 2
00:11:17.810    16:55:10	-- scripts/common.sh@365 -- # ver2[v]=2
00:11:17.810    16:55:10	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:11:17.810    16:55:10	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:11:17.810    16:55:10	-- scripts/common.sh@367 -- # return 0
00:11:17.810    16:55:10	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:11:17.810    16:55:10	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:11:17.810  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:17.810  		--rc genhtml_branch_coverage=1
00:11:17.810  		--rc genhtml_function_coverage=1
00:11:17.810  		--rc genhtml_legend=1
00:11:17.810  		--rc geninfo_all_blocks=1
00:11:17.810  		--rc geninfo_unexecuted_blocks=1
00:11:17.810  		
00:11:17.810  		'
00:11:17.811    16:55:10	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:11:17.811  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:17.811  		--rc genhtml_branch_coverage=1
00:11:17.811  		--rc genhtml_function_coverage=1
00:11:17.811  		--rc genhtml_legend=1
00:11:17.811  		--rc geninfo_all_blocks=1
00:11:17.811  		--rc geninfo_unexecuted_blocks=1
00:11:17.811  		
00:11:17.811  		'
00:11:17.811    16:55:10	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:11:17.811  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:17.811  		--rc genhtml_branch_coverage=1
00:11:17.811  		--rc genhtml_function_coverage=1
00:11:17.811  		--rc genhtml_legend=1
00:11:17.811  		--rc geninfo_all_blocks=1
00:11:17.811  		--rc geninfo_unexecuted_blocks=1
00:11:17.811  		
00:11:17.811  		'
00:11:17.811    16:55:10	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:11:17.811  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:17.811  		--rc genhtml_branch_coverage=1
00:11:17.811  		--rc genhtml_function_coverage=1
00:11:17.811  		--rc genhtml_legend=1
00:11:17.811  		--rc geninfo_all_blocks=1
00:11:17.811  		--rc geninfo_unexecuted_blocks=1
00:11:17.811  		
00:11:17.811  		'
00:11:17.811    16:55:10	-- app/version.sh@17 -- # get_header_version major
00:11:17.811    16:55:10	-- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h
00:11:17.811    16:55:10	-- app/version.sh@14 -- # cut -f2
00:11:17.811    16:55:10	-- app/version.sh@14 -- # tr -d '"'
00:11:18.070   16:55:10	-- app/version.sh@17 -- # major=24
00:11:18.070    16:55:10	-- app/version.sh@18 -- # get_header_version minor
00:11:18.070    16:55:10	-- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h
00:11:18.070    16:55:10	-- app/version.sh@14 -- # tr -d '"'
00:11:18.070    16:55:10	-- app/version.sh@14 -- # cut -f2
00:11:18.070   16:55:10	-- app/version.sh@18 -- # minor=1
00:11:18.070    16:55:10	-- app/version.sh@19 -- # get_header_version patch
00:11:18.070    16:55:10	-- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h
00:11:18.070    16:55:10	-- app/version.sh@14 -- # cut -f2
00:11:18.070    16:55:10	-- app/version.sh@14 -- # tr -d '"'
00:11:18.070   16:55:10	-- app/version.sh@19 -- # patch=1
00:11:18.070    16:55:10	-- app/version.sh@20 -- # get_header_version suffix
00:11:18.070    16:55:10	-- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h
00:11:18.070    16:55:10	-- app/version.sh@14 -- # tr -d '"'
00:11:18.070    16:55:10	-- app/version.sh@14 -- # cut -f2
00:11:18.070   16:55:10	-- app/version.sh@20 -- # suffix=-pre
00:11:18.070   16:55:10	-- app/version.sh@22 -- # version=24.1
00:11:18.070   16:55:10	-- app/version.sh@25 -- # (( patch != 0 ))
00:11:18.070   16:55:10	-- app/version.sh@25 -- # version=24.1.1
00:11:18.070   16:55:10	-- app/version.sh@28 -- # version=24.1.1rc0
00:11:18.070   16:55:10	-- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python
00:11:18.070    16:55:10	-- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)'
00:11:18.070   16:55:10	-- app/version.sh@30 -- # py_version=24.1.1rc0
00:11:18.070   16:55:10	-- app/version.sh@31 -- # [[ 24.1.1rc0 == \2\4\.\1\.\1\r\c\0 ]]
00:11:18.070  ************************************
00:11:18.070  END TEST version
00:11:18.070  ************************************
00:11:18.070  
00:11:18.070  real	0m0.298s
00:11:18.070  user	0m0.213s
00:11:18.070  sys	0m0.132s
00:11:18.070   16:55:10	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:11:18.070   16:55:10	-- common/autotest_common.sh@10 -- # set +x
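The version checks in TEST version reduce to one helper. Reconstructed from the trace (a sketch; app/version.sh may differ in detail, and the bare cut -f2 assumes version.h separates the macro name and value with a tab, as the output here suggests):

    get_header_version() {  # e.g. get_header_version major -> 24
        grep -E "^#define SPDK_VERSION_${1^^}[[:space:]]+" \
            /home/vagrant/spdk_repo/spdk/include/spdk/version.h |
            cut -f2 | tr -d '"'
    }
    major=$(get_header_version major)    # 24
    minor=$(get_header_version minor)    # 1
    patch=$(get_header_version patch)    # 1
    suffix=$(get_header_version suffix)  # -pre
    version=$major.$minor; ((patch != 0)) && version=$version.$patch  # 24.1.1

The -pre suffix then yields the rc0 tag (24.1.1rc0), which the test compares against python3 -c 'import spdk; print(spdk.__version__)'.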
00:11:18.070   16:55:10	-- spdk/autotest.sh@181 -- # '[' 1 -eq 1 ']'
00:11:18.070   16:55:10	-- spdk/autotest.sh@182 -- # run_test blockdev_general /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh
00:11:18.070   16:55:10	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:11:18.070   16:55:10	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:11:18.070   16:55:10	-- common/autotest_common.sh@10 -- # set +x
00:11:18.070  ************************************
00:11:18.070  START TEST blockdev_general
00:11:18.070  ************************************
00:11:18.070   16:55:10	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh
00:11:18.070  * Looking for test storage...
00:11:18.070  * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev
00:11:18.070    16:55:10	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:11:18.070     16:55:10	-- common/autotest_common.sh@1690 -- # lcov --version
00:11:18.070     16:55:10	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:11:18.329    16:55:10	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:11:18.329    16:55:10	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:11:18.329    16:55:11	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:11:18.329    16:55:11	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:11:18.329    16:55:11	-- scripts/common.sh@335 -- # IFS=.-:
00:11:18.329    16:55:11	-- scripts/common.sh@335 -- # read -ra ver1
00:11:18.329    16:55:11	-- scripts/common.sh@336 -- # IFS=.-:
00:11:18.329    16:55:11	-- scripts/common.sh@336 -- # read -ra ver2
00:11:18.329    16:55:11	-- scripts/common.sh@337 -- # local 'op=<'
00:11:18.329    16:55:11	-- scripts/common.sh@339 -- # ver1_l=2
00:11:18.329    16:55:11	-- scripts/common.sh@340 -- # ver2_l=1
00:11:18.329    16:55:11	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:11:18.329    16:55:11	-- scripts/common.sh@343 -- # case "$op" in
00:11:18.329    16:55:11	-- scripts/common.sh@344 -- # : 1
00:11:18.329    16:55:11	-- scripts/common.sh@363 -- # (( v = 0 ))
00:11:18.329    16:55:11	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:11:18.329     16:55:11	-- scripts/common.sh@364 -- # decimal 1
00:11:18.329     16:55:11	-- scripts/common.sh@352 -- # local d=1
00:11:18.329     16:55:11	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:11:18.329     16:55:11	-- scripts/common.sh@354 -- # echo 1
00:11:18.329    16:55:11	-- scripts/common.sh@364 -- # ver1[v]=1
00:11:18.329     16:55:11	-- scripts/common.sh@365 -- # decimal 2
00:11:18.329     16:55:11	-- scripts/common.sh@352 -- # local d=2
00:11:18.329     16:55:11	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:11:18.329     16:55:11	-- scripts/common.sh@354 -- # echo 2
00:11:18.329    16:55:11	-- scripts/common.sh@365 -- # ver2[v]=2
00:11:18.329    16:55:11	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:11:18.329    16:55:11	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:11:18.329    16:55:11	-- scripts/common.sh@367 -- # return 0
00:11:18.329    16:55:11	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:11:18.329    16:55:11	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:11:18.329  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:18.329  		--rc genhtml_branch_coverage=1
00:11:18.329  		--rc genhtml_function_coverage=1
00:11:18.329  		--rc genhtml_legend=1
00:11:18.329  		--rc geninfo_all_blocks=1
00:11:18.329  		--rc geninfo_unexecuted_blocks=1
00:11:18.329  		
00:11:18.329  		'
00:11:18.329    16:55:11	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:11:18.329  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:18.329  		--rc genhtml_branch_coverage=1
00:11:18.329  		--rc genhtml_function_coverage=1
00:11:18.329  		--rc genhtml_legend=1
00:11:18.329  		--rc geninfo_all_blocks=1
00:11:18.329  		--rc geninfo_unexecuted_blocks=1
00:11:18.329  		
00:11:18.329  		'
00:11:18.329    16:55:11	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:11:18.329  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:18.329  		--rc genhtml_branch_coverage=1
00:11:18.329  		--rc genhtml_function_coverage=1
00:11:18.329  		--rc genhtml_legend=1
00:11:18.329  		--rc geninfo_all_blocks=1
00:11:18.329  		--rc geninfo_unexecuted_blocks=1
00:11:18.330  		
00:11:18.330  		'
00:11:18.330    16:55:11	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:11:18.330  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:18.330  		--rc genhtml_branch_coverage=1
00:11:18.330  		--rc genhtml_function_coverage=1
00:11:18.330  		--rc genhtml_legend=1
00:11:18.330  		--rc geninfo_all_blocks=1
00:11:18.330  		--rc geninfo_unexecuted_blocks=1
00:11:18.330  		
00:11:18.330  		'
00:11:18.330   16:55:11	-- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh
00:11:18.330    16:55:11	-- bdev/nbd_common.sh@6 -- # set -e
00:11:18.330   16:55:11	-- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd
00:11:18.330   16:55:11	-- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
00:11:18.330   16:55:11	-- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json
00:11:18.330   16:55:11	-- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json
00:11:18.330   16:55:11	-- bdev/blockdev.sh@18 -- # :
00:11:18.330   16:55:11	-- bdev/blockdev.sh@668 -- # QOS_DEV_1=Malloc_0
00:11:18.330   16:55:11	-- bdev/blockdev.sh@669 -- # QOS_DEV_2=Null_1
00:11:18.330   16:55:11	-- bdev/blockdev.sh@670 -- # QOS_RUN_TIME=5
00:11:18.330    16:55:11	-- bdev/blockdev.sh@672 -- # uname -s
00:11:18.330   16:55:11	-- bdev/blockdev.sh@672 -- # '[' Linux = Linux ']'
00:11:18.330   16:55:11	-- bdev/blockdev.sh@674 -- # PRE_RESERVED_MEM=0
00:11:18.330   16:55:11	-- bdev/blockdev.sh@680 -- # test_type=bdev
00:11:18.330   16:55:11	-- bdev/blockdev.sh@681 -- # crypto_device=
00:11:18.330   16:55:11	-- bdev/blockdev.sh@682 -- # dek=
00:11:18.330   16:55:11	-- bdev/blockdev.sh@683 -- # env_ctx=
00:11:18.330   16:55:11	-- bdev/blockdev.sh@684 -- # wait_for_rpc=
00:11:18.330   16:55:11	-- bdev/blockdev.sh@685 -- # '[' -n '' ']'
00:11:18.330   16:55:11	-- bdev/blockdev.sh@688 -- # [[ bdev == bdev ]]
00:11:18.330   16:55:11	-- bdev/blockdev.sh@689 -- # wait_for_rpc=--wait-for-rpc
00:11:18.330   16:55:11	-- bdev/blockdev.sh@691 -- # start_spdk_tgt
00:11:18.330   16:55:11	-- bdev/blockdev.sh@45 -- # spdk_tgt_pid=119468
00:11:18.330   16:55:11	-- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT
00:11:18.330   16:55:11	-- bdev/blockdev.sh@47 -- # waitforlisten 119468
00:11:18.330   16:55:11	-- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' --wait-for-rpc
00:11:18.330   16:55:11	-- common/autotest_common.sh@829 -- # '[' -z 119468 ']'
00:11:18.330   16:55:11	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:11:18.330   16:55:11	-- common/autotest_common.sh@834 -- # local max_retries=100
00:11:18.330  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:11:18.330   16:55:11	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:11:18.330   16:55:11	-- common/autotest_common.sh@838 -- # xtrace_disable
00:11:18.330   16:55:11	-- common/autotest_common.sh@10 -- # set +x
00:11:18.330  [2024-11-19 16:55:11.129640] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:11:18.330  [2024-11-19 16:55:11.130074] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119468 ]
00:11:18.593  [2024-11-19 16:55:11.282285] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:11:18.593  [2024-11-19 16:55:11.333660] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:11:18.593  [2024-11-19 16:55:11.334053] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:11:19.545   16:55:12	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:11:19.545   16:55:12	-- common/autotest_common.sh@862 -- # return 0
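The lines above are SPDK's standard "start paused, then configure" pattern: spdk_tgt is launched with --wait-for-rpc so bdevs can be set up over the RPC socket before the framework starts processing I/O. A minimal stand-in for waitforlisten (sketch; the real autotest_common.sh helper also caps retries at 100 and verifies the process is still alive):

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc &
    spdk_tgt_pid=$!
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 1 rpc_get_methods &> /dev/null; do
        sleep 0.5  # poll until the target is listening on /var/tmp/spdk.sock
    done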
00:11:19.546   16:55:12	-- bdev/blockdev.sh@692 -- # case "$test_type" in
00:11:19.546   16:55:12	-- bdev/blockdev.sh@694 -- # setup_bdev_conf
00:11:19.546   16:55:12	-- bdev/blockdev.sh@51 -- # rpc_cmd
00:11:19.546   16:55:12	-- common/autotest_common.sh@561 -- # xtrace_disable
00:11:19.546   16:55:12	-- common/autotest_common.sh@10 -- # set +x
00:11:19.546  [2024-11-19 16:55:12.317412] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1
00:11:19.546  [2024-11-19 16:55:12.317495] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1
00:11:19.546  
00:11:19.546  [2024-11-19 16:55:12.325378] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2
00:11:19.546  [2024-11-19 16:55:12.325444] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2
00:11:19.546  
00:11:19.546  Malloc0
00:11:19.546  Malloc1
00:11:19.546  Malloc2
00:11:19.546  Malloc3
00:11:19.804  Malloc4
00:11:19.804  Malloc5
00:11:19.804  Malloc6
00:11:19.804  Malloc7
00:11:19.804  Malloc8
00:11:19.804  Malloc9
00:11:19.804  [2024-11-19 16:55:12.492238] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3
00:11:19.804  [2024-11-19 16:55:12.492337] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:11:19.804  [2024-11-19 16:55:12.492384] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480
00:11:19.804  [2024-11-19 16:55:12.492416] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:11:19.804  [2024-11-19 16:55:12.495191] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:11:19.804  [2024-11-19 16:55:12.495250] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT
00:11:19.804  TestPT
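The vbdev_passthru notices above record TestPT being layered on Malloc3: the passthru module matches its configured base bdev, opens and claims it, then registers the pt_bdev. The RPC form of that configuration would be (sketch, reconstructed from the notices rather than copied from this log):

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_passthru_create -b Malloc3 -p TestPT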
00:11:19.804   16:55:12	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:19.804   16:55:12	-- bdev/blockdev.sh@74 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/bdev/aiofile bs=2048 count=5000
00:11:19.804  5000+0 records in
00:11:19.804  5000+0 records out
00:11:19.804  10240000 bytes (10 MB, 9.8 MiB) copied, 0.0356494 s, 287 MB/s
00:11:19.804   16:55:12	-- bdev/blockdev.sh@75 -- # rpc_cmd bdev_aio_create /home/vagrant/spdk_repo/spdk/test/bdev/aiofile AIO0 2048
00:11:19.804   16:55:12	-- common/autotest_common.sh@561 -- # xtrace_disable
00:11:19.804   16:55:12	-- common/autotest_common.sh@10 -- # set +x
00:11:19.804  AIO0
00:11:19.804   16:55:12	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
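The two steps above, in isolation: dd backs a 10240000-byte file (5000 blocks of 2048 bytes), and bdev_aio_create registers it as the AIO0 bdev with a 2048-byte block size. Equivalent manual commands (the file path below is a placeholder):

    dd if=/dev/zero of=/tmp/aiofile bs=2048 count=5000
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /tmp/aiofile AIO0 2048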
00:11:19.804   16:55:12	-- bdev/blockdev.sh@735 -- # rpc_cmd bdev_wait_for_examine
00:11:19.804   16:55:12	-- common/autotest_common.sh@561 -- # xtrace_disable
00:11:19.804   16:55:12	-- common/autotest_common.sh@10 -- # set +x
00:11:19.804   16:55:12	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:19.804   16:55:12	-- bdev/blockdev.sh@738 -- # cat
00:11:19.804    16:55:12	-- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n accel
00:11:19.804    16:55:12	-- common/autotest_common.sh@561 -- # xtrace_disable
00:11:19.804    16:55:12	-- common/autotest_common.sh@10 -- # set +x
00:11:19.804    16:55:12	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:19.804    16:55:12	-- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n bdev
00:11:19.804    16:55:12	-- common/autotest_common.sh@561 -- # xtrace_disable
00:11:19.804    16:55:12	-- common/autotest_common.sh@10 -- # set +x
00:11:20.064    16:55:12	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:20.064    16:55:12	-- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n iobuf
00:11:20.064    16:55:12	-- common/autotest_common.sh@561 -- # xtrace_disable
00:11:20.064    16:55:12	-- common/autotest_common.sh@10 -- # set +x
00:11:20.064    16:55:12	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:20.064   16:55:12	-- bdev/blockdev.sh@746 -- # mapfile -t bdevs
00:11:20.064    16:55:12	-- bdev/blockdev.sh@746 -- # rpc_cmd bdev_get_bdevs
00:11:20.064    16:55:12	-- bdev/blockdev.sh@746 -- # jq -r '.[] | select(.claimed == false)'
00:11:20.064    16:55:12	-- common/autotest_common.sh@561 -- # xtrace_disable
00:11:20.064    16:55:12	-- common/autotest_common.sh@10 -- # set +x
00:11:20.064    16:55:12	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:20.064   16:55:12	-- bdev/blockdev.sh@747 -- # mapfile -t bdevs_name
00:11:20.064    16:55:12	-- bdev/blockdev.sh@747 -- # jq -r .name
00:11:20.066    16:55:12	-- bdev/blockdev.sh@747 -- # printf '%s\n' '{' '  "name": "Malloc0",' '  "aliases": [' '    "b62e43e8-a683-4f4b-add9-e442c3f3ddab"' '  ],' '  "product_name": "Malloc disk",' '  "block_size": 512,' '  "num_blocks": 65536,' '  "uuid": "b62e43e8-a683-4f4b-add9-e442c3f3ddab",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 20000,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "write_zeroes": true,' '    "flush": true,' '    "reset": true,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "nvme_admin": false,' '    "nvme_io": false' '  },' '  "memory_domains": [' '    {' '      "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' '      "dma_device_type": 2' '    }' '  ],' '  "driver_specific": {}' '}' '{' '  "name": "Malloc1p0",' '  "aliases": [' '    "a88f24b5-b89f-5bf6-bfe1-a6b648f431a8"' '  ],' '  "product_name": "Split Disk",' '  "block_size": 512,' '  "num_blocks": 32768,' '  "uuid": "a88f24b5-b89f-5bf6-bfe1-a6b648f431a8",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "write_zeroes": true,' '    "flush": true,' '    "reset": true,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "nvme_admin": false,' '    "nvme_io": false' '  },' '  "driver_specific": {' '    "split": {' '      "base_bdev": "Malloc1",' '      "offset_blocks": 0' '    }' '  }' '}' '{' '  "name": "Malloc1p1",' '  "aliases": [' '    "a7267782-9e1a-5727-9932-15e5353a8a4a"' '  ],' '  "product_name": "Split Disk",' '  "block_size": 512,' '  "num_blocks": 32768,' '  "uuid": "a7267782-9e1a-5727-9932-15e5353a8a4a",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "write_zeroes": true,' '    "flush": true,' '    "reset": true,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "nvme_admin": false,' '    "nvme_io": false' '  },' '  "driver_specific": {' '    "split": {' '      "base_bdev": "Malloc1",' '      "offset_blocks": 32768' '    }' '  }' '}' '{' '  "name": "Malloc2p0",' '  "aliases": [' '    "de8f2ef3-ec97-5d3b-a5ff-e5a96142e5fe"' '  ],' '  "product_name": "Split Disk",' '  "block_size": 512,' '  "num_blocks": 8192,' '  "uuid": "de8f2ef3-ec97-5d3b-a5ff-e5a96142e5fe",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "write_zeroes": true,' '    "flush": true,' '    "reset": true,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "nvme_admin": false,' '    "nvme_io": false' '  },' '  "driver_specific": {' '    "split": {' '      "base_bdev": "Malloc2",' '      "offset_blocks": 0' '    }' '  }' '}' '{' '  "name": "Malloc2p1",' '  "aliases": [' '    
"5f787359-a152-560d-97ed-a2cec882bfcb"' '  ],' '  "product_name": "Split Disk",' '  "block_size": 512,' '  "num_blocks": 8192,' '  "uuid": "5f787359-a152-560d-97ed-a2cec882bfcb",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "write_zeroes": true,' '    "flush": true,' '    "reset": true,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "nvme_admin": false,' '    "nvme_io": false' '  },' '  "driver_specific": {' '    "split": {' '      "base_bdev": "Malloc2",' '      "offset_blocks": 8192' '    }' '  }' '}' '{' '  "name": "Malloc2p2",' '  "aliases": [' '    "ccf78d51-1259-5bad-b89c-f47ae05576f6"' '  ],' '  "product_name": "Split Disk",' '  "block_size": 512,' '  "num_blocks": 8192,' '  "uuid": "ccf78d51-1259-5bad-b89c-f47ae05576f6",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "write_zeroes": true,' '    "flush": true,' '    "reset": true,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "nvme_admin": false,' '    "nvme_io": false' '  },' '  "driver_specific": {' '    "split": {' '      "base_bdev": "Malloc2",' '      "offset_blocks": 16384' '    }' '  }' '}' '{' '  "name": "Malloc2p3",' '  "aliases": [' '    "de7fdfab-0018-58f7-a974-aadd0bf6f1d4"' '  ],' '  "product_name": "Split Disk",' '  "block_size": 512,' '  "num_blocks": 8192,' '  "uuid": "de7fdfab-0018-58f7-a974-aadd0bf6f1d4",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "write_zeroes": true,' '    "flush": true,' '    "reset": true,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "nvme_admin": false,' '    "nvme_io": false' '  },' '  "driver_specific": {' '    "split": {' '      "base_bdev": "Malloc2",' '      "offset_blocks": 24576' '    }' '  }' '}' '{' '  "name": "Malloc2p4",' '  "aliases": [' '    "6e412625-f267-5120-a11a-2ec4b63328f3"' '  ],' '  "product_name": "Split Disk",' '  "block_size": 512,' '  "num_blocks": 8192,' '  "uuid": "6e412625-f267-5120-a11a-2ec4b63328f3",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "write_zeroes": true,' '    "flush": true,' '    "reset": true,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "nvme_admin": false,' '    "nvme_io": false' '  },' '  "driver_specific": {' '    "split": {' '      "base_bdev": "Malloc2",' '      "offset_blocks": 32768' '    }' '  }' '}' '{' '  "name": "Malloc2p5",' '  "aliases": [' '    "9e422c63-ecbb-597a-86a5-1a3aa6e72ade"' '  ],' '  "product_name": "Split Disk",' '  "block_size": 512,' '  "num_blocks": 8192,' '  "uuid": "9e422c63-ecbb-597a-86a5-1a3aa6e72ade",' '  
"assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "write_zeroes": true,' '    "flush": true,' '    "reset": true,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "nvme_admin": false,' '    "nvme_io": false' '  },' '  "driver_specific": {' '    "split": {' '      "base_bdev": "Malloc2",' '      "offset_blocks": 40960' '    }' '  }' '}' '{' '  "name": "Malloc2p6",' '  "aliases": [' '    "42c74c99-2958-5bb3-96fb-a8f1da5f66ff"' '  ],' '  "product_name": "Split Disk",' '  "block_size": 512,' '  "num_blocks": 8192,' '  "uuid": "42c74c99-2958-5bb3-96fb-a8f1da5f66ff",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "write_zeroes": true,' '    "flush": true,' '    "reset": true,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "nvme_admin": false,' '    "nvme_io": false' '  },' '  "driver_specific": {' '    "split": {' '      "base_bdev": "Malloc2",' '      "offset_blocks": 49152' '    }' '  }' '}' '{' '  "name": "Malloc2p7",' '  "aliases": [' '    "5a6df78e-dde9-596e-b50f-1f1ebba5717c"' '  ],' '  "product_name": "Split Disk",' '  "block_size": 512,' '  "num_blocks": 8192,' '  "uuid": "5a6df78e-dde9-596e-b50f-1f1ebba5717c",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "write_zeroes": true,' '    "flush": true,' '    "reset": true,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "nvme_admin": false,' '    "nvme_io": false' '  },' '  "driver_specific": {' '    "split": {' '      "base_bdev": "Malloc2",' '      "offset_blocks": 57344' '    }' '  }' '}' '{' '  "name": "TestPT",' '  "aliases": [' '    "1cf32d11-d96a-5a51-a03e-4613a9e884b5"' '  ],' '  "product_name": "passthru",' '  "block_size": 512,' '  "num_blocks": 65536,' '  "uuid": "1cf32d11-d96a-5a51-a03e-4613a9e884b5",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "write_zeroes": true,' '    "flush": true,' '    "reset": true,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "nvme_admin": false,' '    "nvme_io": false' '  },' '  "memory_domains": [' '    {' '      "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' '      "dma_device_type": 2' '    }' '  ],' '  "driver_specific": {' '    "passthru": {' '      "name": "TestPT",' '      "base_bdev_name": "Malloc3"' '    }' '  }' '}' '{' '  "name": "raid0",' '  "aliases": [' '    "aadbe5dd-9ccb-4ca6-add4-d686bb123f4c"' '  ],' '  "product_name": "Raid Volume",' '  "block_size": 512,' '  "num_blocks": 131072,' '  "uuid": "aadbe5dd-9ccb-4ca6-add4-d686bb123f4c",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    
"rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "write_zeroes": true,' '    "flush": true,' '    "reset": true,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": false,' '    "nvme_admin": false,' '    "nvme_io": false' '  },' '  "memory_domains": [' '    {' '      "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' '      "dma_device_type": 2' '    },' '    {' '      "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' '      "dma_device_type": 2' '    }' '  ],' '  "driver_specific": {' '    "raid": {' '      "uuid": "aadbe5dd-9ccb-4ca6-add4-d686bb123f4c",' '      "strip_size_kb": 64,' '      "state": "online",' '      "raid_level": "raid0",' '      "superblock": false,' '      "num_base_bdevs": 2,' '      "num_base_bdevs_discovered": 2,' '      "num_base_bdevs_operational": 2,' '      "base_bdevs_list": [' '        {' '          "name": "Malloc4",' '          "uuid": "8c11f5f9-2ff6-4966-8580-1cc0824801d7",' '          "is_configured": true,' '          "data_offset": 0,' '          "data_size": 65536' '        },' '        {' '          "name": "Malloc5",' '          "uuid": "466653f4-400a-4eae-a94f-101330cb8103",' '          "is_configured": true,' '          "data_offset": 0,' '          "data_size": 65536' '        }' '      ]' '    }' '  }' '}' '{' '  "name": "concat0",' '  "aliases": [' '    "3b6738f6-c3f2-4ad7-9cf0-833682f0af90"' '  ],' '  "product_name": "Raid Volume",' '  "block_size": 512,' '  "num_blocks": 131072,' '  "uuid": "3b6738f6-c3f2-4ad7-9cf0-833682f0af90",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "write_zeroes": true,' '    "flush": true,' '    "reset": true,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": false,' '    "nvme_admin": false,' '    "nvme_io": false' '  },' '  "memory_domains": [' '    {' '      "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' '      "dma_device_type": 2' '    },' '    {' '      "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' '      "dma_device_type": 2' '    }' '  ],' '  "driver_specific": {' '    "raid": {' '      "uuid": "3b6738f6-c3f2-4ad7-9cf0-833682f0af90",' '      "strip_size_kb": 64,' '      "state": "online",' '      "raid_level": "concat",' '      "superblock": false,' '      "num_base_bdevs": 2,' '      "num_base_bdevs_discovered": 2,' '      "num_base_bdevs_operational": 2,' '      "base_bdevs_list": [' '        {' '          "name": "Malloc6",' '          "uuid": "81763f77-e608-4fc5-ba88-af3f895bd7ed",' '          "is_configured": true,' '          "data_offset": 0,' '          "data_size": 65536' '        },' '        {' '          "name": "Malloc7",' '          "uuid": "d8265720-7716-48bd-8647-c3ed8153f75c",' '          "is_configured": true,' '          "data_offset": 0,' '          "data_size": 65536' '        }' '      ]' '    }' '  }' '}' '{' '  "name": "raid1",' '  "aliases": [' '    "237c6f5a-3e13-4100-ad5a-96133c0921ba"' '  ],' '  "product_name": "Raid Volume",' '  "block_size": 512,' '  "num_blocks": 65536,' '  "uuid": "237c6f5a-3e13-4100-ad5a-96133c0921ba",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    
"w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": false,' '    "write_zeroes": true,' '    "flush": false,' '    "reset": true,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": false,' '    "nvme_admin": false,' '    "nvme_io": false' '  },' '  "memory_domains": [' '    {' '      "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' '      "dma_device_type": 2' '    },' '    {' '      "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' '      "dma_device_type": 2' '    }' '  ],' '  "driver_specific": {' '    "raid": {' '      "uuid": "237c6f5a-3e13-4100-ad5a-96133c0921ba",' '      "strip_size_kb": 0,' '      "state": "online",' '      "raid_level": "raid1",' '      "superblock": false,' '      "num_base_bdevs": 2,' '      "num_base_bdevs_discovered": 2,' '      "num_base_bdevs_operational": 2,' '      "base_bdevs_list": [' '        {' '          "name": "Malloc8",' '          "uuid": "9bac68fb-3846-41dd-9fc2-de159656762b",' '          "is_configured": true,' '          "data_offset": 0,' '          "data_size": 65536' '        },' '        {' '          "name": "Malloc9",' '          "uuid": "098e42da-c770-48e7-92c0-19e19e04c8ae",' '          "is_configured": true,' '          "data_offset": 0,' '          "data_size": 65536' '        }' '      ]' '    }' '  }' '}' '{' '  "name": "AIO0",' '  "aliases": [' '    "587e9156-b8b7-46ed-bc41-2e5a3b52334a"' '  ],' '  "product_name": "AIO disk",' '  "block_size": 2048,' '  "num_blocks": 5000,' '  "uuid": "587e9156-b8b7-46ed-bc41-2e5a3b52334a",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": false,' '    "write_zeroes": true,' '    "flush": true,' '    "reset": true,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": false,' '    "nvme_admin": false,' '    "nvme_io": false' '  },' '  "driver_specific": {' '    "aio": {' '      "filename": "/home/vagrant/spdk_repo/spdk/test/bdev/aiofile",' '      "block_size_override": true,' '      "readonly": false' '    }' '  }' '}'
00:11:20.066   16:55:12	-- bdev/blockdev.sh@748 -- # bdev_list=("${bdevs_name[@]}")
00:11:20.066   16:55:12	-- bdev/blockdev.sh@750 -- # hello_world_bdev=Malloc0
00:11:20.066   16:55:12	-- bdev/blockdev.sh@751 -- # trap - SIGINT SIGTERM EXIT
00:11:20.066   16:55:12	-- bdev/blockdev.sh@752 -- # killprocess 119468
00:11:20.066   16:55:12	-- common/autotest_common.sh@936 -- # '[' -z 119468 ']'
00:11:20.066   16:55:12	-- common/autotest_common.sh@940 -- # kill -0 119468
00:11:20.066    16:55:12	-- common/autotest_common.sh@941 -- # uname
00:11:20.066   16:55:12	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:11:20.066    16:55:12	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 119468
00:11:20.066   16:55:12	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:11:20.066   16:55:12	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:11:20.066   16:55:12	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 119468'
00:11:20.066  killing process with pid 119468
00:11:20.066   16:55:12	-- common/autotest_common.sh@955 -- # kill 119468
00:11:20.066   16:55:12	-- common/autotest_common.sh@960 -- # wait 119468
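killprocess, as traced above, condenses to roughly the following (sketch; the real autotest_common.sh helper has extra handling for sudo-wrapped processes, and wait only succeeds here because spdk_tgt is a child of the same shell):

    killprocess() {
        local pid=$1
        [[ -n $pid ]] || return 1
        kill -0 "$pid" 2> /dev/null || return 0             # not running
        local name; name=$(ps --no-headers -o comm= "$pid") # reactor_0 here
        [[ $name == sudo ]] && return 1                     # sketch: sudo case elided
        echo "killing process with pid $pid"
        kill "$pid" && wait "$pid"
    }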
00:11:20.634   16:55:13	-- bdev/blockdev.sh@756 -- # trap cleanup SIGINT SIGTERM EXIT
00:11:20.634   16:55:13	-- bdev/blockdev.sh@758 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Malloc0 ''
00:11:20.634   16:55:13	-- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']'
00:11:20.634   16:55:13	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:11:20.634   16:55:13	-- common/autotest_common.sh@10 -- # set +x
00:11:20.634  ************************************
00:11:20.634  START TEST bdev_hello_world
00:11:20.634  ************************************
00:11:20.634   16:55:13	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Malloc0 ''
00:11:20.893  [2024-11-19 16:55:13.510025] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:11:20.893  [2024-11-19 16:55:13.510327] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119521 ]
00:11:20.893  [2024-11-19 16:55:13.666525] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:11:20.893  [2024-11-19 16:55:13.708893] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:11:21.152  [2024-11-19 16:55:13.829425] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1
00:11:21.152  [2024-11-19 16:55:13.829551] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1
00:11:21.152  [2024-11-19 16:55:13.837355] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2
00:11:21.152  [2024-11-19 16:55:13.837429] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2
00:11:21.152  [2024-11-19 16:55:13.845399] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3
00:11:21.152  [2024-11-19 16:55:13.845469] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3
00:11:21.152  [2024-11-19 16:55:13.845514] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:11:21.152  [2024-11-19 16:55:13.933570] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3
00:11:21.152  [2024-11-19 16:55:13.933687] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:11:21.152  [2024-11-19 16:55:13.933737] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80
00:11:21.152  [2024-11-19 16:55:13.933776] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:11:21.152  [2024-11-19 16:55:13.936222] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:11:21.152  [2024-11-19 16:55:13.936300] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT
00:11:21.422  [2024-11-19 16:55:14.089105] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application
00:11:21.422  [2024-11-19 16:55:14.089257] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Malloc0
00:11:21.422  [2024-11-19 16:55:14.089445] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel
00:11:21.422  [2024-11-19 16:55:14.089610] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev
00:11:21.422  [2024-11-19 16:55:14.089777] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully
00:11:21.422  [2024-11-19 16:55:14.089848] hello_bdev.c:  84:hello_read: *NOTICE*: Reading io
00:11:21.422  [2024-11-19 16:55:14.089959] hello_bdev.c:  65:read_complete: *NOTICE*: Read string from bdev : Hello World!
00:11:21.422  
00:11:21.422  [2024-11-19 16:55:14.090048] hello_bdev.c:  74:read_complete: *NOTICE*: Stopping app
00:11:21.701  ************************************
00:11:21.701  END TEST bdev_hello_world
00:11:21.701  ************************************
00:11:21.701  
00:11:21.701  real	0m1.059s
00:11:21.701  user	0m0.618s
00:11:21.701  sys	0m0.297s
00:11:21.701   16:55:14	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:11:21.701   16:55:14	-- common/autotest_common.sh@10 -- # set +x
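The END banner above closes the hello-world flow: start the app, open Malloc0, get an I/O channel, write a buffer, read back "Hello World!", stop the app. It can be re-run outside the harness with the same invocation the test used:

    /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Malloc0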
00:11:21.701   16:55:14	-- bdev/blockdev.sh@759 -- # run_test bdev_bounds bdev_bounds ''
00:11:21.701   16:55:14	-- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']'
00:11:21.701   16:55:14	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:11:21.701   16:55:14	-- common/autotest_common.sh@10 -- # set +x
00:11:21.701  ************************************
00:11:21.701  START TEST bdev_bounds
00:11:21.701  ************************************
00:11:21.701   16:55:14	-- common/autotest_common.sh@1114 -- # bdev_bounds ''
00:11:21.701   16:55:14	-- bdev/blockdev.sh@288 -- # bdevio_pid=119566
00:11:21.701   16:55:14	-- bdev/blockdev.sh@289 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT
00:11:21.701   16:55:14	-- bdev/blockdev.sh@290 -- # echo 'Process bdevio pid: 119566'
00:11:21.701  Process bdevio pid: 119566
00:11:21.701   16:55:14	-- bdev/blockdev.sh@291 -- # waitforlisten 119566
00:11:21.701   16:55:14	-- bdev/blockdev.sh@287 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json ''
00:11:21.701   16:55:14	-- common/autotest_common.sh@829 -- # '[' -z 119566 ']'
00:11:21.701   16:55:14	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:11:21.701   16:55:14	-- common/autotest_common.sh@834 -- # local max_retries=100
00:11:21.701   16:55:14	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:11:21.701  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:11:21.701   16:55:14	-- common/autotest_common.sh@838 -- # xtrace_disable
00:11:21.701   16:55:14	-- common/autotest_common.sh@10 -- # set +x
00:11:21.960  [2024-11-19 16:55:14.638084] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:11:21.960  [2024-11-19 16:55:14.638371] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119566 ]
00:11:21.960  [2024-11-19 16:55:14.804821] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3
00:11:22.219  [2024-11-19 16:55:14.851879] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:11:22.219  [2024-11-19 16:55:14.852345] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:11:22.219  [2024-11-19 16:55:14.852346] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:11:22.219  [2024-11-19 16:55:14.973818] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1
00:11:22.219  [2024-11-19 16:55:14.973925] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1
00:11:22.219  [2024-11-19 16:55:14.981731] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2
00:11:22.219  [2024-11-19 16:55:14.981793] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2
00:11:22.219  [2024-11-19 16:55:14.989844] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3
00:11:22.219  [2024-11-19 16:55:14.989928] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3
00:11:22.219  [2024-11-19 16:55:14.989973] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:11:22.479  [2024-11-19 16:55:15.087772] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3
00:11:22.479  [2024-11-19 16:55:15.087897] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:11:22.479  [2024-11-19 16:55:15.087953] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80
00:11:22.479  [2024-11-19 16:55:15.087978] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:11:22.479  [2024-11-19 16:55:15.090718] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:11:22.479  [2024-11-19 16:55:15.090771] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT
00:11:22.738   16:55:15	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:11:22.738   16:55:15	-- common/autotest_common.sh@862 -- # return 0
00:11:22.738   16:55:15	-- bdev/blockdev.sh@292 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests
00:11:22.998  I/O targets:
00:11:22.998    Malloc0: 65536 blocks of 512 bytes (32 MiB)
00:11:22.998    Malloc1p0: 32768 blocks of 512 bytes (16 MiB)
00:11:22.998    Malloc1p1: 32768 blocks of 512 bytes (16 MiB)
00:11:22.998    Malloc2p0: 8192 blocks of 512 bytes (4 MiB)
00:11:22.998    Malloc2p1: 8192 blocks of 512 bytes (4 MiB)
00:11:22.998    Malloc2p2: 8192 blocks of 512 bytes (4 MiB)
00:11:22.998    Malloc2p3: 8192 blocks of 512 bytes (4 MiB)
00:11:22.998    Malloc2p4: 8192 blocks of 512 bytes (4 MiB)
00:11:22.998    Malloc2p5: 8192 blocks of 512 bytes (4 MiB)
00:11:22.998    Malloc2p6: 8192 blocks of 512 bytes (4 MiB)
00:11:22.998    Malloc2p7: 8192 blocks of 512 bytes (4 MiB)
00:11:22.998    TestPT: 65536 blocks of 512 bytes (32 MiB)
00:11:22.998    raid0: 131072 blocks of 512 bytes (64 MiB)
00:11:22.998    concat0: 131072 blocks of 512 bytes (64 MiB)
00:11:22.998    raid1: 65536 blocks of 512 bytes (32 MiB)
00:11:22.998    AIO0: 5000 blocks of 2048 bytes (10 MiB)
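A quick size check on the I/O target listing (blocks × block size = capacity; note bdevio rounds AIO0 up to 10 MiB even though 10240000 bytes falls just short, matching dd's "9.8 MiB" earlier):

    echo $((65536 * 512))   # 33554432 = 32 MiB  (Malloc0, TestPT, raid1)
    echo $((131072 * 512))  # 67108864 = 64 MiB  (raid0, concat0)
    echo $((5000 * 2048))   # 10240000 ≈ 9.77 MiB (AIO0)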
00:11:22.998  
00:11:22.998  
00:11:22.998       CUnit - A unit testing framework for C - Version 2.1-3
00:11:22.998       http://cunit.sourceforge.net/
00:11:22.998  
00:11:22.998  
00:11:22.998  Suite: bdevio tests on: AIO0
00:11:22.998    Test: blockdev write read block ...passed
00:11:22.998    Test: blockdev write zeroes read block ...passed
00:11:22.998    Test: blockdev write zeroes read no split ...passed
00:11:22.998    Test: blockdev write zeroes read split ...passed
00:11:22.998    Test: blockdev write zeroes read split partial ...passed
00:11:22.998    Test: blockdev reset ...passed
00:11:22.998    Test: blockdev write read 8 blocks ...passed
00:11:22.998    Test: blockdev write read size > 128k ...passed
00:11:22.998    Test: blockdev write read invalid size ...passed
00:11:22.998    Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:11:22.998    Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:11:22.998    Test: blockdev write read max offset ...passed
00:11:22.998    Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:11:22.998    Test: blockdev writev readv 8 blocks ...passed
00:11:22.998    Test: blockdev writev readv 30 x 1block ...passed
00:11:22.998    Test: blockdev writev readv block ...passed
00:11:22.998    Test: blockdev writev readv size > 128k ...passed
00:11:22.998    Test: blockdev writev readv size > 128k in two iovs ...passed
00:11:22.998    Test: blockdev comparev and writev ...passed
00:11:22.998    Test: blockdev nvme passthru rw ...passed
00:11:22.998    Test: blockdev nvme passthru vendor specific ...passed
00:11:22.998    Test: blockdev nvme admin passthru ...passed
00:11:22.998    Test: blockdev copy ...passed
00:11:22.998  Suite: bdevio tests on: raid1
00:11:22.998    Test: blockdev write read block ...passed
00:11:22.998    Test: blockdev write zeroes read block ...passed
00:11:22.998    Test: blockdev write zeroes read no split ...passed
00:11:22.998    Test: blockdev write zeroes read split ...passed
00:11:22.998    Test: blockdev write zeroes read split partial ...passed
00:11:22.998    Test: blockdev reset ...passed
00:11:22.998    Test: blockdev write read 8 blocks ...passed
00:11:22.998    Test: blockdev write read size > 128k ...passed
00:11:22.998    Test: blockdev write read invalid size ...passed
00:11:22.998    Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:11:22.998    Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:11:22.998    Test: blockdev write read max offset ...passed
00:11:22.998    Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:11:22.998    Test: blockdev writev readv 8 blocks ...passed
00:11:22.998    Test: blockdev writev readv 30 x 1block ...passed
00:11:22.998    Test: blockdev writev readv block ...passed
00:11:22.998    Test: blockdev writev readv size > 128k ...passed
00:11:22.998    Test: blockdev writev readv size > 128k in two iovs ...passed
00:11:22.998    Test: blockdev comparev and writev ...passed
00:11:22.998    Test: blockdev nvme passthru rw ...passed
00:11:22.998    Test: blockdev nvme passthru vendor specific ...passed
00:11:22.998    Test: blockdev nvme admin passthru ...passed
00:11:22.998    Test: blockdev copy ...passed
00:11:22.998  Suite: bdevio tests on: concat0
00:11:22.998    Test: blockdev write read block ...passed
00:11:22.998    Test: blockdev write zeroes read block ...passed
00:11:22.998    Test: blockdev write zeroes read no split ...passed
00:11:22.998    Test: blockdev write zeroes read split ...passed
00:11:22.999    Test: blockdev write zeroes read split partial ...passed
00:11:22.999    Test: blockdev reset ...passed
00:11:22.999    Test: blockdev write read 8 blocks ...passed
00:11:22.999    Test: blockdev write read size > 128k ...passed
00:11:22.999    Test: blockdev write read invalid size ...passed
00:11:22.999    Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:11:22.999    Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:11:22.999    Test: blockdev write read max offset ...passed
00:11:22.999    Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:11:22.999    Test: blockdev writev readv 8 blocks ...passed
00:11:22.999    Test: blockdev writev readv 30 x 1block ...passed
00:11:22.999    Test: blockdev writev readv block ...passed
00:11:22.999    Test: blockdev writev readv size > 128k ...passed
00:11:22.999    Test: blockdev writev readv size > 128k in two iovs ...passed
00:11:22.999    Test: blockdev comparev and writev ...passed
00:11:22.999    Test: blockdev nvme passthru rw ...passed
00:11:22.999    Test: blockdev nvme passthru vendor specific ...passed
00:11:22.999    Test: blockdev nvme admin passthru ...passed
00:11:22.999    Test: blockdev copy ...passed
00:11:22.999  Suite: bdevio tests on: raid0
00:11:22.999    Test: blockdev write read block ...passed
00:11:22.999    Test: blockdev write zeroes read block ...passed
00:11:22.999    Test: blockdev write zeroes read no split ...passed
00:11:22.999    Test: blockdev write zeroes read split ...passed
00:11:22.999    Test: blockdev write zeroes read split partial ...passed
00:11:22.999    Test: blockdev reset ...passed
00:11:22.999    Test: blockdev write read 8 blocks ...passed
00:11:22.999    Test: blockdev write read size > 128k ...passed
00:11:22.999    Test: blockdev write read invalid size ...passed
00:11:22.999    Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:11:22.999    Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:11:22.999    Test: blockdev write read max offset ...passed
00:11:22.999    Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:11:22.999    Test: blockdev writev readv 8 blocks ...passed
00:11:22.999    Test: blockdev writev readv 30 x 1block ...passed
00:11:22.999    Test: blockdev writev readv block ...passed
00:11:22.999    Test: blockdev writev readv size > 128k ...passed
00:11:22.999    Test: blockdev writev readv size > 128k in two iovs ...passed
00:11:22.999    Test: blockdev comparev and writev ...passed
00:11:22.999    Test: blockdev nvme passthru rw ...passed
00:11:22.999    Test: blockdev nvme passthru vendor specific ...passed
00:11:22.999    Test: blockdev nvme admin passthru ...passed
00:11:22.999    Test: blockdev copy ...passed
00:11:22.999  Suite: bdevio tests on: TestPT
00:11:22.999    Test: blockdev write read block ...passed
00:11:22.999    Test: blockdev write zeroes read block ...passed
00:11:22.999    Test: blockdev write zeroes read no split ...passed
00:11:22.999    Test: blockdev write zeroes read split ...passed
00:11:22.999    Test: blockdev write zeroes read split partial ...passed
00:11:22.999    Test: blockdev reset ...passed
00:11:22.999    Test: blockdev write read 8 blocks ...passed
00:11:22.999    Test: blockdev write read size > 128k ...passed
00:11:22.999    Test: blockdev write read invalid size ...passed
00:11:22.999    Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:11:22.999    Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:11:22.999    Test: blockdev write read max offset ...passed
00:11:22.999    Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:11:22.999    Test: blockdev writev readv 8 blocks ...passed
00:11:22.999    Test: blockdev writev readv 30 x 1block ...passed
00:11:22.999    Test: blockdev writev readv block ...passed
00:11:22.999    Test: blockdev writev readv size > 128k ...passed
00:11:22.999    Test: blockdev writev readv size > 128k in two iovs ...passed
00:11:22.999    Test: blockdev comparev and writev ...passed
00:11:22.999    Test: blockdev nvme passthru rw ...passed
00:11:22.999    Test: blockdev nvme passthru vendor specific ...passed
00:11:22.999    Test: blockdev nvme admin passthru ...passed
00:11:22.999    Test: blockdev copy ...passed
00:11:22.999  Suite: bdevio tests on: Malloc2p7
00:11:22.999    Test: blockdev write read block ...passed
00:11:22.999    Test: blockdev write zeroes read block ...passed
00:11:22.999    Test: blockdev write zeroes read no split ...passed
00:11:22.999    Test: blockdev write zeroes read split ...passed
00:11:22.999    Test: blockdev write zeroes read split partial ...passed
00:11:22.999    Test: blockdev reset ...passed
00:11:22.999    Test: blockdev write read 8 blocks ...passed
00:11:22.999    Test: blockdev write read size > 128k ...passed
00:11:22.999    Test: blockdev write read invalid size ...passed
00:11:22.999    Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:11:22.999    Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:11:22.999    Test: blockdev write read max offset ...passed
00:11:22.999    Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:11:22.999    Test: blockdev writev readv 8 blocks ...passed
00:11:22.999    Test: blockdev writev readv 30 x 1block ...passed
00:11:22.999    Test: blockdev writev readv block ...passed
00:11:22.999    Test: blockdev writev readv size > 128k ...passed
00:11:22.999    Test: blockdev writev readv size > 128k in two iovs ...passed
00:11:22.999    Test: blockdev comparev and writev ...passed
00:11:22.999    Test: blockdev nvme passthru rw ...passed
00:11:22.999    Test: blockdev nvme passthru vendor specific ...passed
00:11:22.999    Test: blockdev nvme admin passthru ...passed
00:11:22.999    Test: blockdev copy ...passed
00:11:22.999  Suite: bdevio tests on: Malloc2p6
00:11:22.999    Test: blockdev write read block ...passed
00:11:22.999    Test: blockdev write zeroes read block ...passed
00:11:22.999    Test: blockdev write zeroes read no split ...passed
00:11:22.999    Test: blockdev write zeroes read split ...passed
00:11:22.999    Test: blockdev write zeroes read split partial ...passed
00:11:22.999    Test: blockdev reset ...passed
00:11:22.999    Test: blockdev write read 8 blocks ...passed
00:11:22.999    Test: blockdev write read size > 128k ...passed
00:11:22.999    Test: blockdev write read invalid size ...passed
00:11:22.999    Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:11:22.999    Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:11:22.999    Test: blockdev write read max offset ...passed
00:11:22.999    Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:11:22.999    Test: blockdev writev readv 8 blocks ...passed
00:11:22.999    Test: blockdev writev readv 30 x 1block ...passed
00:11:22.999    Test: blockdev writev readv block ...passed
00:11:22.999    Test: blockdev writev readv size > 128k ...passed
00:11:22.999    Test: blockdev writev readv size > 128k in two iovs ...passed
00:11:22.999    Test: blockdev comparev and writev ...passed
00:11:22.999    Test: blockdev nvme passthru rw ...passed
00:11:22.999    Test: blockdev nvme passthru vendor specific ...passed
00:11:22.999    Test: blockdev nvme admin passthru ...passed
00:11:22.999    Test: blockdev copy ...passed
00:11:22.999  Suite: bdevio tests on: Malloc2p5
00:11:22.999    Test: blockdev write read block ...passed
00:11:22.999    Test: blockdev write zeroes read block ...passed
00:11:22.999    Test: blockdev write zeroes read no split ...passed
00:11:22.999    Test: blockdev write zeroes read split ...passed
00:11:22.999    Test: blockdev write zeroes read split partial ...passed
00:11:22.999    Test: blockdev reset ...passed
00:11:22.999    Test: blockdev write read 8 blocks ...passed
00:11:22.999    Test: blockdev write read size > 128k ...passed
00:11:22.999    Test: blockdev write read invalid size ...passed
00:11:22.999    Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:11:22.999    Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:11:22.999    Test: blockdev write read max offset ...passed
00:11:22.999    Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:11:22.999    Test: blockdev writev readv 8 blocks ...passed
00:11:22.999    Test: blockdev writev readv 30 x 1block ...passed
00:11:22.999    Test: blockdev writev readv block ...passed
00:11:22.999    Test: blockdev writev readv size > 128k ...passed
00:11:23.258    Test: blockdev writev readv size > 128k in two iovs ...passed
00:11:23.258    Test: blockdev comparev and writev ...passed
00:11:23.258    Test: blockdev nvme passthru rw ...passed
00:11:23.258    Test: blockdev nvme passthru vendor specific ...passed
00:11:23.258    Test: blockdev nvme admin passthru ...passed
00:11:23.258    Test: blockdev copy ...passed
00:11:23.258  Suite: bdevio tests on: Malloc2p4
00:11:23.258    Test: blockdev write read block ...passed
00:11:23.258    Test: blockdev write zeroes read block ...passed
00:11:23.258    Test: blockdev write zeroes read no split ...passed
00:11:23.258    Test: blockdev write zeroes read split ...passed
00:11:23.258    Test: blockdev write zeroes read split partial ...passed
00:11:23.258    Test: blockdev reset ...passed
00:11:23.258    Test: blockdev write read 8 blocks ...passed
00:11:23.258    Test: blockdev write read size > 128k ...passed
00:11:23.258    Test: blockdev write read invalid size ...passed
00:11:23.258    Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:11:23.258    Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:11:23.258    Test: blockdev write read max offset ...passed
00:11:23.258    Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:11:23.258    Test: blockdev writev readv 8 blocks ...passed
00:11:23.258    Test: blockdev writev readv 30 x 1block ...passed
00:11:23.258    Test: blockdev writev readv block ...passed
00:11:23.258    Test: blockdev writev readv size > 128k ...passed
00:11:23.258    Test: blockdev writev readv size > 128k in two iovs ...passed
00:11:23.258    Test: blockdev comparev and writev ...passed
00:11:23.258    Test: blockdev nvme passthru rw ...passed
00:11:23.258    Test: blockdev nvme passthru vendor specific ...passed
00:11:23.258    Test: blockdev nvme admin passthru ...passed
00:11:23.258    Test: blockdev copy ...passed
00:11:23.258  Suite: bdevio tests on: Malloc2p3
00:11:23.258    Test: blockdev write read block ...passed
00:11:23.258    Test: blockdev write zeroes read block ...passed
00:11:23.258    Test: blockdev write zeroes read no split ...passed
00:11:23.258    Test: blockdev write zeroes read split ...passed
00:11:23.258    Test: blockdev write zeroes read split partial ...passed
00:11:23.258    Test: blockdev reset ...passed
00:11:23.258    Test: blockdev write read 8 blocks ...passed
00:11:23.258    Test: blockdev write read size > 128k ...passed
00:11:23.258    Test: blockdev write read invalid size ...passed
00:11:23.258    Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:11:23.258    Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:11:23.258    Test: blockdev write read max offset ...passed
00:11:23.258    Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:11:23.258    Test: blockdev writev readv 8 blocks ...passed
00:11:23.258    Test: blockdev writev readv 30 x 1block ...passed
00:11:23.258    Test: blockdev writev readv block ...passed
00:11:23.258    Test: blockdev writev readv size > 128k ...passed
00:11:23.258    Test: blockdev writev readv size > 128k in two iovs ...passed
00:11:23.258    Test: blockdev comparev and writev ...passed
00:11:23.259    Test: blockdev nvme passthru rw ...passed
00:11:23.259    Test: blockdev nvme passthru vendor specific ...passed
00:11:23.259    Test: blockdev nvme admin passthru ...passed
00:11:23.259    Test: blockdev copy ...passed
00:11:23.259  Suite: bdevio tests on: Malloc2p2
00:11:23.259    Test: blockdev write read block ...passed
00:11:23.259    Test: blockdev write zeroes read block ...passed
00:11:23.259    Test: blockdev write zeroes read no split ...passed
00:11:23.259    Test: blockdev write zeroes read split ...passed
00:11:23.259    Test: blockdev write zeroes read split partial ...passed
00:11:23.259    Test: blockdev reset ...passed
00:11:23.259    Test: blockdev write read 8 blocks ...passed
00:11:23.259    Test: blockdev write read size > 128k ...passed
00:11:23.259    Test: blockdev write read invalid size ...passed
00:11:23.259    Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:11:23.259    Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:11:23.259    Test: blockdev write read max offset ...passed
00:11:23.259    Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:11:23.259    Test: blockdev writev readv 8 blocks ...passed
00:11:23.259    Test: blockdev writev readv 30 x 1block ...passed
00:11:23.259    Test: blockdev writev readv block ...passed
00:11:23.259    Test: blockdev writev readv size > 128k ...passed
00:11:23.259    Test: blockdev writev readv size > 128k in two iovs ...passed
00:11:23.259    Test: blockdev comparev and writev ...passed
00:11:23.259    Test: blockdev nvme passthru rw ...passed
00:11:23.259    Test: blockdev nvme passthru vendor specific ...passed
00:11:23.259    Test: blockdev nvme admin passthru ...passed
00:11:23.259    Test: blockdev copy ...passed
00:11:23.259  Suite: bdevio tests on: Malloc2p1
00:11:23.259    Test: blockdev write read block ...passed
00:11:23.259    Test: blockdev write zeroes read block ...passed
00:11:23.259    Test: blockdev write zeroes read no split ...passed
00:11:23.259    Test: blockdev write zeroes read split ...passed
00:11:23.259    Test: blockdev write zeroes read split partial ...passed
00:11:23.259    Test: blockdev reset ...passed
00:11:23.259    Test: blockdev write read 8 blocks ...passed
00:11:23.259    Test: blockdev write read size > 128k ...passed
00:11:23.259    Test: blockdev write read invalid size ...passed
00:11:23.259    Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:11:23.259    Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:11:23.259    Test: blockdev write read max offset ...passed
00:11:23.259    Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:11:23.259    Test: blockdev writev readv 8 blocks ...passed
00:11:23.259    Test: blockdev writev readv 30 x 1block ...passed
00:11:23.259    Test: blockdev writev readv block ...passed
00:11:23.259    Test: blockdev writev readv size > 128k ...passed
00:11:23.259    Test: blockdev writev readv size > 128k in two iovs ...passed
00:11:23.259    Test: blockdev comparev and writev ...passed
00:11:23.259    Test: blockdev nvme passthru rw ...passed
00:11:23.259    Test: blockdev nvme passthru vendor specific ...passed
00:11:23.259    Test: blockdev nvme admin passthru ...passed
00:11:23.259    Test: blockdev copy ...passed
00:11:23.259  Suite: bdevio tests on: Malloc2p0
00:11:23.259    Test: blockdev write read block ...passed
00:11:23.259    Test: blockdev write zeroes read block ...passed
00:11:23.259    Test: blockdev write zeroes read no split ...passed
00:11:23.259    Test: blockdev write zeroes read split ...passed
00:11:23.259    Test: blockdev write zeroes read split partial ...passed
00:11:23.259    Test: blockdev reset ...passed
00:11:23.259    Test: blockdev write read 8 blocks ...passed
00:11:23.259    Test: blockdev write read size > 128k ...passed
00:11:23.259    Test: blockdev write read invalid size ...passed
00:11:23.259    Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:11:23.259    Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:11:23.259    Test: blockdev write read max offset ...passed
00:11:23.259    Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:11:23.259    Test: blockdev writev readv 8 blocks ...passed
00:11:23.259    Test: blockdev writev readv 30 x 1block ...passed
00:11:23.259    Test: blockdev writev readv block ...passed
00:11:23.259    Test: blockdev writev readv size > 128k ...passed
00:11:23.259    Test: blockdev writev readv size > 128k in two iovs ...passed
00:11:23.259    Test: blockdev comparev and writev ...passed
00:11:23.259    Test: blockdev nvme passthru rw ...passed
00:11:23.259    Test: blockdev nvme passthru vendor specific ...passed
00:11:23.259    Test: blockdev nvme admin passthru ...passed
00:11:23.259    Test: blockdev copy ...passed
00:11:23.259  Suite: bdevio tests on: Malloc1p1
00:11:23.259    Test: blockdev write read block ...passed
00:11:23.259    Test: blockdev write zeroes read block ...passed
00:11:23.259    Test: blockdev write zeroes read no split ...passed
00:11:23.259    Test: blockdev write zeroes read split ...passed
00:11:23.259    Test: blockdev write zeroes read split partial ...passed
00:11:23.259    Test: blockdev reset ...passed
00:11:23.259    Test: blockdev write read 8 blocks ...passed
00:11:23.259    Test: blockdev write read size > 128k ...passed
00:11:23.259    Test: blockdev write read invalid size ...passed
00:11:23.259    Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:11:23.259    Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:11:23.259    Test: blockdev write read max offset ...passed
00:11:23.259    Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:11:23.259    Test: blockdev writev readv 8 blocks ...passed
00:11:23.259    Test: blockdev writev readv 30 x 1block ...passed
00:11:23.259    Test: blockdev writev readv block ...passed
00:11:23.259    Test: blockdev writev readv size > 128k ...passed
00:11:23.259    Test: blockdev writev readv size > 128k in two iovs ...passed
00:11:23.259    Test: blockdev comparev and writev ...passed
00:11:23.259    Test: blockdev nvme passthru rw ...passed
00:11:23.259    Test: blockdev nvme passthru vendor specific ...passed
00:11:23.259    Test: blockdev nvme admin passthru ...passed
00:11:23.259    Test: blockdev copy ...passed
00:11:23.259  Suite: bdevio tests on: Malloc1p0
00:11:23.259    Test: blockdev write read block ...passed
00:11:23.259    Test: blockdev write zeroes read block ...passed
00:11:23.259    Test: blockdev write zeroes read no split ...passed
00:11:23.259    Test: blockdev write zeroes read split ...passed
00:11:23.259    Test: blockdev write zeroes read split partial ...passed
00:11:23.259    Test: blockdev reset ...passed
00:11:23.259    Test: blockdev write read 8 blocks ...passed
00:11:23.259    Test: blockdev write read size > 128k ...passed
00:11:23.259    Test: blockdev write read invalid size ...passed
00:11:23.259    Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:11:23.259    Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:11:23.259    Test: blockdev write read max offset ...passed
00:11:23.259    Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:11:23.259    Test: blockdev writev readv 8 blocks ...passed
00:11:23.259    Test: blockdev writev readv 30 x 1block ...passed
00:11:23.259    Test: blockdev writev readv block ...passed
00:11:23.259    Test: blockdev writev readv size > 128k ...passed
00:11:23.259    Test: blockdev writev readv size > 128k in two iovs ...passed
00:11:23.259    Test: blockdev comparev and writev ...passed
00:11:23.259    Test: blockdev nvme passthru rw ...passed
00:11:23.259    Test: blockdev nvme passthru vendor specific ...passed
00:11:23.259    Test: blockdev nvme admin passthru ...passed
00:11:23.259    Test: blockdev copy ...passed
00:11:23.259  Suite: bdevio tests on: Malloc0
00:11:23.259    Test: blockdev write read block ...passed
00:11:23.259    Test: blockdev write zeroes read block ...passed
00:11:23.259    Test: blockdev write zeroes read no split ...passed
00:11:23.259    Test: blockdev write zeroes read split ...passed
00:11:23.259    Test: blockdev write zeroes read split partial ...passed
00:11:23.259    Test: blockdev reset ...passed
00:11:23.259    Test: blockdev write read 8 blocks ...passed
00:11:23.259    Test: blockdev write read size > 128k ...passed
00:11:23.259    Test: blockdev write read invalid size ...passed
00:11:23.259    Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:11:23.259    Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:11:23.259    Test: blockdev write read max offset ...passed
00:11:23.259    Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:11:23.259    Test: blockdev writev readv 8 blocks ...passed
00:11:23.259    Test: blockdev writev readv 30 x 1block ...passed
00:11:23.259    Test: blockdev writev readv block ...passed
00:11:23.259    Test: blockdev writev readv size > 128k ...passed
00:11:23.259    Test: blockdev writev readv size > 128k in two iovs ...passed
00:11:23.259    Test: blockdev comparev and writev ...passed
00:11:23.259    Test: blockdev nvme passthru rw ...passed
00:11:23.259    Test: blockdev nvme passthru vendor specific ...passed
00:11:23.259    Test: blockdev nvme admin passthru ...passed
00:11:23.259    Test: blockdev copy ...passed
00:11:23.259  
00:11:23.259  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:11:23.259                suites     16     16    n/a      0        0
00:11:23.259                 tests    368    368    368      0        0
00:11:23.259               asserts   2224   2224   2224      0      n/a
00:11:23.259  
00:11:23.259  Elapsed time =    0.625 seconds
00:11:23.259  0
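The CUnit totals above are internally consistent: each of the 16 bdevio suites lists 23 "Test:" lines, which accounts for the 368 tests reported. As a quick illustrative check (plain shell arithmetic, not part of the original run):

    # 16 suites x 23 tests per suite = 368, matching the Run Summary.
    echo $(( 16 * 23 ))    # -> 368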
00:11:23.259   16:55:15	-- bdev/blockdev.sh@293 -- # killprocess 119566
00:11:23.259   16:55:15	-- common/autotest_common.sh@936 -- # '[' -z 119566 ']'
00:11:23.259   16:55:15	-- common/autotest_common.sh@940 -- # kill -0 119566
00:11:23.259    16:55:15	-- common/autotest_common.sh@941 -- # uname
00:11:23.259   16:55:15	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:11:23.259    16:55:15	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 119566
00:11:23.259  killing process with pid 119566
00:11:23.259   16:55:15	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:11:23.259   16:55:15	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:11:23.259   16:55:15	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 119566'
00:11:23.259   16:55:15	-- common/autotest_common.sh@955 -- # kill 119566
00:11:23.259   16:55:15	-- common/autotest_common.sh@960 -- # wait 119566
00:11:23.517   16:55:16	-- bdev/blockdev.sh@294 -- # trap - SIGINT SIGTERM EXIT
00:11:23.517  
00:11:23.517  real	0m1.816s
00:11:23.517  user	0m4.275s
00:11:23.517  sys	0m0.450s
00:11:23.517   16:55:16	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:11:23.517   16:55:16	-- common/autotest_common.sh@10 -- # set +x
00:11:23.517  ************************************
00:11:23.517  END TEST bdev_bounds
00:11:23.517  ************************************
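The trace above shows the shape of the harness's killprocess helper: guard against an empty pid, probe the process with kill -0, inspect its name with ps, then signal and reap it. A minimal reconstruction of that flow from the visible trace follows; the real autotest_common.sh helper has extra branches (e.g. the sudo and reactor_0 checks) that are elided here.

    # Hedged reconstruction of the killprocess flow seen in the trace above.
    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1                 # the '[' -z ... ']' guard
        kill -0 "$pid" || return 1                # process must still exist
        ps --no-headers -o comm= "$pid"           # e.g. reactor_0 for an SPDK app
        echo "killing process with pid $pid"
        kill "$pid"                               # SIGTERM by default
        wait "$pid" 2>/dev/null                   # reap; works when pid is a child
    }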
00:11:23.775   16:55:16	-- bdev/blockdev.sh@760 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' ''
00:11:23.775   16:55:16	-- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']'
00:11:23.775   16:55:16	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:11:23.776   16:55:16	-- common/autotest_common.sh@10 -- # set +x
00:11:23.776  ************************************
00:11:23.776  START TEST bdev_nbd
00:11:23.776  ************************************
00:11:23.776   16:55:16	-- common/autotest_common.sh@1114 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' ''
00:11:23.776    16:55:16	-- bdev/blockdev.sh@298 -- # uname -s
00:11:23.776   16:55:16	-- bdev/blockdev.sh@298 -- # [[ Linux == Linux ]]
00:11:23.776   16:55:16	-- bdev/blockdev.sh@300 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:11:23.776   16:55:16	-- bdev/blockdev.sh@301 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
00:11:23.776   16:55:16	-- bdev/blockdev.sh@302 -- # bdev_all=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0')
00:11:23.776   16:55:16	-- bdev/blockdev.sh@302 -- # local bdev_all
00:11:23.776   16:55:16	-- bdev/blockdev.sh@303 -- # local bdev_num=16
00:11:23.776   16:55:16	-- bdev/blockdev.sh@307 -- # [[ -e /sys/module/nbd ]]
00:11:23.776   16:55:16	-- bdev/blockdev.sh@309 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9')
00:11:23.776   16:55:16	-- bdev/blockdev.sh@309 -- # local nbd_all
00:11:23.776   16:55:16	-- bdev/blockdev.sh@310 -- # bdev_num=16
00:11:23.776   16:55:16	-- bdev/blockdev.sh@312 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9')
00:11:23.776   16:55:16	-- bdev/blockdev.sh@312 -- # local nbd_list
00:11:23.776   16:55:16	-- bdev/blockdev.sh@313 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0')
00:11:23.776   16:55:16	-- bdev/blockdev.sh@313 -- # local bdev_list
00:11:23.776   16:55:16	-- bdev/blockdev.sh@316 -- # nbd_pid=119617
00:11:23.776   16:55:16	-- bdev/blockdev.sh@317 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT
00:11:23.776   16:55:16	-- bdev/blockdev.sh@315 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json ''
00:11:23.776   16:55:16	-- bdev/blockdev.sh@318 -- # waitforlisten 119617 /var/tmp/spdk-nbd.sock
00:11:23.776   16:55:16	-- common/autotest_common.sh@829 -- # '[' -z 119617 ']'
00:11:23.776   16:55:16	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:11:23.776   16:55:16	-- common/autotest_common.sh@834 -- # local max_retries=100
00:11:23.776   16:55:16	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:11:23.776  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:11:23.776   16:55:16	-- common/autotest_common.sh@838 -- # xtrace_disable
00:11:23.776   16:55:16	-- common/autotest_common.sh@10 -- # set +x
00:11:23.776  [2024-11-19 16:55:16.496976] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:11:23.776  [2024-11-19 16:55:16.497127] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:11:24.035  [2024-11-19 16:55:16.641236] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:11:24.035  [2024-11-19 16:55:16.684739] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:11:24.035  [2024-11-19 16:55:16.806521] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1
00:11:24.035  [2024-11-19 16:55:16.806623] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1
00:11:24.035  [2024-11-19 16:55:16.814426] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2
00:11:24.035  [2024-11-19 16:55:16.814486] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2
00:11:24.036  [2024-11-19 16:55:16.822468] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3
00:11:24.036  [2024-11-19 16:55:16.822534] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3
00:11:24.036  [2024-11-19 16:55:16.822572] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:11:24.294  [2024-11-19 16:55:16.920461] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3
00:11:24.295  [2024-11-19 16:55:16.920577] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:11:24.295  [2024-11-19 16:55:16.920626] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80
00:11:24.295  [2024-11-19 16:55:16.920656] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:11:24.295  [2024-11-19 16:55:16.923189] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:11:24.295  [2024-11-19 16:55:16.923250] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT
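At this point the harness has launched the bdev_svc helper app against the JSON bdev config with a private RPC socket (/var/tmp/spdk-nbd.sock) and is blocking in waitforlisten until the app accepts RPCs. A hedged sketch of that start-and-wait pattern follows; paths are shortened, and the liveness probe via rpc_get_methods is an assumption about how the wait is implemented (the trace itself only shows max_retries=100).

    # Illustrative start-and-wait pattern; not the literal waitforlisten helper.
    rpc_sock=/var/tmp/spdk-nbd.sock
    ./test/app/bdev_svc/bdev_svc -r "$rpc_sock" -i 0 --json test/bdev/bdev.json &
    nbd_pid=$!
    for (( i = 1; i <= 100; i++ )); do            # bounded retries, as in max_retries=100
        [ -S "$rpc_sock" ] && \
            ./scripts/rpc.py -s "$rpc_sock" rpc_get_methods &>/dev/null && break
        sleep 0.1
    done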
00:11:24.862   16:55:17	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:11:24.862   16:55:17	-- common/autotest_common.sh@862 -- # return 0
00:11:24.862   16:55:17	-- bdev/blockdev.sh@320 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0'
00:11:24.862   16:55:17	-- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:11:24.862   16:55:17	-- bdev/nbd_common.sh@114 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0')
00:11:24.862   16:55:17	-- bdev/nbd_common.sh@114 -- # local bdev_list
00:11:24.862   16:55:17	-- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0'
00:11:24.862   16:55:17	-- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:11:24.862   16:55:17	-- bdev/nbd_common.sh@23 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0')
00:11:24.862   16:55:17	-- bdev/nbd_common.sh@23 -- # local bdev_list
00:11:24.862   16:55:17	-- bdev/nbd_common.sh@24 -- # local i
00:11:24.862   16:55:17	-- bdev/nbd_common.sh@25 -- # local nbd_device
00:11:24.862   16:55:17	-- bdev/nbd_common.sh@27 -- # (( i = 0 ))
00:11:24.862   16:55:17	-- bdev/nbd_common.sh@27 -- # (( i < 16 ))
00:11:24.862    16:55:17	-- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0
00:11:25.121   16:55:17	-- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0
00:11:25.121    16:55:17	-- bdev/nbd_common.sh@30 -- # basename /dev/nbd0
00:11:25.121   16:55:17	-- bdev/nbd_common.sh@30 -- # waitfornbd nbd0
00:11:25.121   16:55:17	-- common/autotest_common.sh@866 -- # local nbd_name=nbd0
00:11:25.121   16:55:17	-- common/autotest_common.sh@867 -- # local i
00:11:25.121   16:55:17	-- common/autotest_common.sh@869 -- # (( i = 1 ))
00:11:25.121   16:55:17	-- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:11:25.121   16:55:17	-- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions
00:11:25.121   16:55:17	-- common/autotest_common.sh@871 -- # break
00:11:25.121   16:55:17	-- common/autotest_common.sh@882 -- # (( i = 1 ))
00:11:25.121   16:55:17	-- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:11:25.121   16:55:17	-- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:11:25.121  1+0 records in
00:11:25.121  1+0 records out
00:11:25.121  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000389926 s, 10.5 MB/s
00:11:25.121    16:55:17	-- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:25.121   16:55:17	-- common/autotest_common.sh@884 -- # size=4096
00:11:25.121   16:55:17	-- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:25.121   16:55:17	-- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:11:25.121   16:55:17	-- common/autotest_common.sh@887 -- # return 0
00:11:25.121   16:55:17	-- bdev/nbd_common.sh@27 -- # (( i++ ))
00:11:25.121   16:55:17	-- bdev/nbd_common.sh@27 -- # (( i < 16 ))
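Every nbd_start_disk RPC in this section is followed by the same verification dance: poll /proc/partitions until the new nbd name appears, read one 4096-byte block through the device with dd, and confirm via stat that something was actually copied. A condensed sketch based directly on the traced commands (output path simplified; the sleep between retries is implied rather than shown in the trace):

    # Condensed form of the traced waitfornbd + dd verification.
    waitfornbd() {
        local nbd_name=$1 i
        for (( i = 1; i <= 20; i++ )); do
            grep -q -w "$nbd_name" /proc/partitions && break   # device registered?
            sleep 0.1
        done
        # Read exactly one block through the nbd device to prove I/O works.
        dd if=/dev/$nbd_name of=/tmp/nbdtest bs=4096 count=1 iflag=direct
        [ "$(stat -c %s /tmp/nbdtest)" != 0 ]                  # non-empty readback
        rm -f /tmp/nbdtest
    }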
00:11:25.121    16:55:17	-- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p0
00:11:25.380   16:55:18	-- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1
00:11:25.380    16:55:18	-- bdev/nbd_common.sh@30 -- # basename /dev/nbd1
00:11:25.380   16:55:18	-- bdev/nbd_common.sh@30 -- # waitfornbd nbd1
00:11:25.380   16:55:18	-- common/autotest_common.sh@866 -- # local nbd_name=nbd1
00:11:25.380   16:55:18	-- common/autotest_common.sh@867 -- # local i
00:11:25.380   16:55:18	-- common/autotest_common.sh@869 -- # (( i = 1 ))
00:11:25.380   16:55:18	-- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:11:25.380   16:55:18	-- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions
00:11:25.380   16:55:18	-- common/autotest_common.sh@871 -- # break
00:11:25.380   16:55:18	-- common/autotest_common.sh@882 -- # (( i = 1 ))
00:11:25.380   16:55:18	-- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:11:25.380   16:55:18	-- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:11:25.380  1+0 records in
00:11:25.380  1+0 records out
00:11:25.380  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000434584 s, 9.4 MB/s
00:11:25.380    16:55:18	-- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:25.380   16:55:18	-- common/autotest_common.sh@884 -- # size=4096
00:11:25.380   16:55:18	-- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:25.380   16:55:18	-- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:11:25.380   16:55:18	-- common/autotest_common.sh@887 -- # return 0
00:11:25.380   16:55:18	-- bdev/nbd_common.sh@27 -- # (( i++ ))
00:11:25.380   16:55:18	-- bdev/nbd_common.sh@27 -- # (( i < 16 ))
00:11:25.380    16:55:18	-- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p1
00:11:25.639   16:55:18	-- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2
00:11:25.639    16:55:18	-- bdev/nbd_common.sh@30 -- # basename /dev/nbd2
00:11:25.639   16:55:18	-- bdev/nbd_common.sh@30 -- # waitfornbd nbd2
00:11:25.639   16:55:18	-- common/autotest_common.sh@866 -- # local nbd_name=nbd2
00:11:25.639   16:55:18	-- common/autotest_common.sh@867 -- # local i
00:11:25.639   16:55:18	-- common/autotest_common.sh@869 -- # (( i = 1 ))
00:11:25.639   16:55:18	-- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:11:25.639   16:55:18	-- common/autotest_common.sh@870 -- # grep -q -w nbd2 /proc/partitions
00:11:25.639   16:55:18	-- common/autotest_common.sh@871 -- # break
00:11:25.639   16:55:18	-- common/autotest_common.sh@882 -- # (( i = 1 ))
00:11:25.639   16:55:18	-- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:11:25.639   16:55:18	-- common/autotest_common.sh@883 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:11:25.639  1+0 records in
00:11:25.639  1+0 records out
00:11:25.639  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000628947 s, 6.5 MB/s
00:11:25.639    16:55:18	-- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:25.639   16:55:18	-- common/autotest_common.sh@884 -- # size=4096
00:11:25.639   16:55:18	-- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:25.639   16:55:18	-- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:11:25.639   16:55:18	-- common/autotest_common.sh@887 -- # return 0
00:11:25.639   16:55:18	-- bdev/nbd_common.sh@27 -- # (( i++ ))
00:11:25.639   16:55:18	-- bdev/nbd_common.sh@27 -- # (( i < 16 ))
00:11:25.639    16:55:18	-- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p0
00:11:25.898   16:55:18	-- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3
00:11:25.898    16:55:18	-- bdev/nbd_common.sh@30 -- # basename /dev/nbd3
00:11:25.898   16:55:18	-- bdev/nbd_common.sh@30 -- # waitfornbd nbd3
00:11:25.898   16:55:18	-- common/autotest_common.sh@866 -- # local nbd_name=nbd3
00:11:25.898   16:55:18	-- common/autotest_common.sh@867 -- # local i
00:11:25.898   16:55:18	-- common/autotest_common.sh@869 -- # (( i = 1 ))
00:11:25.898   16:55:18	-- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:11:25.898   16:55:18	-- common/autotest_common.sh@870 -- # grep -q -w nbd3 /proc/partitions
00:11:25.898   16:55:18	-- common/autotest_common.sh@871 -- # break
00:11:25.898   16:55:18	-- common/autotest_common.sh@882 -- # (( i = 1 ))
00:11:25.898   16:55:18	-- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:11:25.898   16:55:18	-- common/autotest_common.sh@883 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:11:25.898  1+0 records in
00:11:25.898  1+0 records out
00:11:25.898  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000431873 s, 9.5 MB/s
00:11:25.898    16:55:18	-- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:25.898   16:55:18	-- common/autotest_common.sh@884 -- # size=4096
00:11:25.898   16:55:18	-- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:25.898   16:55:18	-- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:11:25.898   16:55:18	-- common/autotest_common.sh@887 -- # return 0
00:11:25.898   16:55:18	-- bdev/nbd_common.sh@27 -- # (( i++ ))
00:11:25.898   16:55:18	-- bdev/nbd_common.sh@27 -- # (( i < 16 ))
00:11:25.898    16:55:18	-- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p1
00:11:26.157   16:55:18	-- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4
00:11:26.157    16:55:18	-- bdev/nbd_common.sh@30 -- # basename /dev/nbd4
00:11:26.157   16:55:18	-- bdev/nbd_common.sh@30 -- # waitfornbd nbd4
00:11:26.157   16:55:18	-- common/autotest_common.sh@866 -- # local nbd_name=nbd4
00:11:26.157   16:55:18	-- common/autotest_common.sh@867 -- # local i
00:11:26.157   16:55:18	-- common/autotest_common.sh@869 -- # (( i = 1 ))
00:11:26.157   16:55:18	-- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:11:26.157   16:55:18	-- common/autotest_common.sh@870 -- # grep -q -w nbd4 /proc/partitions
00:11:26.157   16:55:18	-- common/autotest_common.sh@871 -- # break
00:11:26.157   16:55:18	-- common/autotest_common.sh@882 -- # (( i = 1 ))
00:11:26.157   16:55:18	-- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:11:26.157   16:55:18	-- common/autotest_common.sh@883 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:11:26.157  1+0 records in
00:11:26.157  1+0 records out
00:11:26.157  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000592042 s, 6.9 MB/s
00:11:26.157    16:55:18	-- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:26.157   16:55:18	-- common/autotest_common.sh@884 -- # size=4096
00:11:26.157   16:55:18	-- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:26.157   16:55:18	-- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:11:26.157   16:55:18	-- common/autotest_common.sh@887 -- # return 0
00:11:26.157   16:55:18	-- bdev/nbd_common.sh@27 -- # (( i++ ))
00:11:26.157   16:55:18	-- bdev/nbd_common.sh@27 -- # (( i < 16 ))
00:11:26.157    16:55:18	-- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p2
00:11:26.416   16:55:19	-- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5
00:11:26.416    16:55:19	-- bdev/nbd_common.sh@30 -- # basename /dev/nbd5
00:11:26.416   16:55:19	-- bdev/nbd_common.sh@30 -- # waitfornbd nbd5
00:11:26.416   16:55:19	-- common/autotest_common.sh@866 -- # local nbd_name=nbd5
00:11:26.416   16:55:19	-- common/autotest_common.sh@867 -- # local i
00:11:26.416   16:55:19	-- common/autotest_common.sh@869 -- # (( i = 1 ))
00:11:26.416   16:55:19	-- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:11:26.416   16:55:19	-- common/autotest_common.sh@870 -- # grep -q -w nbd5 /proc/partitions
00:11:26.416   16:55:19	-- common/autotest_common.sh@871 -- # break
00:11:26.416   16:55:19	-- common/autotest_common.sh@882 -- # (( i = 1 ))
00:11:26.416   16:55:19	-- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:11:26.416   16:55:19	-- common/autotest_common.sh@883 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:11:26.416  1+0 records in
00:11:26.416  1+0 records out
00:11:26.416  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00119377 s, 3.4 MB/s
00:11:26.416    16:55:19	-- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:26.416   16:55:19	-- common/autotest_common.sh@884 -- # size=4096
00:11:26.416   16:55:19	-- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:26.416   16:55:19	-- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:11:26.416   16:55:19	-- common/autotest_common.sh@887 -- # return 0
00:11:26.416   16:55:19	-- bdev/nbd_common.sh@27 -- # (( i++ ))
00:11:26.416   16:55:19	-- bdev/nbd_common.sh@27 -- # (( i < 16 ))
00:11:26.416    16:55:19	-- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p3
00:11:26.982   16:55:19	-- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6
00:11:26.982    16:55:19	-- bdev/nbd_common.sh@30 -- # basename /dev/nbd6
00:11:26.982   16:55:19	-- bdev/nbd_common.sh@30 -- # waitfornbd nbd6
00:11:26.982   16:55:19	-- common/autotest_common.sh@866 -- # local nbd_name=nbd6
00:11:26.982   16:55:19	-- common/autotest_common.sh@867 -- # local i
00:11:26.982   16:55:19	-- common/autotest_common.sh@869 -- # (( i = 1 ))
00:11:26.982   16:55:19	-- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:11:26.982   16:55:19	-- common/autotest_common.sh@870 -- # grep -q -w nbd6 /proc/partitions
00:11:26.982   16:55:19	-- common/autotest_common.sh@871 -- # break
00:11:26.982   16:55:19	-- common/autotest_common.sh@882 -- # (( i = 1 ))
00:11:26.982   16:55:19	-- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:11:26.982   16:55:19	-- common/autotest_common.sh@883 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:11:26.982  1+0 records in
00:11:26.982  1+0 records out
00:11:26.982  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000965817 s, 4.2 MB/s
00:11:26.982    16:55:19	-- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:26.982   16:55:19	-- common/autotest_common.sh@884 -- # size=4096
00:11:26.982   16:55:19	-- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:26.982   16:55:19	-- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:11:26.982   16:55:19	-- common/autotest_common.sh@887 -- # return 0
00:11:26.982   16:55:19	-- bdev/nbd_common.sh@27 -- # (( i++ ))
00:11:26.982   16:55:19	-- bdev/nbd_common.sh@27 -- # (( i < 16 ))
00:11:26.982    16:55:19	-- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p4
00:11:27.241   16:55:19	-- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd7
00:11:27.241    16:55:19	-- bdev/nbd_common.sh@30 -- # basename /dev/nbd7
00:11:27.241   16:55:19	-- bdev/nbd_common.sh@30 -- # waitfornbd nbd7
00:11:27.241   16:55:19	-- common/autotest_common.sh@866 -- # local nbd_name=nbd7
00:11:27.241   16:55:19	-- common/autotest_common.sh@867 -- # local i
00:11:27.241   16:55:19	-- common/autotest_common.sh@869 -- # (( i = 1 ))
00:11:27.241   16:55:19	-- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:11:27.241   16:55:19	-- common/autotest_common.sh@870 -- # grep -q -w nbd7 /proc/partitions
00:11:27.241   16:55:19	-- common/autotest_common.sh@871 -- # break
00:11:27.241   16:55:19	-- common/autotest_common.sh@882 -- # (( i = 1 ))
00:11:27.241   16:55:19	-- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:11:27.241   16:55:19	-- common/autotest_common.sh@883 -- # dd if=/dev/nbd7 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:11:27.241  1+0 records in
00:11:27.241  1+0 records out
00:11:27.241  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000590119 s, 6.9 MB/s
00:11:27.241    16:55:19	-- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:27.241   16:55:19	-- common/autotest_common.sh@884 -- # size=4096
00:11:27.241   16:55:19	-- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:27.241   16:55:19	-- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:11:27.241   16:55:19	-- common/autotest_common.sh@887 -- # return 0
00:11:27.241   16:55:19	-- bdev/nbd_common.sh@27 -- # (( i++ ))
00:11:27.241   16:55:19	-- bdev/nbd_common.sh@27 -- # (( i < 16 ))
00:11:27.241    16:55:19	-- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p5
00:11:27.500   16:55:20	-- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd8
00:11:27.500    16:55:20	-- bdev/nbd_common.sh@30 -- # basename /dev/nbd8
00:11:27.500   16:55:20	-- bdev/nbd_common.sh@30 -- # waitfornbd nbd8
00:11:27.500   16:55:20	-- common/autotest_common.sh@866 -- # local nbd_name=nbd8
00:11:27.500   16:55:20	-- common/autotest_common.sh@867 -- # local i
00:11:27.500   16:55:20	-- common/autotest_common.sh@869 -- # (( i = 1 ))
00:11:27.500   16:55:20	-- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:11:27.500   16:55:20	-- common/autotest_common.sh@870 -- # grep -q -w nbd8 /proc/partitions
00:11:27.500   16:55:20	-- common/autotest_common.sh@871 -- # break
00:11:27.500   16:55:20	-- common/autotest_common.sh@882 -- # (( i = 1 ))
00:11:27.500   16:55:20	-- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:11:27.500   16:55:20	-- common/autotest_common.sh@883 -- # dd if=/dev/nbd8 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:11:27.500  1+0 records in
00:11:27.500  1+0 records out
00:11:27.500  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00233377 s, 1.8 MB/s
00:11:27.500    16:55:20	-- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:27.500   16:55:20	-- common/autotest_common.sh@884 -- # size=4096
00:11:27.500   16:55:20	-- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:27.500   16:55:20	-- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:11:27.500   16:55:20	-- common/autotest_common.sh@887 -- # return 0
00:11:27.500   16:55:20	-- bdev/nbd_common.sh@27 -- # (( i++ ))
00:11:27.500   16:55:20	-- bdev/nbd_common.sh@27 -- # (( i < 16 ))
00:11:27.500    16:55:20	-- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p6
00:11:27.759   16:55:20	-- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd9
00:11:27.759    16:55:20	-- bdev/nbd_common.sh@30 -- # basename /dev/nbd9
00:11:27.759   16:55:20	-- bdev/nbd_common.sh@30 -- # waitfornbd nbd9
00:11:27.759   16:55:20	-- common/autotest_common.sh@866 -- # local nbd_name=nbd9
00:11:27.759   16:55:20	-- common/autotest_common.sh@867 -- # local i
00:11:27.759   16:55:20	-- common/autotest_common.sh@869 -- # (( i = 1 ))
00:11:27.759   16:55:20	-- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:11:27.759   16:55:20	-- common/autotest_common.sh@870 -- # grep -q -w nbd9 /proc/partitions
00:11:27.759   16:55:20	-- common/autotest_common.sh@871 -- # break
00:11:27.759   16:55:20	-- common/autotest_common.sh@882 -- # (( i = 1 ))
00:11:27.759   16:55:20	-- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:11:27.759   16:55:20	-- common/autotest_common.sh@883 -- # dd if=/dev/nbd9 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:11:27.759  1+0 records in
00:11:27.759  1+0 records out
00:11:27.759  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000961144 s, 4.3 MB/s
00:11:27.759    16:55:20	-- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:27.759   16:55:20	-- common/autotest_common.sh@884 -- # size=4096
00:11:27.759   16:55:20	-- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:27.759   16:55:20	-- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:11:27.759   16:55:20	-- common/autotest_common.sh@887 -- # return 0
00:11:27.759   16:55:20	-- bdev/nbd_common.sh@27 -- # (( i++ ))
00:11:27.759   16:55:20	-- bdev/nbd_common.sh@27 -- # (( i < 16 ))
00:11:27.759    16:55:20	-- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p7
00:11:28.018   16:55:20	-- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd10
00:11:28.018    16:55:20	-- bdev/nbd_common.sh@30 -- # basename /dev/nbd10
00:11:28.018   16:55:20	-- bdev/nbd_common.sh@30 -- # waitfornbd nbd10
00:11:28.018   16:55:20	-- common/autotest_common.sh@866 -- # local nbd_name=nbd10
00:11:28.018   16:55:20	-- common/autotest_common.sh@867 -- # local i
00:11:28.018   16:55:20	-- common/autotest_common.sh@869 -- # (( i = 1 ))
00:11:28.018   16:55:20	-- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:11:28.018   16:55:20	-- common/autotest_common.sh@870 -- # grep -q -w nbd10 /proc/partitions
00:11:28.018   16:55:20	-- common/autotest_common.sh@871 -- # break
00:11:28.018   16:55:20	-- common/autotest_common.sh@882 -- # (( i = 1 ))
00:11:28.018   16:55:20	-- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:11:28.018   16:55:20	-- common/autotest_common.sh@883 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:11:28.018  1+0 records in
00:11:28.018  1+0 records out
00:11:28.018  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000868942 s, 4.7 MB/s
00:11:28.018    16:55:20	-- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:28.018   16:55:20	-- common/autotest_common.sh@884 -- # size=4096
00:11:28.018   16:55:20	-- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:28.018   16:55:20	-- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:11:28.018   16:55:20	-- common/autotest_common.sh@887 -- # return 0
00:11:28.018   16:55:20	-- bdev/nbd_common.sh@27 -- # (( i++ ))
00:11:28.018   16:55:20	-- bdev/nbd_common.sh@27 -- # (( i < 16 ))
00:11:28.018    16:55:20	-- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk TestPT
00:11:28.277   16:55:21	-- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd11
00:11:28.277    16:55:21	-- bdev/nbd_common.sh@30 -- # basename /dev/nbd11
00:11:28.277   16:55:21	-- bdev/nbd_common.sh@30 -- # waitfornbd nbd11
00:11:28.277   16:55:21	-- common/autotest_common.sh@866 -- # local nbd_name=nbd11
00:11:28.277   16:55:21	-- common/autotest_common.sh@867 -- # local i
00:11:28.277   16:55:21	-- common/autotest_common.sh@869 -- # (( i = 1 ))
00:11:28.277   16:55:21	-- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:11:28.277   16:55:21	-- common/autotest_common.sh@870 -- # grep -q -w nbd11 /proc/partitions
00:11:28.277   16:55:21	-- common/autotest_common.sh@871 -- # break
00:11:28.277   16:55:21	-- common/autotest_common.sh@882 -- # (( i = 1 ))
00:11:28.277   16:55:21	-- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:11:28.277   16:55:21	-- common/autotest_common.sh@883 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:11:28.277  1+0 records in
00:11:28.277  1+0 records out
00:11:28.277  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000816693 s, 5.0 MB/s
00:11:28.277    16:55:21	-- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:28.277   16:55:21	-- common/autotest_common.sh@884 -- # size=4096
00:11:28.277   16:55:21	-- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:28.277   16:55:21	-- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:11:28.277   16:55:21	-- common/autotest_common.sh@887 -- # return 0
00:11:28.277   16:55:21	-- bdev/nbd_common.sh@27 -- # (( i++ ))
00:11:28.277   16:55:21	-- bdev/nbd_common.sh@27 -- # (( i < 16 ))
00:11:28.277    16:55:21	-- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid0
00:11:28.537   16:55:21	-- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd12
00:11:28.537    16:55:21	-- bdev/nbd_common.sh@30 -- # basename /dev/nbd12
00:11:28.537   16:55:21	-- bdev/nbd_common.sh@30 -- # waitfornbd nbd12
00:11:28.537   16:55:21	-- common/autotest_common.sh@866 -- # local nbd_name=nbd12
00:11:28.537   16:55:21	-- common/autotest_common.sh@867 -- # local i
00:11:28.537   16:55:21	-- common/autotest_common.sh@869 -- # (( i = 1 ))
00:11:28.537   16:55:21	-- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:11:28.537   16:55:21	-- common/autotest_common.sh@870 -- # grep -q -w nbd12 /proc/partitions
00:11:28.537   16:55:21	-- common/autotest_common.sh@871 -- # break
00:11:28.537   16:55:21	-- common/autotest_common.sh@882 -- # (( i = 1 ))
00:11:28.537   16:55:21	-- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:11:28.537   16:55:21	-- common/autotest_common.sh@883 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:11:28.537  1+0 records in
00:11:28.537  1+0 records out
00:11:28.537  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000894443 s, 4.6 MB/s
00:11:28.537    16:55:21	-- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:28.537   16:55:21	-- common/autotest_common.sh@884 -- # size=4096
00:11:28.537   16:55:21	-- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:28.537   16:55:21	-- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:11:28.537   16:55:21	-- common/autotest_common.sh@887 -- # return 0
00:11:28.537   16:55:21	-- bdev/nbd_common.sh@27 -- # (( i++ ))
00:11:28.537   16:55:21	-- bdev/nbd_common.sh@27 -- # (( i < 16 ))
00:11:28.537    16:55:21	-- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk concat0
00:11:28.796   16:55:21	-- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd13
00:11:28.796    16:55:21	-- bdev/nbd_common.sh@30 -- # basename /dev/nbd13
00:11:28.796   16:55:21	-- bdev/nbd_common.sh@30 -- # waitfornbd nbd13
00:11:28.796   16:55:21	-- common/autotest_common.sh@866 -- # local nbd_name=nbd13
00:11:28.796   16:55:21	-- common/autotest_common.sh@867 -- # local i
00:11:28.796   16:55:21	-- common/autotest_common.sh@869 -- # (( i = 1 ))
00:11:28.796   16:55:21	-- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:11:28.796   16:55:21	-- common/autotest_common.sh@870 -- # grep -q -w nbd13 /proc/partitions
00:11:28.796   16:55:21	-- common/autotest_common.sh@871 -- # break
00:11:28.796   16:55:21	-- common/autotest_common.sh@882 -- # (( i = 1 ))
00:11:28.796   16:55:21	-- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:11:28.796   16:55:21	-- common/autotest_common.sh@883 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:11:28.796  1+0 records in
00:11:28.796  1+0 records out
00:11:28.796  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00121103 s, 3.4 MB/s
00:11:28.796    16:55:21	-- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:28.796   16:55:21	-- common/autotest_common.sh@884 -- # size=4096
00:11:28.796   16:55:21	-- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:28.796   16:55:21	-- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:11:28.796   16:55:21	-- common/autotest_common.sh@887 -- # return 0
00:11:28.796   16:55:21	-- bdev/nbd_common.sh@27 -- # (( i++ ))
00:11:28.796   16:55:21	-- bdev/nbd_common.sh@27 -- # (( i < 16 ))
00:11:28.796    16:55:21	-- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid1
00:11:29.053   16:55:21	-- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd14
00:11:29.053    16:55:21	-- bdev/nbd_common.sh@30 -- # basename /dev/nbd14
00:11:29.053   16:55:21	-- bdev/nbd_common.sh@30 -- # waitfornbd nbd14
00:11:29.053   16:55:21	-- common/autotest_common.sh@866 -- # local nbd_name=nbd14
00:11:29.053   16:55:21	-- common/autotest_common.sh@867 -- # local i
00:11:29.054   16:55:21	-- common/autotest_common.sh@869 -- # (( i = 1 ))
00:11:29.054   16:55:21	-- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:11:29.054   16:55:21	-- common/autotest_common.sh@870 -- # grep -q -w nbd14 /proc/partitions
00:11:29.054   16:55:21	-- common/autotest_common.sh@871 -- # break
00:11:29.054   16:55:21	-- common/autotest_common.sh@882 -- # (( i = 1 ))
00:11:29.054   16:55:21	-- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:11:29.054   16:55:21	-- common/autotest_common.sh@883 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:11:29.054  1+0 records in
00:11:29.054  1+0 records out
00:11:29.054  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00104924 s, 3.9 MB/s
00:11:29.054    16:55:21	-- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:29.054   16:55:21	-- common/autotest_common.sh@884 -- # size=4096
00:11:29.054   16:55:21	-- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:29.054   16:55:21	-- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:11:29.054   16:55:21	-- common/autotest_common.sh@887 -- # return 0
00:11:29.054   16:55:21	-- bdev/nbd_common.sh@27 -- # (( i++ ))
00:11:29.054   16:55:21	-- bdev/nbd_common.sh@27 -- # (( i < 16 ))
00:11:29.313    16:55:21	-- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk AIO0
00:11:29.572   16:55:22	-- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd15
00:11:29.572    16:55:22	-- bdev/nbd_common.sh@30 -- # basename /dev/nbd15
00:11:29.572   16:55:22	-- bdev/nbd_common.sh@30 -- # waitfornbd nbd15
00:11:29.572   16:55:22	-- common/autotest_common.sh@866 -- # local nbd_name=nbd15
00:11:29.572   16:55:22	-- common/autotest_common.sh@867 -- # local i
00:11:29.572   16:55:22	-- common/autotest_common.sh@869 -- # (( i = 1 ))
00:11:29.572   16:55:22	-- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:11:29.572   16:55:22	-- common/autotest_common.sh@870 -- # grep -q -w nbd15 /proc/partitions
00:11:29.572   16:55:22	-- common/autotest_common.sh@871 -- # break
00:11:29.572   16:55:22	-- common/autotest_common.sh@882 -- # (( i = 1 ))
00:11:29.572   16:55:22	-- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:11:29.572   16:55:22	-- common/autotest_common.sh@883 -- # dd if=/dev/nbd15 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:11:29.572  1+0 records in
00:11:29.572  1+0 records out
00:11:29.572  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.001678 s, 2.4 MB/s
00:11:29.572    16:55:22	-- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:29.572   16:55:22	-- common/autotest_common.sh@884 -- # size=4096
00:11:29.572   16:55:22	-- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:29.572   16:55:22	-- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:11:29.572   16:55:22	-- common/autotest_common.sh@887 -- # return 0
00:11:29.572   16:55:22	-- bdev/nbd_common.sh@27 -- # (( i++ ))
00:11:29.572   16:55:22	-- bdev/nbd_common.sh@27 -- # (( i < 16 ))
00:11:29.572    16:55:22	-- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:11:29.831   16:55:22	-- bdev/nbd_common.sh@118 -- # nbd_disks_json='[
00:11:29.831    {
00:11:29.831      "nbd_device": "/dev/nbd0",
00:11:29.831      "bdev_name": "Malloc0"
00:11:29.831    },
00:11:29.831    {
00:11:29.831      "nbd_device": "/dev/nbd1",
00:11:29.831      "bdev_name": "Malloc1p0"
00:11:29.831    },
00:11:29.831    {
00:11:29.831      "nbd_device": "/dev/nbd2",
00:11:29.831      "bdev_name": "Malloc1p1"
00:11:29.831    },
00:11:29.831    {
00:11:29.831      "nbd_device": "/dev/nbd3",
00:11:29.831      "bdev_name": "Malloc2p0"
00:11:29.831    },
00:11:29.831    {
00:11:29.831      "nbd_device": "/dev/nbd4",
00:11:29.831      "bdev_name": "Malloc2p1"
00:11:29.831    },
00:11:29.831    {
00:11:29.831      "nbd_device": "/dev/nbd5",
00:11:29.831      "bdev_name": "Malloc2p2"
00:11:29.831    },
00:11:29.831    {
00:11:29.831      "nbd_device": "/dev/nbd6",
00:11:29.831      "bdev_name": "Malloc2p3"
00:11:29.831    },
00:11:29.831    {
00:11:29.831      "nbd_device": "/dev/nbd7",
00:11:29.831      "bdev_name": "Malloc2p4"
00:11:29.831    },
00:11:29.831    {
00:11:29.831      "nbd_device": "/dev/nbd8",
00:11:29.831      "bdev_name": "Malloc2p5"
00:11:29.831    },
00:11:29.831    {
00:11:29.831      "nbd_device": "/dev/nbd9",
00:11:29.831      "bdev_name": "Malloc2p6"
00:11:29.831    },
00:11:29.831    {
00:11:29.831      "nbd_device": "/dev/nbd10",
00:11:29.831      "bdev_name": "Malloc2p7"
00:11:29.831    },
00:11:29.831    {
00:11:29.831      "nbd_device": "/dev/nbd11",
00:11:29.831      "bdev_name": "TestPT"
00:11:29.831    },
00:11:29.831    {
00:11:29.831      "nbd_device": "/dev/nbd12",
00:11:29.831      "bdev_name": "raid0"
00:11:29.831    },
00:11:29.831    {
00:11:29.831      "nbd_device": "/dev/nbd13",
00:11:29.831      "bdev_name": "concat0"
00:11:29.831    },
00:11:29.831    {
00:11:29.831      "nbd_device": "/dev/nbd14",
00:11:29.831      "bdev_name": "raid1"
00:11:29.831    },
00:11:29.831    {
00:11:29.831      "nbd_device": "/dev/nbd15",
00:11:29.831      "bdev_name": "AIO0"
00:11:29.831    }
00:11:29.831  ]'
00:11:29.831   16:55:22	-- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device'))
00:11:29.831    16:55:22	-- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device'
00:11:29.831    16:55:22	-- bdev/nbd_common.sh@119 -- # echo '[
00:11:29.831    {
00:11:29.831      "nbd_device": "/dev/nbd0",
00:11:29.831      "bdev_name": "Malloc0"
00:11:29.831    },
00:11:29.831    {
00:11:29.831      "nbd_device": "/dev/nbd1",
00:11:29.831      "bdev_name": "Malloc1p0"
00:11:29.831    },
00:11:29.831    {
00:11:29.831      "nbd_device": "/dev/nbd2",
00:11:29.831      "bdev_name": "Malloc1p1"
00:11:29.831    },
00:11:29.831    {
00:11:29.831      "nbd_device": "/dev/nbd3",
00:11:29.831      "bdev_name": "Malloc2p0"
00:11:29.831    },
00:11:29.831    {
00:11:29.831      "nbd_device": "/dev/nbd4",
00:11:29.831      "bdev_name": "Malloc2p1"
00:11:29.831    },
00:11:29.831    {
00:11:29.831      "nbd_device": "/dev/nbd5",
00:11:29.831      "bdev_name": "Malloc2p2"
00:11:29.831    },
00:11:29.831    {
00:11:29.831      "nbd_device": "/dev/nbd6",
00:11:29.831      "bdev_name": "Malloc2p3"
00:11:29.831    },
00:11:29.831    {
00:11:29.831      "nbd_device": "/dev/nbd7",
00:11:29.831      "bdev_name": "Malloc2p4"
00:11:29.831    },
00:11:29.831    {
00:11:29.831      "nbd_device": "/dev/nbd8",
00:11:29.831      "bdev_name": "Malloc2p5"
00:11:29.831    },
00:11:29.831    {
00:11:29.831      "nbd_device": "/dev/nbd9",
00:11:29.831      "bdev_name": "Malloc2p6"
00:11:29.831    },
00:11:29.831    {
00:11:29.831      "nbd_device": "/dev/nbd10",
00:11:29.831      "bdev_name": "Malloc2p7"
00:11:29.831    },
00:11:29.831    {
00:11:29.831      "nbd_device": "/dev/nbd11",
00:11:29.831      "bdev_name": "TestPT"
00:11:29.831    },
00:11:29.831    {
00:11:29.831      "nbd_device": "/dev/nbd12",
00:11:29.831      "bdev_name": "raid0"
00:11:29.831    },
00:11:29.831    {
00:11:29.831      "nbd_device": "/dev/nbd13",
00:11:29.831      "bdev_name": "concat0"
00:11:29.831    },
00:11:29.831    {
00:11:29.831      "nbd_device": "/dev/nbd14",
00:11:29.831      "bdev_name": "raid1"
00:11:29.831    },
00:11:29.831    {
00:11:29.831      "nbd_device": "/dev/nbd15",
00:11:29.831      "bdev_name": "AIO0"
00:11:29.831    }
00:11:29.831  ]'
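The nbd_get_disks RPC returns the device-to-bdev mapping as JSON, and the harness flattens it into a bash array with the jq filter traced above. The same filter can be run standalone; the select() lookup at the end is an illustrative extension, not something the harness does here.

    # Extract just the /dev/nbdX paths from the nbd_get_disks output.
    ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks | jq -r '.[] | .nbd_device'
    # A single bdev name can be looked up the same way, e.g.:
    ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks \
        | jq -r '.[] | select(.nbd_device == "/dev/nbd11") | .bdev_name'   # -> TestPT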
00:11:29.831   16:55:22	-- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15'
00:11:29.831   16:55:22	-- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:11:29.831   16:55:22	-- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15')
00:11:29.831   16:55:22	-- bdev/nbd_common.sh@50 -- # local nbd_list
00:11:29.831   16:55:22	-- bdev/nbd_common.sh@51 -- # local i
00:11:29.831   16:55:22	-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:11:29.831   16:55:22	-- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:11:30.090    16:55:22	-- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:11:30.090   16:55:22	-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:11:30.090   16:55:22	-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:11:30.090   16:55:22	-- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:11:30.090   16:55:22	-- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:11:30.090   16:55:22	-- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:11:30.090   16:55:22	-- bdev/nbd_common.sh@41 -- # break
00:11:30.090   16:55:22	-- bdev/nbd_common.sh@45 -- # return 0
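Teardown mirrors setup: after each nbd_stop_disk RPC the harness calls waitfornbd_exit, looping until the device name drops out of /proc/partitions. A minimal sketch inferred from the traced grep/break pattern follows; the real helper's sleep and timeout handling may differ.

    # Inferred waitfornbd_exit: wait for the nbd entry to disappear.
    waitfornbd_exit() {
        local nbd_name=$1 i
        for (( i = 1; i <= 20; i++ )); do
            grep -q -w "$nbd_name" /proc/partitions || break   # gone -> done
            sleep 0.1
        done
        return 0
    }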
00:11:30.090   16:55:22	-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:11:30.090   16:55:22	-- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:11:30.350    16:55:23	-- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:11:30.350   16:55:23	-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:11:30.350   16:55:23	-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:11:30.350   16:55:23	-- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:11:30.350   16:55:23	-- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:11:30.350   16:55:23	-- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:11:30.350   16:55:23	-- bdev/nbd_common.sh@41 -- # break
00:11:30.350   16:55:23	-- bdev/nbd_common.sh@45 -- # return 0
00:11:30.350   16:55:23	-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:11:30.350   16:55:23	-- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2
00:11:30.609    16:55:23	-- bdev/nbd_common.sh@55 -- # basename /dev/nbd2
00:11:30.609   16:55:23	-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2
00:11:30.609   16:55:23	-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2
00:11:30.609   16:55:23	-- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:11:30.609   16:55:23	-- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:11:30.609   16:55:23	-- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions
00:11:30.609   16:55:23	-- bdev/nbd_common.sh@41 -- # break
00:11:30.609   16:55:23	-- bdev/nbd_common.sh@45 -- # return 0
00:11:30.609   16:55:23	-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:11:30.609   16:55:23	-- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3
00:11:30.868    16:55:23	-- bdev/nbd_common.sh@55 -- # basename /dev/nbd3
00:11:30.868   16:55:23	-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3
00:11:30.868   16:55:23	-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3
00:11:30.868   16:55:23	-- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:11:30.868   16:55:23	-- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:11:30.868   16:55:23	-- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions
00:11:30.868   16:55:23	-- bdev/nbd_common.sh@41 -- # break
00:11:30.868   16:55:23	-- bdev/nbd_common.sh@45 -- # return 0
00:11:30.868   16:55:23	-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:11:30.868   16:55:23	-- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4
00:11:31.127    16:55:23	-- bdev/nbd_common.sh@55 -- # basename /dev/nbd4
00:11:31.127   16:55:23	-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4
00:11:31.127   16:55:23	-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4
00:11:31.127   16:55:23	-- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:11:31.127   16:55:23	-- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:11:31.127   16:55:23	-- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions
00:11:31.127   16:55:23	-- bdev/nbd_common.sh@41 -- # break
00:11:31.127   16:55:23	-- bdev/nbd_common.sh@45 -- # return 0
00:11:31.128   16:55:23	-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:11:31.128   16:55:23	-- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5
00:11:31.386    16:55:23	-- bdev/nbd_common.sh@55 -- # basename /dev/nbd5
00:11:31.386   16:55:23	-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5
00:11:31.386   16:55:23	-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5
00:11:31.386   16:55:23	-- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:11:31.386   16:55:23	-- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:11:31.386   16:55:23	-- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions
00:11:31.386   16:55:23	-- bdev/nbd_common.sh@41 -- # break
00:11:31.386   16:55:23	-- bdev/nbd_common.sh@45 -- # return 0
00:11:31.386   16:55:23	-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:11:31.386   16:55:23	-- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6
00:11:31.386    16:55:24	-- bdev/nbd_common.sh@55 -- # basename /dev/nbd6
00:11:31.386   16:55:24	-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6
00:11:31.386   16:55:24	-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6
00:11:31.386   16:55:24	-- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:11:31.387   16:55:24	-- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:11:31.387   16:55:24	-- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions
00:11:31.646   16:55:24	-- bdev/nbd_common.sh@41 -- # break
00:11:31.646   16:55:24	-- bdev/nbd_common.sh@45 -- # return 0
00:11:31.646   16:55:24	-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:11:31.646   16:55:24	-- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd7
00:11:31.905    16:55:24	-- bdev/nbd_common.sh@55 -- # basename /dev/nbd7
00:11:31.905   16:55:24	-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd7
00:11:31.905   16:55:24	-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd7
00:11:31.905   16:55:24	-- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:11:31.905   16:55:24	-- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:11:31.905   16:55:24	-- bdev/nbd_common.sh@38 -- # grep -q -w nbd7 /proc/partitions
00:11:31.905   16:55:24	-- bdev/nbd_common.sh@41 -- # break
00:11:31.905   16:55:24	-- bdev/nbd_common.sh@45 -- # return 0
00:11:31.905   16:55:24	-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:11:31.905   16:55:24	-- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd8
00:11:31.905    16:55:24	-- bdev/nbd_common.sh@55 -- # basename /dev/nbd8
00:11:32.164   16:55:24	-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd8
00:11:32.164   16:55:24	-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd8
00:11:32.164   16:55:24	-- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:11:32.164   16:55:24	-- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:11:32.164   16:55:24	-- bdev/nbd_common.sh@38 -- # grep -q -w nbd8 /proc/partitions
00:11:32.164   16:55:24	-- bdev/nbd_common.sh@41 -- # break
00:11:32.164   16:55:24	-- bdev/nbd_common.sh@45 -- # return 0
00:11:32.164   16:55:24	-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:11:32.164   16:55:24	-- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd9
00:11:32.164    16:55:24	-- bdev/nbd_common.sh@55 -- # basename /dev/nbd9
00:11:32.164   16:55:24	-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd9
00:11:32.164   16:55:24	-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd9
00:11:32.164   16:55:24	-- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:11:32.164   16:55:24	-- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:11:32.164   16:55:24	-- bdev/nbd_common.sh@38 -- # grep -q -w nbd9 /proc/partitions
00:11:32.164   16:55:24	-- bdev/nbd_common.sh@41 -- # break
00:11:32.164   16:55:24	-- bdev/nbd_common.sh@45 -- # return 0
00:11:32.164   16:55:24	-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:11:32.164   16:55:24	-- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10
00:11:32.423    16:55:25	-- bdev/nbd_common.sh@55 -- # basename /dev/nbd10
00:11:32.423   16:55:25	-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10
00:11:32.423   16:55:25	-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10
00:11:32.423   16:55:25	-- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:11:32.423   16:55:25	-- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:11:32.423   16:55:25	-- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions
00:11:32.423   16:55:25	-- bdev/nbd_common.sh@41 -- # break
00:11:32.423   16:55:25	-- bdev/nbd_common.sh@45 -- # return 0
00:11:32.423   16:55:25	-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:11:32.423   16:55:25	-- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11
00:11:32.682    16:55:25	-- bdev/nbd_common.sh@55 -- # basename /dev/nbd11
00:11:32.682   16:55:25	-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11
00:11:32.682   16:55:25	-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11
00:11:32.682   16:55:25	-- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:11:32.682   16:55:25	-- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:11:32.682   16:55:25	-- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions
00:11:32.682   16:55:25	-- bdev/nbd_common.sh@41 -- # break
00:11:32.682   16:55:25	-- bdev/nbd_common.sh@45 -- # return 0
00:11:32.682   16:55:25	-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:11:32.682   16:55:25	-- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12
00:11:32.941    16:55:25	-- bdev/nbd_common.sh@55 -- # basename /dev/nbd12
00:11:32.941   16:55:25	-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12
00:11:32.941   16:55:25	-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12
00:11:32.941   16:55:25	-- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:11:32.941   16:55:25	-- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:11:32.941   16:55:25	-- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions
00:11:32.941   16:55:25	-- bdev/nbd_common.sh@41 -- # break
00:11:32.941   16:55:25	-- bdev/nbd_common.sh@45 -- # return 0
00:11:32.941   16:55:25	-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:11:32.941   16:55:25	-- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13
00:11:33.201    16:55:25	-- bdev/nbd_common.sh@55 -- # basename /dev/nbd13
00:11:33.201   16:55:25	-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13
00:11:33.201   16:55:25	-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13
00:11:33.201   16:55:25	-- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:11:33.201   16:55:25	-- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:11:33.201   16:55:25	-- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions
00:11:33.201   16:55:25	-- bdev/nbd_common.sh@41 -- # break
00:11:33.201   16:55:25	-- bdev/nbd_common.sh@45 -- # return 0
00:11:33.201   16:55:25	-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:11:33.201   16:55:25	-- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14
00:11:33.460    16:55:26	-- bdev/nbd_common.sh@55 -- # basename /dev/nbd14
00:11:33.460   16:55:26	-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14
00:11:33.460   16:55:26	-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14
00:11:33.460   16:55:26	-- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:11:33.460   16:55:26	-- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:11:33.460   16:55:26	-- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions
00:11:33.460   16:55:26	-- bdev/nbd_common.sh@41 -- # break
00:11:33.460   16:55:26	-- bdev/nbd_common.sh@45 -- # return 0
00:11:33.460   16:55:26	-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:11:33.460   16:55:26	-- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd15
00:11:33.720    16:55:26	-- bdev/nbd_common.sh@55 -- # basename /dev/nbd15
00:11:33.720   16:55:26	-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd15
00:11:33.720   16:55:26	-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd15
00:11:33.720   16:55:26	-- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:11:33.720   16:55:26	-- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:11:33.720   16:55:26	-- bdev/nbd_common.sh@38 -- # grep -q -w nbd15 /proc/partitions
00:11:33.720   16:55:26	-- bdev/nbd_common.sh@41 -- # break
00:11:33.720   16:55:26	-- bdev/nbd_common.sh@45 -- # return 0
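The loop above tears down all 16 nbd devices: each nbd_stop_disk RPC is followed by waitfornbd_exit, which polls /proc/partitions (up to 20 times) until the kernel no longer lists the device. A minimal sketch of that stop-and-wait pattern, assuming the rpc.py path and socket seen in this trace:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-nbd.sock
    stop_and_wait() {
        local dev=$1 name i
        name=$(basename "$dev")
        "$rpc" -s "$sock" nbd_stop_disk "$dev"
        # Poll until the partition entry disappears; give up after 20 tries.
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$name" /proc/partitions || break
            sleep 0.1
        done
    }
    for dev in /dev/nbd{0..15}; do stop_and_wait "$dev"; done

In the trace the grep fails on the first iteration for every device, so the break fires immediately and each wait returns without sleeping.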
00:11:33.720    16:55:26	-- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:11:33.720    16:55:26	-- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:11:33.720     16:55:26	-- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:11:33.979    16:55:26	-- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:11:33.979     16:55:26	-- bdev/nbd_common.sh@64 -- # echo '[]'
00:11:33.979     16:55:26	-- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:11:33.979    16:55:26	-- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:11:33.979     16:55:26	-- bdev/nbd_common.sh@65 -- # echo ''
00:11:33.979     16:55:26	-- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:11:33.979     16:55:26	-- bdev/nbd_common.sh@65 -- # true
00:11:33.979    16:55:26	-- bdev/nbd_common.sh@65 -- # count=0
00:11:33.979    16:55:26	-- bdev/nbd_common.sh@66 -- # echo 0
00:11:33.979   16:55:26	-- bdev/nbd_common.sh@122 -- # count=0
00:11:33.979   16:55:26	-- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']'
00:11:33.979   16:55:26	-- bdev/nbd_common.sh@127 -- # return 0
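With everything stopped, the test asks the target how many nbd disks it still exports and insists on zero: nbd_get_disks returns '[]', jq extracts no device paths, and grep -c therefore counts 0 (the bare true in the trace absorbs grep's nonzero exit status on an empty match). A sketch of the same check, reusing the rpc/sock variables from the sketch above:

    json=$("$rpc" -s "$sock" nbd_get_disks)
    count=$(echo "$json" | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)
    if [ "$count" -ne 0 ]; then
        echo "leaked $count nbd device(s)" >&2
        exit 1
    fi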
00:11:33.979   16:55:26	-- bdev/blockdev.sh@321 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9'
00:11:33.979   16:55:26	-- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:11:33.979   16:55:26	-- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0')
00:11:33.979   16:55:26	-- bdev/nbd_common.sh@91 -- # local bdev_list
00:11:33.979   16:55:26	-- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9')
00:11:33.979   16:55:26	-- bdev/nbd_common.sh@92 -- # local nbd_list
00:11:33.979   16:55:26	-- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9'
00:11:33.979   16:55:26	-- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:11:33.979   16:55:26	-- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0')
00:11:33.979   16:55:26	-- bdev/nbd_common.sh@10 -- # local bdev_list
00:11:33.979   16:55:26	-- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9')
00:11:33.979   16:55:26	-- bdev/nbd_common.sh@11 -- # local nbd_list
00:11:33.979   16:55:26	-- bdev/nbd_common.sh@12 -- # local i
00:11:33.979   16:55:26	-- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:11:33.979   16:55:26	-- bdev/nbd_common.sh@14 -- # (( i < 16 ))
00:11:33.979   16:55:26	-- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:11:34.239  /dev/nbd0
00:11:34.498    16:55:27	-- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:11:34.498   16:55:27	-- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:11:34.498   16:55:27	-- common/autotest_common.sh@866 -- # local nbd_name=nbd0
00:11:34.498   16:55:27	-- common/autotest_common.sh@867 -- # local i
00:11:34.498   16:55:27	-- common/autotest_common.sh@869 -- # (( i = 1 ))
00:11:34.498   16:55:27	-- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:11:34.498   16:55:27	-- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions
00:11:34.498   16:55:27	-- common/autotest_common.sh@871 -- # break
00:11:34.498   16:55:27	-- common/autotest_common.sh@882 -- # (( i = 1 ))
00:11:34.498   16:55:27	-- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:11:34.498   16:55:27	-- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:11:34.498  1+0 records in
00:11:34.498  1+0 records out
00:11:34.498  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000481256 s, 8.5 MB/s
00:11:34.498    16:55:27	-- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:34.498   16:55:27	-- common/autotest_common.sh@884 -- # size=4096
00:11:34.498   16:55:27	-- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:34.498   16:55:27	-- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:11:34.498   16:55:27	-- common/autotest_common.sh@887 -- # return 0
00:11:34.498   16:55:27	-- bdev/nbd_common.sh@14 -- # (( i++ ))
00:11:34.498   16:55:27	-- bdev/nbd_common.sh@14 -- # (( i < 16 ))
00:11:34.498   16:55:27	-- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p0 /dev/nbd1
00:11:34.759  /dev/nbd1
00:11:34.759    16:55:27	-- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:11:34.759   16:55:27	-- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:11:34.759   16:55:27	-- common/autotest_common.sh@866 -- # local nbd_name=nbd1
00:11:34.759   16:55:27	-- common/autotest_common.sh@867 -- # local i
00:11:34.759   16:55:27	-- common/autotest_common.sh@869 -- # (( i = 1 ))
00:11:34.759   16:55:27	-- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:11:34.759   16:55:27	-- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions
00:11:34.759   16:55:27	-- common/autotest_common.sh@871 -- # break
00:11:34.759   16:55:27	-- common/autotest_common.sh@882 -- # (( i = 1 ))
00:11:34.759   16:55:27	-- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:11:34.759   16:55:27	-- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:11:34.759  1+0 records in
00:11:34.759  1+0 records out
00:11:34.759  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000547277 s, 7.5 MB/s
00:11:34.759    16:55:27	-- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:34.759   16:55:27	-- common/autotest_common.sh@884 -- # size=4096
00:11:34.759   16:55:27	-- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:34.759   16:55:27	-- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:11:34.759   16:55:27	-- common/autotest_common.sh@887 -- # return 0
00:11:34.759   16:55:27	-- bdev/nbd_common.sh@14 -- # (( i++ ))
00:11:34.759   16:55:27	-- bdev/nbd_common.sh@14 -- # (( i < 16 ))
00:11:34.759   16:55:27	-- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p1 /dev/nbd10
00:11:35.017  /dev/nbd10
00:11:35.017    16:55:27	-- bdev/nbd_common.sh@17 -- # basename /dev/nbd10
00:11:35.017   16:55:27	-- bdev/nbd_common.sh@17 -- # waitfornbd nbd10
00:11:35.018   16:55:27	-- common/autotest_common.sh@866 -- # local nbd_name=nbd10
00:11:35.018   16:55:27	-- common/autotest_common.sh@867 -- # local i
00:11:35.018   16:55:27	-- common/autotest_common.sh@869 -- # (( i = 1 ))
00:11:35.018   16:55:27	-- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:11:35.018   16:55:27	-- common/autotest_common.sh@870 -- # grep -q -w nbd10 /proc/partitions
00:11:35.018   16:55:27	-- common/autotest_common.sh@871 -- # break
00:11:35.018   16:55:27	-- common/autotest_common.sh@882 -- # (( i = 1 ))
00:11:35.018   16:55:27	-- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:11:35.018   16:55:27	-- common/autotest_common.sh@883 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:11:35.018  1+0 records in
00:11:35.018  1+0 records out
00:11:35.018  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000926823 s, 4.4 MB/s
00:11:35.018    16:55:27	-- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:35.018   16:55:27	-- common/autotest_common.sh@884 -- # size=4096
00:11:35.018   16:55:27	-- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:35.018   16:55:27	-- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:11:35.018   16:55:27	-- common/autotest_common.sh@887 -- # return 0
00:11:35.018   16:55:27	-- bdev/nbd_common.sh@14 -- # (( i++ ))
00:11:35.018   16:55:27	-- bdev/nbd_common.sh@14 -- # (( i < 16 ))
00:11:35.018   16:55:27	-- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p0 /dev/nbd11
00:11:35.277  /dev/nbd11
00:11:35.277    16:55:27	-- bdev/nbd_common.sh@17 -- # basename /dev/nbd11
00:11:35.277   16:55:27	-- bdev/nbd_common.sh@17 -- # waitfornbd nbd11
00:11:35.277   16:55:27	-- common/autotest_common.sh@866 -- # local nbd_name=nbd11
00:11:35.277   16:55:27	-- common/autotest_common.sh@867 -- # local i
00:11:35.277   16:55:27	-- common/autotest_common.sh@869 -- # (( i = 1 ))
00:11:35.277   16:55:27	-- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:11:35.277   16:55:27	-- common/autotest_common.sh@870 -- # grep -q -w nbd11 /proc/partitions
00:11:35.277   16:55:27	-- common/autotest_common.sh@871 -- # break
00:11:35.277   16:55:27	-- common/autotest_common.sh@882 -- # (( i = 1 ))
00:11:35.277   16:55:28	-- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:11:35.277   16:55:28	-- common/autotest_common.sh@883 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:11:35.277  1+0 records in
00:11:35.277  1+0 records out
00:11:35.277  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000770914 s, 5.3 MB/s
00:11:35.277    16:55:28	-- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:35.277   16:55:28	-- common/autotest_common.sh@884 -- # size=4096
00:11:35.277   16:55:28	-- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:35.277   16:55:28	-- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:11:35.277   16:55:28	-- common/autotest_common.sh@887 -- # return 0
00:11:35.277   16:55:28	-- bdev/nbd_common.sh@14 -- # (( i++ ))
00:11:35.277   16:55:28	-- bdev/nbd_common.sh@14 -- # (( i < 16 ))
00:11:35.277   16:55:28	-- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p1 /dev/nbd12
00:11:35.537  /dev/nbd12
00:11:35.537    16:55:28	-- bdev/nbd_common.sh@17 -- # basename /dev/nbd12
00:11:35.537   16:55:28	-- bdev/nbd_common.sh@17 -- # waitfornbd nbd12
00:11:35.537   16:55:28	-- common/autotest_common.sh@866 -- # local nbd_name=nbd12
00:11:35.537   16:55:28	-- common/autotest_common.sh@867 -- # local i
00:11:35.537   16:55:28	-- common/autotest_common.sh@869 -- # (( i = 1 ))
00:11:35.537   16:55:28	-- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:11:35.537   16:55:28	-- common/autotest_common.sh@870 -- # grep -q -w nbd12 /proc/partitions
00:11:35.537   16:55:28	-- common/autotest_common.sh@871 -- # break
00:11:35.537   16:55:28	-- common/autotest_common.sh@882 -- # (( i = 1 ))
00:11:35.537   16:55:28	-- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:11:35.537   16:55:28	-- common/autotest_common.sh@883 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:11:35.537  1+0 records in
00:11:35.537  1+0 records out
00:11:35.537  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000725582 s, 5.6 MB/s
00:11:35.537    16:55:28	-- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:35.537   16:55:28	-- common/autotest_common.sh@884 -- # size=4096
00:11:35.537   16:55:28	-- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:35.537   16:55:28	-- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:11:35.537   16:55:28	-- common/autotest_common.sh@887 -- # return 0
00:11:35.537   16:55:28	-- bdev/nbd_common.sh@14 -- # (( i++ ))
00:11:35.537   16:55:28	-- bdev/nbd_common.sh@14 -- # (( i < 16 ))
00:11:35.537   16:55:28	-- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p2 /dev/nbd13
00:11:35.796  /dev/nbd13
00:11:35.796    16:55:28	-- bdev/nbd_common.sh@17 -- # basename /dev/nbd13
00:11:35.796   16:55:28	-- bdev/nbd_common.sh@17 -- # waitfornbd nbd13
00:11:35.796   16:55:28	-- common/autotest_common.sh@866 -- # local nbd_name=nbd13
00:11:35.796   16:55:28	-- common/autotest_common.sh@867 -- # local i
00:11:35.796   16:55:28	-- common/autotest_common.sh@869 -- # (( i = 1 ))
00:11:35.796   16:55:28	-- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:11:35.796   16:55:28	-- common/autotest_common.sh@870 -- # grep -q -w nbd13 /proc/partitions
00:11:35.796   16:55:28	-- common/autotest_common.sh@871 -- # break
00:11:35.796   16:55:28	-- common/autotest_common.sh@882 -- # (( i = 1 ))
00:11:35.796   16:55:28	-- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:11:35.796   16:55:28	-- common/autotest_common.sh@883 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:11:35.796  1+0 records in
00:11:35.796  1+0 records out
00:11:35.796  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000608488 s, 6.7 MB/s
00:11:35.796    16:55:28	-- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:35.796   16:55:28	-- common/autotest_common.sh@884 -- # size=4096
00:11:35.796   16:55:28	-- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:35.796   16:55:28	-- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:11:35.796   16:55:28	-- common/autotest_common.sh@887 -- # return 0
00:11:35.796   16:55:28	-- bdev/nbd_common.sh@14 -- # (( i++ ))
00:11:35.796   16:55:28	-- bdev/nbd_common.sh@14 -- # (( i < 16 ))
00:11:35.796   16:55:28	-- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p3 /dev/nbd14
00:11:36.055  /dev/nbd14
00:11:36.055    16:55:28	-- bdev/nbd_common.sh@17 -- # basename /dev/nbd14
00:11:36.055   16:55:28	-- bdev/nbd_common.sh@17 -- # waitfornbd nbd14
00:11:36.055   16:55:28	-- common/autotest_common.sh@866 -- # local nbd_name=nbd14
00:11:36.055   16:55:28	-- common/autotest_common.sh@867 -- # local i
00:11:36.055   16:55:28	-- common/autotest_common.sh@869 -- # (( i = 1 ))
00:11:36.055   16:55:28	-- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:11:36.055   16:55:28	-- common/autotest_common.sh@870 -- # grep -q -w nbd14 /proc/partitions
00:11:36.055   16:55:28	-- common/autotest_common.sh@871 -- # break
00:11:36.055   16:55:28	-- common/autotest_common.sh@882 -- # (( i = 1 ))
00:11:36.055   16:55:28	-- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:11:36.055   16:55:28	-- common/autotest_common.sh@883 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:11:36.055  1+0 records in
00:11:36.055  1+0 records out
00:11:36.055  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000855408 s, 4.8 MB/s
00:11:36.055    16:55:28	-- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:36.055   16:55:28	-- common/autotest_common.sh@884 -- # size=4096
00:11:36.055   16:55:28	-- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:36.055   16:55:28	-- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:11:36.055   16:55:28	-- common/autotest_common.sh@887 -- # return 0
00:11:36.055   16:55:28	-- bdev/nbd_common.sh@14 -- # (( i++ ))
00:11:36.055   16:55:28	-- bdev/nbd_common.sh@14 -- # (( i < 16 ))
00:11:36.055   16:55:28	-- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p4 /dev/nbd15
00:11:36.341  /dev/nbd15
00:11:36.341    16:55:29	-- bdev/nbd_common.sh@17 -- # basename /dev/nbd15
00:11:36.600   16:55:29	-- bdev/nbd_common.sh@17 -- # waitfornbd nbd15
00:11:36.600   16:55:29	-- common/autotest_common.sh@866 -- # local nbd_name=nbd15
00:11:36.600   16:55:29	-- common/autotest_common.sh@867 -- # local i
00:11:36.600   16:55:29	-- common/autotest_common.sh@869 -- # (( i = 1 ))
00:11:36.600   16:55:29	-- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:11:36.600   16:55:29	-- common/autotest_common.sh@870 -- # grep -q -w nbd15 /proc/partitions
00:11:36.600   16:55:29	-- common/autotest_common.sh@871 -- # break
00:11:36.600   16:55:29	-- common/autotest_common.sh@882 -- # (( i = 1 ))
00:11:36.600   16:55:29	-- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:11:36.600   16:55:29	-- common/autotest_common.sh@883 -- # dd if=/dev/nbd15 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:11:36.600  1+0 records in
00:11:36.600  1+0 records out
00:11:36.600  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000765336 s, 5.4 MB/s
00:11:36.600    16:55:29	-- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:36.600   16:55:29	-- common/autotest_common.sh@884 -- # size=4096
00:11:36.600   16:55:29	-- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:36.600   16:55:29	-- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:11:36.600   16:55:29	-- common/autotest_common.sh@887 -- # return 0
00:11:36.600   16:55:29	-- bdev/nbd_common.sh@14 -- # (( i++ ))
00:11:36.600   16:55:29	-- bdev/nbd_common.sh@14 -- # (( i < 16 ))
00:11:36.600   16:55:29	-- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p5 /dev/nbd2
00:11:36.859  /dev/nbd2
00:11:36.859    16:55:29	-- bdev/nbd_common.sh@17 -- # basename /dev/nbd2
00:11:36.859   16:55:29	-- bdev/nbd_common.sh@17 -- # waitfornbd nbd2
00:11:36.859   16:55:29	-- common/autotest_common.sh@866 -- # local nbd_name=nbd2
00:11:36.859   16:55:29	-- common/autotest_common.sh@867 -- # local i
00:11:36.859   16:55:29	-- common/autotest_common.sh@869 -- # (( i = 1 ))
00:11:36.859   16:55:29	-- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:11:36.859   16:55:29	-- common/autotest_common.sh@870 -- # grep -q -w nbd2 /proc/partitions
00:11:36.859   16:55:29	-- common/autotest_common.sh@871 -- # break
00:11:36.859   16:55:29	-- common/autotest_common.sh@882 -- # (( i = 1 ))
00:11:36.859   16:55:29	-- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:11:36.859   16:55:29	-- common/autotest_common.sh@883 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:11:36.859  1+0 records in
00:11:36.859  1+0 records out
00:11:36.859  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0010104 s, 4.1 MB/s
00:11:36.859    16:55:29	-- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:36.859   16:55:29	-- common/autotest_common.sh@884 -- # size=4096
00:11:36.859   16:55:29	-- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:36.859   16:55:29	-- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:11:36.859   16:55:29	-- common/autotest_common.sh@887 -- # return 0
00:11:36.859   16:55:29	-- bdev/nbd_common.sh@14 -- # (( i++ ))
00:11:36.859   16:55:29	-- bdev/nbd_common.sh@14 -- # (( i < 16 ))
00:11:36.859   16:55:29	-- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p6 /dev/nbd3
00:11:36.859  /dev/nbd3
00:11:37.118    16:55:29	-- bdev/nbd_common.sh@17 -- # basename /dev/nbd3
00:11:37.118   16:55:29	-- bdev/nbd_common.sh@17 -- # waitfornbd nbd3
00:11:37.118   16:55:29	-- common/autotest_common.sh@866 -- # local nbd_name=nbd3
00:11:37.118   16:55:29	-- common/autotest_common.sh@867 -- # local i
00:11:37.118   16:55:29	-- common/autotest_common.sh@869 -- # (( i = 1 ))
00:11:37.118   16:55:29	-- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:11:37.118   16:55:29	-- common/autotest_common.sh@870 -- # grep -q -w nbd3 /proc/partitions
00:11:37.118   16:55:29	-- common/autotest_common.sh@871 -- # break
00:11:37.118   16:55:29	-- common/autotest_common.sh@882 -- # (( i = 1 ))
00:11:37.118   16:55:29	-- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:11:37.118   16:55:29	-- common/autotest_common.sh@883 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:11:37.118  1+0 records in
00:11:37.118  1+0 records out
00:11:37.118  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000516064 s, 7.9 MB/s
00:11:37.118    16:55:29	-- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:37.118   16:55:29	-- common/autotest_common.sh@884 -- # size=4096
00:11:37.118   16:55:29	-- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:37.118   16:55:29	-- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:11:37.118   16:55:29	-- common/autotest_common.sh@887 -- # return 0
00:11:37.118   16:55:29	-- bdev/nbd_common.sh@14 -- # (( i++ ))
00:11:37.118   16:55:29	-- bdev/nbd_common.sh@14 -- # (( i < 16 ))
00:11:37.118   16:55:29	-- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p7 /dev/nbd4
00:11:37.118  /dev/nbd4
00:11:37.118    16:55:29	-- bdev/nbd_common.sh@17 -- # basename /dev/nbd4
00:11:37.118   16:55:29	-- bdev/nbd_common.sh@17 -- # waitfornbd nbd4
00:11:37.118   16:55:29	-- common/autotest_common.sh@866 -- # local nbd_name=nbd4
00:11:37.118   16:55:29	-- common/autotest_common.sh@867 -- # local i
00:11:37.118   16:55:29	-- common/autotest_common.sh@869 -- # (( i = 1 ))
00:11:37.118   16:55:29	-- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:11:37.118   16:55:29	-- common/autotest_common.sh@870 -- # grep -q -w nbd4 /proc/partitions
00:11:37.118   16:55:29	-- common/autotest_common.sh@871 -- # break
00:11:37.118   16:55:29	-- common/autotest_common.sh@882 -- # (( i = 1 ))
00:11:37.118   16:55:29	-- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:11:37.118   16:55:29	-- common/autotest_common.sh@883 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:11:37.118  1+0 records in
00:11:37.118  1+0 records out
00:11:37.118  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00158618 s, 2.6 MB/s
00:11:37.118    16:55:29	-- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:37.118   16:55:29	-- common/autotest_common.sh@884 -- # size=4096
00:11:37.118   16:55:29	-- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:37.118   16:55:29	-- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:11:37.118   16:55:29	-- common/autotest_common.sh@887 -- # return 0
00:11:37.118   16:55:29	-- bdev/nbd_common.sh@14 -- # (( i++ ))
00:11:37.118   16:55:29	-- bdev/nbd_common.sh@14 -- # (( i < 16 ))
00:11:37.118   16:55:29	-- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk TestPT /dev/nbd5
00:11:37.687  /dev/nbd5
00:11:37.687    16:55:30	-- bdev/nbd_common.sh@17 -- # basename /dev/nbd5
00:11:37.687   16:55:30	-- bdev/nbd_common.sh@17 -- # waitfornbd nbd5
00:11:37.687   16:55:30	-- common/autotest_common.sh@866 -- # local nbd_name=nbd5
00:11:37.687   16:55:30	-- common/autotest_common.sh@867 -- # local i
00:11:37.687   16:55:30	-- common/autotest_common.sh@869 -- # (( i = 1 ))
00:11:37.687   16:55:30	-- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:11:37.687   16:55:30	-- common/autotest_common.sh@870 -- # grep -q -w nbd5 /proc/partitions
00:11:37.687   16:55:30	-- common/autotest_common.sh@871 -- # break
00:11:37.687   16:55:30	-- common/autotest_common.sh@882 -- # (( i = 1 ))
00:11:37.687   16:55:30	-- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:11:37.687   16:55:30	-- common/autotest_common.sh@883 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:11:37.687  1+0 records in
00:11:37.687  1+0 records out
00:11:37.687  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000946578 s, 4.3 MB/s
00:11:37.687    16:55:30	-- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:37.687   16:55:30	-- common/autotest_common.sh@884 -- # size=4096
00:11:37.687   16:55:30	-- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:37.687   16:55:30	-- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:11:37.687   16:55:30	-- common/autotest_common.sh@887 -- # return 0
00:11:37.687   16:55:30	-- bdev/nbd_common.sh@14 -- # (( i++ ))
00:11:37.687   16:55:30	-- bdev/nbd_common.sh@14 -- # (( i < 16 ))
00:11:37.687   16:55:30	-- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid0 /dev/nbd6
00:11:37.947  /dev/nbd6
00:11:37.947    16:55:30	-- bdev/nbd_common.sh@17 -- # basename /dev/nbd6
00:11:37.947   16:55:30	-- bdev/nbd_common.sh@17 -- # waitfornbd nbd6
00:11:37.947   16:55:30	-- common/autotest_common.sh@866 -- # local nbd_name=nbd6
00:11:37.947   16:55:30	-- common/autotest_common.sh@867 -- # local i
00:11:37.947   16:55:30	-- common/autotest_common.sh@869 -- # (( i = 1 ))
00:11:37.947   16:55:30	-- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:11:37.947   16:55:30	-- common/autotest_common.sh@870 -- # grep -q -w nbd6 /proc/partitions
00:11:37.947   16:55:30	-- common/autotest_common.sh@871 -- # break
00:11:37.947   16:55:30	-- common/autotest_common.sh@882 -- # (( i = 1 ))
00:11:37.947   16:55:30	-- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:11:37.947   16:55:30	-- common/autotest_common.sh@883 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:11:37.947  1+0 records in
00:11:37.947  1+0 records out
00:11:37.947  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000539529 s, 7.6 MB/s
00:11:37.947    16:55:30	-- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:37.947   16:55:30	-- common/autotest_common.sh@884 -- # size=4096
00:11:37.947   16:55:30	-- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:37.947   16:55:30	-- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:11:37.947   16:55:30	-- common/autotest_common.sh@887 -- # return 0
00:11:37.947   16:55:30	-- bdev/nbd_common.sh@14 -- # (( i++ ))
00:11:37.947   16:55:30	-- bdev/nbd_common.sh@14 -- # (( i < 16 ))
00:11:37.947   16:55:30	-- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk concat0 /dev/nbd7
00:11:37.947  /dev/nbd7
00:11:38.206    16:55:30	-- bdev/nbd_common.sh@17 -- # basename /dev/nbd7
00:11:38.206   16:55:30	-- bdev/nbd_common.sh@17 -- # waitfornbd nbd7
00:11:38.206   16:55:30	-- common/autotest_common.sh@866 -- # local nbd_name=nbd7
00:11:38.206   16:55:30	-- common/autotest_common.sh@867 -- # local i
00:11:38.206   16:55:30	-- common/autotest_common.sh@869 -- # (( i = 1 ))
00:11:38.206   16:55:30	-- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:11:38.206   16:55:30	-- common/autotest_common.sh@870 -- # grep -q -w nbd7 /proc/partitions
00:11:38.206   16:55:30	-- common/autotest_common.sh@871 -- # break
00:11:38.206   16:55:30	-- common/autotest_common.sh@882 -- # (( i = 1 ))
00:11:38.206   16:55:30	-- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:11:38.206   16:55:30	-- common/autotest_common.sh@883 -- # dd if=/dev/nbd7 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:11:38.206  1+0 records in
00:11:38.206  1+0 records out
00:11:38.206  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000495924 s, 8.3 MB/s
00:11:38.206    16:55:30	-- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:38.206   16:55:30	-- common/autotest_common.sh@884 -- # size=4096
00:11:38.206   16:55:30	-- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:38.206   16:55:30	-- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:11:38.206   16:55:30	-- common/autotest_common.sh@887 -- # return 0
00:11:38.206   16:55:30	-- bdev/nbd_common.sh@14 -- # (( i++ ))
00:11:38.206   16:55:30	-- bdev/nbd_common.sh@14 -- # (( i < 16 ))
00:11:38.206   16:55:30	-- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid1 /dev/nbd8
00:11:38.206  /dev/nbd8
00:11:38.206    16:55:31	-- bdev/nbd_common.sh@17 -- # basename /dev/nbd8
00:11:38.206   16:55:31	-- bdev/nbd_common.sh@17 -- # waitfornbd nbd8
00:11:38.206   16:55:31	-- common/autotest_common.sh@866 -- # local nbd_name=nbd8
00:11:38.206   16:55:31	-- common/autotest_common.sh@867 -- # local i
00:11:38.206   16:55:31	-- common/autotest_common.sh@869 -- # (( i = 1 ))
00:11:38.206   16:55:31	-- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:11:38.206   16:55:31	-- common/autotest_common.sh@870 -- # grep -q -w nbd8 /proc/partitions
00:11:38.206   16:55:31	-- common/autotest_common.sh@871 -- # break
00:11:38.206   16:55:31	-- common/autotest_common.sh@882 -- # (( i = 1 ))
00:11:38.206   16:55:31	-- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:11:38.206   16:55:31	-- common/autotest_common.sh@883 -- # dd if=/dev/nbd8 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:11:38.206  1+0 records in
00:11:38.206  1+0 records out
00:11:38.206  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0014279 s, 2.9 MB/s
00:11:38.206    16:55:31	-- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:38.206   16:55:31	-- common/autotest_common.sh@884 -- # size=4096
00:11:38.206   16:55:31	-- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:38.206   16:55:31	-- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:11:38.206   16:55:31	-- common/autotest_common.sh@887 -- # return 0
00:11:38.206   16:55:31	-- bdev/nbd_common.sh@14 -- # (( i++ ))
00:11:38.206   16:55:31	-- bdev/nbd_common.sh@14 -- # (( i < 16 ))
00:11:38.206   16:55:31	-- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk AIO0 /dev/nbd9
00:11:38.465  /dev/nbd9
00:11:38.725    16:55:31	-- bdev/nbd_common.sh@17 -- # basename /dev/nbd9
00:11:38.725   16:55:31	-- bdev/nbd_common.sh@17 -- # waitfornbd nbd9
00:11:38.725   16:55:31	-- common/autotest_common.sh@866 -- # local nbd_name=nbd9
00:11:38.725   16:55:31	-- common/autotest_common.sh@867 -- # local i
00:11:38.725   16:55:31	-- common/autotest_common.sh@869 -- # (( i = 1 ))
00:11:38.725   16:55:31	-- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:11:38.725   16:55:31	-- common/autotest_common.sh@870 -- # grep -q -w nbd9 /proc/partitions
00:11:38.725   16:55:31	-- common/autotest_common.sh@871 -- # break
00:11:38.725   16:55:31	-- common/autotest_common.sh@882 -- # (( i = 1 ))
00:11:38.725   16:55:31	-- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:11:38.725   16:55:31	-- common/autotest_common.sh@883 -- # dd if=/dev/nbd9 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:11:38.725  1+0 records in
00:11:38.725  1+0 records out
00:11:38.725  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00162425 s, 2.5 MB/s
00:11:38.725    16:55:31	-- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:38.725   16:55:31	-- common/autotest_common.sh@884 -- # size=4096
00:11:38.725   16:55:31	-- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:38.725   16:55:31	-- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:11:38.725   16:55:31	-- common/autotest_common.sh@887 -- # return 0
00:11:38.725   16:55:31	-- bdev/nbd_common.sh@14 -- # (( i++ ))
00:11:38.725   16:55:31	-- bdev/nbd_common.sh@14 -- # (( i < 16 ))
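nbd_rpc_data_verify then rebuilds the whole set: nbd_start_disk attaches each bdev to a device node, and waitfornbd applies the inverse readiness check, polling /proc/partitions until the device appears and then proving it answers I/O with a single 4 KiB O_DIRECT read (the 1+0 records in/out lines above). A sketch of that wait, with the nbdtest scratch path taken from the trace:

    wait_ready() {
        local name=$1 i size
        local tmp=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
        # First wait for the kernel to publish /dev/<name>.
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$name" /proc/partitions && break
            sleep 0.1
        done
        # Then require one direct 4 KiB read to succeed and land on disk.
        dd if=/dev/"$name" of="$tmp" bs=4096 count=1 iflag=direct || return 1
        size=$(stat -c %s "$tmp")
        rm -f "$tmp"
        [ "$size" != 0 ]
    }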
00:11:38.725    16:55:31	-- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:11:38.725    16:55:31	-- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:11:38.725     16:55:31	-- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:11:38.984    16:55:31	-- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:11:38.984    {
00:11:38.984      "nbd_device": "/dev/nbd0",
00:11:38.984      "bdev_name": "Malloc0"
00:11:38.984    },
00:11:38.984    {
00:11:38.984      "nbd_device": "/dev/nbd1",
00:11:38.984      "bdev_name": "Malloc1p0"
00:11:38.984    },
00:11:38.984    {
00:11:38.984      "nbd_device": "/dev/nbd10",
00:11:38.984      "bdev_name": "Malloc1p1"
00:11:38.984    },
00:11:38.984    {
00:11:38.984      "nbd_device": "/dev/nbd11",
00:11:38.984      "bdev_name": "Malloc2p0"
00:11:38.984    },
00:11:38.984    {
00:11:38.984      "nbd_device": "/dev/nbd12",
00:11:38.984      "bdev_name": "Malloc2p1"
00:11:38.984    },
00:11:38.984    {
00:11:38.984      "nbd_device": "/dev/nbd13",
00:11:38.984      "bdev_name": "Malloc2p2"
00:11:38.984    },
00:11:38.984    {
00:11:38.984      "nbd_device": "/dev/nbd14",
00:11:38.984      "bdev_name": "Malloc2p3"
00:11:38.984    },
00:11:38.984    {
00:11:38.984      "nbd_device": "/dev/nbd15",
00:11:38.984      "bdev_name": "Malloc2p4"
00:11:38.984    },
00:11:38.984    {
00:11:38.984      "nbd_device": "/dev/nbd2",
00:11:38.984      "bdev_name": "Malloc2p5"
00:11:38.984    },
00:11:38.984    {
00:11:38.984      "nbd_device": "/dev/nbd3",
00:11:38.984      "bdev_name": "Malloc2p6"
00:11:38.984    },
00:11:38.984    {
00:11:38.984      "nbd_device": "/dev/nbd4",
00:11:38.984      "bdev_name": "Malloc2p7"
00:11:38.984    },
00:11:38.984    {
00:11:38.984      "nbd_device": "/dev/nbd5",
00:11:38.984      "bdev_name": "TestPT"
00:11:38.984    },
00:11:38.984    {
00:11:38.984      "nbd_device": "/dev/nbd6",
00:11:38.984      "bdev_name": "raid0"
00:11:38.984    },
00:11:38.984    {
00:11:38.984      "nbd_device": "/dev/nbd7",
00:11:38.984      "bdev_name": "concat0"
00:11:38.984    },
00:11:38.984    {
00:11:38.984      "nbd_device": "/dev/nbd8",
00:11:38.984      "bdev_name": "raid1"
00:11:38.984    },
00:11:38.984    {
00:11:38.984      "nbd_device": "/dev/nbd9",
00:11:38.984      "bdev_name": "AIO0"
00:11:38.984    }
00:11:38.984  ]'
00:11:38.984     16:55:31	-- bdev/nbd_common.sh@64 -- # echo '[
00:11:38.984    {
00:11:38.984      "nbd_device": "/dev/nbd0",
00:11:38.984      "bdev_name": "Malloc0"
00:11:38.984    },
00:11:38.984    {
00:11:38.984      "nbd_device": "/dev/nbd1",
00:11:38.984      "bdev_name": "Malloc1p0"
00:11:38.984    },
00:11:38.984    {
00:11:38.984      "nbd_device": "/dev/nbd10",
00:11:38.984      "bdev_name": "Malloc1p1"
00:11:38.984    },
00:11:38.984    {
00:11:38.984      "nbd_device": "/dev/nbd11",
00:11:38.984      "bdev_name": "Malloc2p0"
00:11:38.984    },
00:11:38.984    {
00:11:38.984      "nbd_device": "/dev/nbd12",
00:11:38.984      "bdev_name": "Malloc2p1"
00:11:38.984    },
00:11:38.984    {
00:11:38.984      "nbd_device": "/dev/nbd13",
00:11:38.984      "bdev_name": "Malloc2p2"
00:11:38.984    },
00:11:38.984    {
00:11:38.984      "nbd_device": "/dev/nbd14",
00:11:38.984      "bdev_name": "Malloc2p3"
00:11:38.984    },
00:11:38.984    {
00:11:38.984      "nbd_device": "/dev/nbd15",
00:11:38.984      "bdev_name": "Malloc2p4"
00:11:38.984    },
00:11:38.984    {
00:11:38.984      "nbd_device": "/dev/nbd2",
00:11:38.984      "bdev_name": "Malloc2p5"
00:11:38.984    },
00:11:38.984    {
00:11:38.984      "nbd_device": "/dev/nbd3",
00:11:38.984      "bdev_name": "Malloc2p6"
00:11:38.984    },
00:11:38.984    {
00:11:38.985      "nbd_device": "/dev/nbd4",
00:11:38.985      "bdev_name": "Malloc2p7"
00:11:38.985    },
00:11:38.985    {
00:11:38.985      "nbd_device": "/dev/nbd5",
00:11:38.985      "bdev_name": "TestPT"
00:11:38.985    },
00:11:38.985    {
00:11:38.985      "nbd_device": "/dev/nbd6",
00:11:38.985      "bdev_name": "raid0"
00:11:38.985    },
00:11:38.985    {
00:11:38.985      "nbd_device": "/dev/nbd7",
00:11:38.985      "bdev_name": "concat0"
00:11:38.985    },
00:11:38.985    {
00:11:38.985      "nbd_device": "/dev/nbd8",
00:11:38.985      "bdev_name": "raid1"
00:11:38.985    },
00:11:38.985    {
00:11:38.985      "nbd_device": "/dev/nbd9",
00:11:38.985      "bdev_name": "AIO0"
00:11:38.985    }
00:11:38.985  ]'
00:11:38.985     16:55:31	-- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:11:38.985    16:55:31	-- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:11:38.985  /dev/nbd1
00:11:38.985  /dev/nbd10
00:11:38.985  /dev/nbd11
00:11:38.985  /dev/nbd12
00:11:38.985  /dev/nbd13
00:11:38.985  /dev/nbd14
00:11:38.985  /dev/nbd15
00:11:38.985  /dev/nbd2
00:11:38.985  /dev/nbd3
00:11:38.985  /dev/nbd4
00:11:38.985  /dev/nbd5
00:11:38.985  /dev/nbd6
00:11:38.985  /dev/nbd7
00:11:38.985  /dev/nbd8
00:11:38.985  /dev/nbd9'
00:11:38.985     16:55:31	-- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:11:38.985  /dev/nbd1
00:11:38.985  /dev/nbd10
00:11:38.985  /dev/nbd11
00:11:38.985  /dev/nbd12
00:11:38.985  /dev/nbd13
00:11:38.985  /dev/nbd14
00:11:38.985  /dev/nbd15
00:11:38.985  /dev/nbd2
00:11:38.985  /dev/nbd3
00:11:38.985  /dev/nbd4
00:11:38.985  /dev/nbd5
00:11:38.985  /dev/nbd6
00:11:38.985  /dev/nbd7
00:11:38.985  /dev/nbd8
00:11:38.985  /dev/nbd9'
00:11:38.985     16:55:31	-- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:11:38.985    16:55:31	-- bdev/nbd_common.sh@65 -- # count=16
00:11:38.985    16:55:31	-- bdev/nbd_common.sh@66 -- # echo 16
00:11:38.985   16:55:31	-- bdev/nbd_common.sh@95 -- # count=16
00:11:38.985   16:55:31	-- bdev/nbd_common.sh@96 -- # '[' 16 -ne 16 ']'
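The count check above closes the loop on the start phase: nbd_get_disks reports one {nbd_device, bdev_name} pair per active export, jq strips the list down to device paths, and grep -c confirms all 16 are present before any data flows. A sketch of the comparison, same rpc/sock assumptions as above:

    expected=16
    count=$("$rpc" -s "$sock" nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)
    [ "$count" -eq "$expected" ] || { echo "have $count of $expected nbd devices" >&2; exit 1; }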
00:11:38.985   16:55:31	-- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' write
00:11:38.985   16:55:31	-- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9')
00:11:38.985   16:55:31	-- bdev/nbd_common.sh@70 -- # local nbd_list
00:11:38.985   16:55:31	-- bdev/nbd_common.sh@71 -- # local operation=write
00:11:38.985   16:55:31	-- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
00:11:38.985   16:55:31	-- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:11:38.985   16:55:31	-- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256
00:11:38.985  256+0 records in
00:11:38.985  256+0 records out
00:11:38.985  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00868162 s, 121 MB/s
00:11:38.985   16:55:31	-- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:11:38.985   16:55:31	-- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:11:39.244  256+0 records in
00:11:39.244  256+0 records out
00:11:39.244  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.183351 s, 5.7 MB/s
00:11:39.244   16:55:31	-- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:11:39.244   16:55:31	-- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:11:39.504  256+0 records in
00:11:39.504  256+0 records out
00:11:39.504  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.184547 s, 5.7 MB/s
00:11:39.504   16:55:32	-- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:11:39.504   16:55:32	-- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct
00:11:39.504  256+0 records in
00:11:39.504  256+0 records out
00:11:39.504  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.189482 s, 5.5 MB/s
00:11:39.504   16:55:32	-- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:11:39.504   16:55:32	-- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct
00:11:39.762  256+0 records in
00:11:39.762  256+0 records out
00:11:39.762  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.188832 s, 5.6 MB/s
00:11:39.762   16:55:32	-- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:11:39.762   16:55:32	-- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct
00:11:40.022  256+0 records in
00:11:40.022  256+0 records out
00:11:40.022  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.187562 s, 5.6 MB/s
00:11:40.022   16:55:32	-- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:11:40.022   16:55:32	-- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct
00:11:40.281  256+0 records in
00:11:40.281  256+0 records out
00:11:40.281  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.191364 s, 5.5 MB/s
00:11:40.281   16:55:32	-- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:11:40.281   16:55:32	-- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct
00:11:40.281  256+0 records in
00:11:40.281  256+0 records out
00:11:40.281  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.191879 s, 5.5 MB/s
00:11:40.281   16:55:33	-- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:11:40.281   16:55:33	-- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd15 bs=4096 count=256 oflag=direct
00:11:40.540  256+0 records in
00:11:40.540  256+0 records out
00:11:40.540  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.19185 s, 5.5 MB/s
00:11:40.540   16:55:33	-- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:11:40.540   16:55:33	-- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd2 bs=4096 count=256 oflag=direct
00:11:40.799  256+0 records in
00:11:40.799  256+0 records out
00:11:40.799  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.191947 s, 5.5 MB/s
00:11:40.799   16:55:33	-- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:11:40.799   16:55:33	-- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd3 bs=4096 count=256 oflag=direct
00:11:41.059  256+0 records in
00:11:41.059  256+0 records out
00:11:41.059  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.192758 s, 5.4 MB/s
00:11:41.059   16:55:33	-- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:11:41.059   16:55:33	-- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd4 bs=4096 count=256 oflag=direct
00:11:41.059  256+0 records in
00:11:41.059  256+0 records out
00:11:41.059  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.191698 s, 5.5 MB/s
00:11:41.059   16:55:33	-- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:11:41.059   16:55:33	-- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd5 bs=4096 count=256 oflag=direct
00:11:41.318  256+0 records in
00:11:41.318  256+0 records out
00:11:41.318  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.191387 s, 5.5 MB/s
00:11:41.318   16:55:34	-- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:11:41.318   16:55:34	-- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd6 bs=4096 count=256 oflag=direct
00:11:41.577  256+0 records in
00:11:41.577  256+0 records out
00:11:41.577  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.194499 s, 5.4 MB/s
00:11:41.577   16:55:34	-- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:11:41.577   16:55:34	-- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd7 bs=4096 count=256 oflag=direct
00:11:41.837  256+0 records in
00:11:41.837  256+0 records out
00:11:41.837  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.166758 s, 6.3 MB/s
00:11:41.837   16:55:34	-- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:11:41.837   16:55:34	-- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd8 bs=4096 count=256 oflag=direct
00:11:41.837  256+0 records in
00:11:41.837  256+0 records out
00:11:41.837  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.162314 s, 6.5 MB/s
00:11:41.837   16:55:34	-- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:11:41.837   16:55:34	-- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd9 bs=4096 count=256 oflag=direct
00:11:42.096  256+0 records in
00:11:42.096  256+0 records out
00:11:42.096  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.19154 s, 5.5 MB/s
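
The dd loop above writes the same 1 MiB random-data file to each of the sixteen NBD devices in nbd_list. A minimal sketch of that write pass, reconstructed from the xtrace (tmp_file and the dd arguments are taken from the trace; the loop shape follows bdev/nbd_common.sh@77-78 but is an assumption beyond what the trace shows):

  # Hedged reconstruction of the write pass traced above.
  tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
  for i in "${nbd_list[@]}"; do
      # oflag=direct bypasses the page cache, so the 1 MiB of random data
      # really lands on the NBD device before the verify pass runs.
      dd if="$tmp_file" of="$i" bs=4096 count=256 oflag=direct
  done
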
00:11:42.096   16:55:34	-- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' verify
00:11:42.096   16:55:34	-- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9')
00:11:42.096   16:55:34	-- bdev/nbd_common.sh@70 -- # local nbd_list
00:11:42.096   16:55:34	-- bdev/nbd_common.sh@71 -- # local operation=verify
00:11:42.096   16:55:34	-- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
00:11:42.096   16:55:34	-- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:11:42.096   16:55:34	-- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:11:42.096   16:55:34	-- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:11:42.096   16:55:34	-- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0
00:11:42.096   16:55:34	-- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:11:42.096   16:55:34	-- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1
00:11:42.096   16:55:34	-- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:11:42.096   16:55:34	-- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10
00:11:42.096   16:55:34	-- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:11:42.096   16:55:34	-- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11
00:11:42.096   16:55:34	-- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:11:42.096   16:55:34	-- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12
00:11:42.096   16:55:34	-- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:11:42.096   16:55:34	-- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13
00:11:42.096   16:55:34	-- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:11:42.096   16:55:34	-- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14
00:11:42.096   16:55:34	-- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:11:42.096   16:55:34	-- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd15
00:11:42.096   16:55:34	-- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:11:42.096   16:55:34	-- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd2
00:11:42.096   16:55:34	-- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:11:42.096   16:55:34	-- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd3
00:11:42.096   16:55:34	-- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:11:42.096   16:55:34	-- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd4
00:11:42.096   16:55:34	-- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:11:42.096   16:55:34	-- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd5
00:11:42.096   16:55:34	-- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:11:42.096   16:55:34	-- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd6
00:11:42.355   16:55:34	-- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:11:42.355   16:55:34	-- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd7
00:11:42.355   16:55:34	-- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:11:42.355   16:55:34	-- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd8
00:11:42.355   16:55:34	-- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:11:42.355   16:55:34	-- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd9
00:11:42.355   16:55:34	-- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
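
The verify pass mirrors the write pass: every device is compared byte-for-byte against the random-data file, and the file is removed afterwards. A sketch under the same assumptions as the write-pass sketch:

  # Hedged reconstruction of the verify branch (nbd_common.sh@80-85).
  for i in "${nbd_list[@]}"; do
      # -b prints differing bytes, -n 1M limits the compare to the region
      # the write pass populated; cmp exits non-zero on any mismatch.
      cmp -b -n 1M "$tmp_file" "$i"
  done
  rm "$tmp_file"
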
00:11:42.355   16:55:34	-- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9'
00:11:42.355   16:55:34	-- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:11:42.355   16:55:34	-- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9')
00:11:42.355   16:55:34	-- bdev/nbd_common.sh@50 -- # local nbd_list
00:11:42.355   16:55:34	-- bdev/nbd_common.sh@51 -- # local i
00:11:42.355   16:55:34	-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:11:42.355   16:55:34	-- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:11:42.614    16:55:35	-- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:11:42.614   16:55:35	-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:11:42.614   16:55:35	-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:11:42.614   16:55:35	-- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:11:42.614   16:55:35	-- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:11:42.614   16:55:35	-- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:11:42.614   16:55:35	-- bdev/nbd_common.sh@41 -- # break
00:11:42.614   16:55:35	-- bdev/nbd_common.sh@45 -- # return 0
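
Each nbd_stop_disk RPC is followed by waitfornbd_exit, whose polling loop the trace exposes at nbd_common.sh@35-45: check /proc/partitions up to 20 times until the device entry disappears. A hedged reconstruction (only the structure is visible in the trace; the retry interval is an assumption):

  waitfornbd_exit() {
      local nbd_name=$1
      for (( i = 1; i <= 20; i++ )); do
          if grep -q -w "$nbd_name" /proc/partitions; then
              sleep 0.1    # assumed interval; the trace only shows the loop
          else
              break        # the "@41 break" above: the device entry is gone
          fi
      done
      return 0
  }
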
00:11:42.614   16:55:35	-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:11:42.614   16:55:35	-- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:11:42.873    16:55:35	-- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:11:42.873   16:55:35	-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:11:42.873   16:55:35	-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:11:42.873   16:55:35	-- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:11:42.873   16:55:35	-- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:11:42.873   16:55:35	-- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:11:42.873   16:55:35	-- bdev/nbd_common.sh@41 -- # break
00:11:42.873   16:55:35	-- bdev/nbd_common.sh@45 -- # return 0
00:11:42.873   16:55:35	-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:11:42.873   16:55:35	-- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10
00:11:43.132    16:55:35	-- bdev/nbd_common.sh@55 -- # basename /dev/nbd10
00:11:43.132   16:55:35	-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10
00:11:43.132   16:55:35	-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10
00:11:43.132   16:55:35	-- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:11:43.132   16:55:35	-- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:11:43.132   16:55:35	-- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions
00:11:43.132   16:55:35	-- bdev/nbd_common.sh@41 -- # break
00:11:43.132   16:55:35	-- bdev/nbd_common.sh@45 -- # return 0
00:11:43.132   16:55:35	-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:11:43.132   16:55:35	-- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11
00:11:43.391    16:55:36	-- bdev/nbd_common.sh@55 -- # basename /dev/nbd11
00:11:43.391   16:55:36	-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11
00:11:43.391   16:55:36	-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11
00:11:43.391   16:55:36	-- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:11:43.391   16:55:36	-- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:11:43.391   16:55:36	-- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions
00:11:43.391   16:55:36	-- bdev/nbd_common.sh@41 -- # break
00:11:43.391   16:55:36	-- bdev/nbd_common.sh@45 -- # return 0
00:11:43.391   16:55:36	-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:11:43.391   16:55:36	-- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12
00:11:43.650    16:55:36	-- bdev/nbd_common.sh@55 -- # basename /dev/nbd12
00:11:43.650   16:55:36	-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12
00:11:43.650   16:55:36	-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12
00:11:43.650   16:55:36	-- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:11:43.650   16:55:36	-- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:11:43.650   16:55:36	-- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions
00:11:43.650   16:55:36	-- bdev/nbd_common.sh@41 -- # break
00:11:43.650   16:55:36	-- bdev/nbd_common.sh@45 -- # return 0
00:11:43.650   16:55:36	-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:11:43.650   16:55:36	-- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13
00:11:43.909    16:55:36	-- bdev/nbd_common.sh@55 -- # basename /dev/nbd13
00:11:43.909   16:55:36	-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13
00:11:43.909   16:55:36	-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13
00:11:43.909   16:55:36	-- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:11:43.909   16:55:36	-- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:11:43.909   16:55:36	-- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions
00:11:43.909   16:55:36	-- bdev/nbd_common.sh@41 -- # break
00:11:43.909   16:55:36	-- bdev/nbd_common.sh@45 -- # return 0
00:11:43.909   16:55:36	-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:11:43.909   16:55:36	-- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14
00:11:44.168    16:55:36	-- bdev/nbd_common.sh@55 -- # basename /dev/nbd14
00:11:44.168   16:55:36	-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14
00:11:44.168   16:55:36	-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14
00:11:44.168   16:55:36	-- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:11:44.168   16:55:36	-- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:11:44.168   16:55:36	-- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions
00:11:44.168   16:55:36	-- bdev/nbd_common.sh@41 -- # break
00:11:44.168   16:55:36	-- bdev/nbd_common.sh@45 -- # return 0
00:11:44.168   16:55:36	-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:11:44.168   16:55:36	-- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd15
00:11:44.427    16:55:37	-- bdev/nbd_common.sh@55 -- # basename /dev/nbd15
00:11:44.427   16:55:37	-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd15
00:11:44.427   16:55:37	-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd15
00:11:44.427   16:55:37	-- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:11:44.427   16:55:37	-- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:11:44.427   16:55:37	-- bdev/nbd_common.sh@38 -- # grep -q -w nbd15 /proc/partitions
00:11:44.427   16:55:37	-- bdev/nbd_common.sh@41 -- # break
00:11:44.427   16:55:37	-- bdev/nbd_common.sh@45 -- # return 0
00:11:44.427   16:55:37	-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:11:44.427   16:55:37	-- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2
00:11:44.688    16:55:37	-- bdev/nbd_common.sh@55 -- # basename /dev/nbd2
00:11:44.688   16:55:37	-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2
00:11:44.688   16:55:37	-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2
00:11:44.688   16:55:37	-- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:11:44.688   16:55:37	-- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:11:44.688   16:55:37	-- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions
00:11:44.688   16:55:37	-- bdev/nbd_common.sh@41 -- # break
00:11:44.688   16:55:37	-- bdev/nbd_common.sh@45 -- # return 0
00:11:44.688   16:55:37	-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:11:44.688   16:55:37	-- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3
00:11:44.688    16:55:37	-- bdev/nbd_common.sh@55 -- # basename /dev/nbd3
00:11:44.949   16:55:37	-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3
00:11:44.949   16:55:37	-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3
00:11:44.949   16:55:37	-- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:11:44.949   16:55:37	-- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:11:44.949   16:55:37	-- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions
00:11:44.949   16:55:37	-- bdev/nbd_common.sh@41 -- # break
00:11:44.949   16:55:37	-- bdev/nbd_common.sh@45 -- # return 0
00:11:44.949   16:55:37	-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:11:44.949   16:55:37	-- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4
00:11:44.949    16:55:37	-- bdev/nbd_common.sh@55 -- # basename /dev/nbd4
00:11:44.949   16:55:37	-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4
00:11:44.949   16:55:37	-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4
00:11:44.949   16:55:37	-- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:11:44.949   16:55:37	-- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:11:44.949   16:55:37	-- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions
00:11:44.949   16:55:37	-- bdev/nbd_common.sh@41 -- # break
00:11:44.949   16:55:37	-- bdev/nbd_common.sh@45 -- # return 0
00:11:44.949   16:55:37	-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:11:44.949   16:55:37	-- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5
00:11:45.207    16:55:37	-- bdev/nbd_common.sh@55 -- # basename /dev/nbd5
00:11:45.207   16:55:37	-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5
00:11:45.207   16:55:37	-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5
00:11:45.207   16:55:37	-- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:11:45.207   16:55:37	-- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:11:45.207   16:55:37	-- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions
00:11:45.207   16:55:37	-- bdev/nbd_common.sh@41 -- # break
00:11:45.207   16:55:37	-- bdev/nbd_common.sh@45 -- # return 0
00:11:45.207   16:55:37	-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:11:45.207   16:55:37	-- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6
00:11:45.466    16:55:38	-- bdev/nbd_common.sh@55 -- # basename /dev/nbd6
00:11:45.466   16:55:38	-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6
00:11:45.466   16:55:38	-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6
00:11:45.466   16:55:38	-- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:11:45.466   16:55:38	-- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:11:45.466   16:55:38	-- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions
00:11:45.466   16:55:38	-- bdev/nbd_common.sh@41 -- # break
00:11:45.466   16:55:38	-- bdev/nbd_common.sh@45 -- # return 0
00:11:45.466   16:55:38	-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:11:45.466   16:55:38	-- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd7
00:11:45.725    16:55:38	-- bdev/nbd_common.sh@55 -- # basename /dev/nbd7
00:11:45.725   16:55:38	-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd7
00:11:45.725   16:55:38	-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd7
00:11:45.725   16:55:38	-- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:11:45.725   16:55:38	-- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:11:45.725   16:55:38	-- bdev/nbd_common.sh@38 -- # grep -q -w nbd7 /proc/partitions
00:11:45.725   16:55:38	-- bdev/nbd_common.sh@41 -- # break
00:11:45.725   16:55:38	-- bdev/nbd_common.sh@45 -- # return 0
00:11:45.725   16:55:38	-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:11:45.725   16:55:38	-- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd8
00:11:45.984    16:55:38	-- bdev/nbd_common.sh@55 -- # basename /dev/nbd8
00:11:45.984   16:55:38	-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd8
00:11:45.984   16:55:38	-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd8
00:11:45.984   16:55:38	-- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:11:45.984   16:55:38	-- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:11:45.984   16:55:38	-- bdev/nbd_common.sh@38 -- # grep -q -w nbd8 /proc/partitions
00:11:45.984   16:55:38	-- bdev/nbd_common.sh@41 -- # break
00:11:45.984   16:55:38	-- bdev/nbd_common.sh@45 -- # return 0
00:11:45.984   16:55:38	-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:11:45.984   16:55:38	-- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd9
00:11:46.243    16:55:39	-- bdev/nbd_common.sh@55 -- # basename /dev/nbd9
00:11:46.243   16:55:39	-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd9
00:11:46.244   16:55:39	-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd9
00:11:46.244   16:55:39	-- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:11:46.244   16:55:39	-- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:11:46.244   16:55:39	-- bdev/nbd_common.sh@38 -- # grep -q -w nbd9 /proc/partitions
00:11:46.244   16:55:39	-- bdev/nbd_common.sh@41 -- # break
00:11:46.244   16:55:39	-- bdev/nbd_common.sh@45 -- # return 0
00:11:46.244    16:55:39	-- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:11:46.244    16:55:39	-- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:11:46.244     16:55:39	-- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:11:46.502    16:55:39	-- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:11:46.502     16:55:39	-- bdev/nbd_common.sh@64 -- # echo '[]'
00:11:46.502     16:55:39	-- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:11:46.502    16:55:39	-- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:11:46.502     16:55:39	-- bdev/nbd_common.sh@65 -- # echo ''
00:11:46.502     16:55:39	-- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:11:46.502     16:55:39	-- bdev/nbd_common.sh@65 -- # true
00:11:46.502    16:55:39	-- bdev/nbd_common.sh@65 -- # count=0
00:11:46.502    16:55:39	-- bdev/nbd_common.sh@66 -- # echo 0
00:11:46.502   16:55:39	-- bdev/nbd_common.sh@104 -- # count=0
00:11:46.502   16:55:39	-- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:11:46.502   16:55:39	-- bdev/nbd_common.sh@109 -- # return 0
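
nbd_get_count then confirms that every disk was actually stopped: it asks the RPC server for the remaining NBD disks and counts /dev/nbd entries in the answer. A sketch reconstructed from the trace (@61-66 and @104-109); note the `true` at @65, which guards against grep's non-zero exit status when the count is zero:

  # Hedged reconstruction of the final count check.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  nbd_disks_json=$($rpc -s /var/tmp/spdk-nbd.sock nbd_get_disks)
  nbd_disks_name=$(echo "$nbd_disks_json" | jq -r '.[] | .nbd_device')
  # grep -c exits 1 when it counts zero matches, hence the || true.
  count=$(echo "$nbd_disks_name" | grep -c /dev/nbd || true)
  [ "$count" -ne 0 ] && return 1    # some disk survived nbd_stop_disks
  return 0
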
00:11:46.502   16:55:39	-- bdev/blockdev.sh@322 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9'
00:11:46.502   16:55:39	-- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:11:46.503   16:55:39	-- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9')
00:11:46.503   16:55:39	-- bdev/nbd_common.sh@132 -- # local nbd_list
00:11:46.503   16:55:39	-- bdev/nbd_common.sh@133 -- # local mkfs_ret
00:11:46.503   16:55:39	-- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512
00:11:46.761  malloc_lvol_verify
00:11:46.761   16:55:39	-- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs
00:11:47.020  fdbe0598-8558-4976-8437-e35e7656c0fb
00:11:47.020   16:55:39	-- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs
00:11:47.280  591f5718-2e49-4b51-b0b0-b5d5bb2df77f
00:11:47.280   16:55:40	-- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0
00:11:47.539  /dev/nbd0
00:11:47.539   16:55:40	-- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0
00:11:47.539  mke2fs 1.46.5 (30-Dec-2021)
00:11:47.539  Discarding device blocks: done
00:11:47.539  Creating filesystem with 1024 4k blocks and 1024 inodes
00:11:47.539  
00:11:47.539  Allocating group tables: done
00:11:47.539  Writing inode tables: done
00:11:47.539  Writing superblocks and filesystem accounting information: done
00:11:47.539  
00:11:47.539  
00:11:47.539  Filesystem too small for a journal
00:11:47.539   16:55:40	-- bdev/nbd_common.sh@141 -- # mkfs_ret=0
00:11:47.539   16:55:40	-- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0
00:11:47.539   16:55:40	-- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:11:47.539   16:55:40	-- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:11:47.539   16:55:40	-- bdev/nbd_common.sh@50 -- # local nbd_list
00:11:47.539   16:55:40	-- bdev/nbd_common.sh@51 -- # local i
00:11:47.539   16:55:40	-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:11:47.539   16:55:40	-- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:11:47.797    16:55:40	-- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:11:47.797   16:55:40	-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:11:47.797   16:55:40	-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:11:47.797   16:55:40	-- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:11:47.797   16:55:40	-- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:11:47.797   16:55:40	-- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:11:47.797   16:55:40	-- bdev/nbd_common.sh@41 -- # break
00:11:47.797   16:55:40	-- bdev/nbd_common.sh@45 -- # return 0
00:11:47.797   16:55:40	-- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']'
00:11:47.797   16:55:40	-- bdev/nbd_common.sh@147 -- # return 0
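
nbd_with_lvol_verify (@131-147) exercises the stack once more from the top: create a 16 MB malloc bdev with 512-byte blocks, put an lvstore on it, carve out a 4 MB lvol, export it as /dev/nbd0, and prove it usable by running mkfs.ext4 on it. A condensed sketch with the names and sizes from the trace; `$rpc` is shorthand I introduce here for `scripts/rpc.py -s /var/tmp/spdk-nbd.sock`:

  $rpc bdev_malloc_create -b malloc_lvol_verify 16 512  # 16 MB backing bdev
  $rpc bdev_lvol_create_lvstore malloc_lvol_verify lvs  # lvstore on top of it
  $rpc bdev_lvol_create lvol 4 -l lvs                   # 4 MB logical volume
  $rpc nbd_start_disk lvs/lvol /dev/nbd0                # expose it over NBD
  mkfs.ext4 /dev/nbd0                                   # the actual verify step
  mkfs_ret=$?
  nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0
  [ "$mkfs_ret" -ne 0 ] && return 1
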
00:11:47.797   16:55:40	-- bdev/blockdev.sh@324 -- # killprocess 119617
00:11:47.797   16:55:40	-- common/autotest_common.sh@936 -- # '[' -z 119617 ']'
00:11:47.797   16:55:40	-- common/autotest_common.sh@940 -- # kill -0 119617
00:11:47.797    16:55:40	-- common/autotest_common.sh@941 -- # uname
00:11:47.797   16:55:40	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:11:47.797    16:55:40	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 119617
00:11:47.797   16:55:40	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:11:47.797   16:55:40	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:11:47.797   16:55:40	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 119617'
00:11:47.797  killing process with pid 119617
00:11:47.797   16:55:40	-- common/autotest_common.sh@955 -- # kill 119617
00:11:47.797   16:55:40	-- common/autotest_common.sh@960 -- # wait 119617
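
killprocess, as traced from common/autotest_common.sh@936-960, validates the pid, refuses to take down sudo, then kills and reaps the target. A hedged reconstruction from the trace:

  killprocess() {
      local pid=$1
      [ -z "$pid" ] && return 1
      kill -0 "$pid" || return 1                 # process must still exist
      if [ "$(uname)" = Linux ]; then
          local process_name
          process_name=$(ps --no-headers -o comm= "$pid")
          [ "$process_name" = sudo ] && return 1 # never kill sudo itself
      fi
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid"    # reap it; works because the app is a child of this shell
  }
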
00:11:48.363   16:55:40	-- bdev/blockdev.sh@325 -- # trap - SIGINT SIGTERM EXIT
00:11:48.363  
00:11:48.364  real	0m24.553s
00:11:48.364  user	0m31.565s
00:11:48.364  sys	0m11.832s
00:11:48.364   16:55:40	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:11:48.364   16:55:40	-- common/autotest_common.sh@10 -- # set +x
00:11:48.364  ************************************
00:11:48.364  END TEST bdev_nbd
00:11:48.364  ************************************
00:11:48.364   16:55:41	-- bdev/blockdev.sh@761 -- # [[ y == y ]]
00:11:48.364   16:55:41	-- bdev/blockdev.sh@762 -- # '[' bdev = nvme ']'
00:11:48.364   16:55:41	-- bdev/blockdev.sh@762 -- # '[' bdev = gpt ']'
00:11:48.364   16:55:41	-- bdev/blockdev.sh@766 -- # run_test bdev_fio fio_test_suite ''
00:11:48.364   16:55:41	-- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']'
00:11:48.364   16:55:41	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:11:48.364   16:55:41	-- common/autotest_common.sh@10 -- # set +x
00:11:48.364  ************************************
00:11:48.364  START TEST bdev_fio
00:11:48.364  ************************************
00:11:48.364   16:55:41	-- common/autotest_common.sh@1114 -- # fio_test_suite ''
00:11:48.364   16:55:41	-- bdev/blockdev.sh@329 -- # local env_context
00:11:48.364   16:55:41	-- bdev/blockdev.sh@333 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev
00:11:48.364  /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk
00:11:48.364   16:55:41	-- bdev/blockdev.sh@334 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT
00:11:48.364    16:55:41	-- bdev/blockdev.sh@337 -- # echo ''
00:11:48.364    16:55:41	-- bdev/blockdev.sh@337 -- # sed s/--env-context=//
00:11:48.364   16:55:41	-- bdev/blockdev.sh@337 -- # env_context=
00:11:48.364   16:55:41	-- bdev/blockdev.sh@338 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO ''
00:11:48.364   16:55:41	-- common/autotest_common.sh@1269 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio
00:11:48.364   16:55:41	-- common/autotest_common.sh@1270 -- # local workload=verify
00:11:48.364   16:55:41	-- common/autotest_common.sh@1271 -- # local bdev_type=AIO
00:11:48.364   16:55:41	-- common/autotest_common.sh@1272 -- # local env_context=
00:11:48.364   16:55:41	-- common/autotest_common.sh@1273 -- # local fio_dir=/usr/src/fio
00:11:48.364   16:55:41	-- common/autotest_common.sh@1275 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']'
00:11:48.364   16:55:41	-- common/autotest_common.sh@1280 -- # '[' -z verify ']'
00:11:48.364   16:55:41	-- common/autotest_common.sh@1284 -- # '[' -n '' ']'
00:11:48.364   16:55:41	-- common/autotest_common.sh@1288 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio
00:11:48.364   16:55:41	-- common/autotest_common.sh@1290 -- # cat
00:11:48.364   16:55:41	-- common/autotest_common.sh@1302 -- # '[' verify == verify ']'
00:11:48.364   16:55:41	-- common/autotest_common.sh@1303 -- # cat
00:11:48.364   16:55:41	-- common/autotest_common.sh@1312 -- # '[' AIO == AIO ']'
00:11:48.364    16:55:41	-- common/autotest_common.sh@1313 -- # /usr/src/fio/fio --version
00:11:48.364   16:55:41	-- common/autotest_common.sh@1313 -- # [[ fio-3.35 == *\f\i\o\-\3* ]]
00:11:48.364   16:55:41	-- common/autotest_common.sh@1314 -- # echo serialize_overlap=1
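
fio_config_gen builds bdev.fio from scratch: it refuses to overwrite an existing config, writes a global section, appends the workload-specific section (verify here), and for an AIO-backed run on a fio 3.x binary adds serialize_overlap=1. A structural skeleton reconstructed from the trace (autotest_common.sh@1269-1314); the heredoc bodies are not visible in the trace and are deliberately elided rather than guessed at:

  fio_config_gen() {
      local config_file=$1 workload=$2 bdev_type=$3
      [ -e "$config_file" ] && return 1   # never clobber an existing config
      touch "$config_file"
      # A heredoc appends the global fio section here (body not in the trace).
      if [ "$workload" == verify ]; then
          : # verify-specific options appended here (likewise elided)
      fi
      # Overlapping verifies must serialize on AIO with a fio-3.x binary:
      if [ "$bdev_type" == AIO ] && [[ $(/usr/src/fio/fio --version) == *fio-3* ]]; then
          echo serialize_overlap=1 >> "$config_file"
      fi
  }
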
00:11:48.364   16:55:41	-- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}"
00:11:48.364   16:55:41	-- bdev/blockdev.sh@340 -- # echo '[job_Malloc0]'
00:11:48.364   16:55:41	-- bdev/blockdev.sh@341 -- # echo filename=Malloc0
00:11:48.364   16:55:41	-- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}"
00:11:48.364   16:55:41	-- bdev/blockdev.sh@340 -- # echo '[job_Malloc1p0]'
00:11:48.364   16:55:41	-- bdev/blockdev.sh@341 -- # echo filename=Malloc1p0
00:11:48.364   16:55:41	-- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}"
00:11:48.364   16:55:41	-- bdev/blockdev.sh@340 -- # echo '[job_Malloc1p1]'
00:11:48.364   16:55:41	-- bdev/blockdev.sh@341 -- # echo filename=Malloc1p1
00:11:48.364   16:55:41	-- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}"
00:11:48.364   16:55:41	-- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p0]'
00:11:48.364   16:55:41	-- bdev/blockdev.sh@341 -- # echo filename=Malloc2p0
00:11:48.364   16:55:41	-- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}"
00:11:48.364   16:55:41	-- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p1]'
00:11:48.364   16:55:41	-- bdev/blockdev.sh@341 -- # echo filename=Malloc2p1
00:11:48.364   16:55:41	-- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}"
00:11:48.364   16:55:41	-- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p2]'
00:11:48.364   16:55:41	-- bdev/blockdev.sh@341 -- # echo filename=Malloc2p2
00:11:48.364   16:55:41	-- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}"
00:11:48.364   16:55:41	-- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p3]'
00:11:48.364   16:55:41	-- bdev/blockdev.sh@341 -- # echo filename=Malloc2p3
00:11:48.364   16:55:41	-- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}"
00:11:48.364   16:55:41	-- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p4]'
00:11:48.364   16:55:41	-- bdev/blockdev.sh@341 -- # echo filename=Malloc2p4
00:11:48.364   16:55:41	-- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}"
00:11:48.364   16:55:41	-- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p5]'
00:11:48.364   16:55:41	-- bdev/blockdev.sh@341 -- # echo filename=Malloc2p5
00:11:48.364   16:55:41	-- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}"
00:11:48.364   16:55:41	-- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p6]'
00:11:48.364   16:55:41	-- bdev/blockdev.sh@341 -- # echo filename=Malloc2p6
00:11:48.364   16:55:41	-- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}"
00:11:48.364   16:55:41	-- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p7]'
00:11:48.364   16:55:41	-- bdev/blockdev.sh@341 -- # echo filename=Malloc2p7
00:11:48.364   16:55:41	-- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}"
00:11:48.364   16:55:41	-- bdev/blockdev.sh@340 -- # echo '[job_TestPT]'
00:11:48.364   16:55:41	-- bdev/blockdev.sh@341 -- # echo filename=TestPT
00:11:48.364   16:55:41	-- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}"
00:11:48.364   16:55:41	-- bdev/blockdev.sh@340 -- # echo '[job_raid0]'
00:11:48.364   16:55:41	-- bdev/blockdev.sh@341 -- # echo filename=raid0
00:11:48.364   16:55:41	-- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}"
00:11:48.364   16:55:41	-- bdev/blockdev.sh@340 -- # echo '[job_concat0]'
00:11:48.364   16:55:41	-- bdev/blockdev.sh@341 -- # echo filename=concat0
00:11:48.364   16:55:41	-- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}"
00:11:48.364   16:55:41	-- bdev/blockdev.sh@340 -- # echo '[job_raid1]'
00:11:48.364   16:55:41	-- bdev/blockdev.sh@341 -- # echo filename=raid1
00:11:48.364   16:55:41	-- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}"
00:11:48.364   16:55:41	-- bdev/blockdev.sh@340 -- # echo '[job_AIO0]'
00:11:48.364   16:55:41	-- bdev/blockdev.sh@341 -- # echo filename=AIO0
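
The loop traced above (blockdev.sh@339-341) appends one job stanza per bdev to the config, sixteen in all. Its sketch is two echos per iteration (bdevs_name holds the sixteen names seen in the trace; the append target is an assumption):

  for b in "${bdevs_name[@]}"; do
      echo "[job_${b}]"    >> "$config_file"
      echo "filename=${b}" >> "$config_file"
  done
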
00:11:48.364   16:55:41	-- bdev/blockdev.sh@345 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 			--verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json'
00:11:48.364   16:55:41	-- bdev/blockdev.sh@347 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output
00:11:48.364   16:55:41	-- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']'
00:11:48.364   16:55:41	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:11:48.364   16:55:41	-- common/autotest_common.sh@10 -- # set +x
00:11:48.364  ************************************
00:11:48.364  START TEST bdev_fio_rw_verify
00:11:48.364  ************************************
00:11:48.364   16:55:41	-- common/autotest_common.sh@1114 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output
00:11:48.364   16:55:41	-- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output
00:11:48.364   16:55:41	-- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio
00:11:48.364   16:55:41	-- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:11:48.364   16:55:41	-- common/autotest_common.sh@1328 -- # local sanitizers
00:11:48.364   16:55:41	-- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:11:48.364   16:55:41	-- common/autotest_common.sh@1330 -- # shift
00:11:48.364   16:55:41	-- common/autotest_common.sh@1332 -- # local asan_lib=
00:11:48.364   16:55:41	-- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}"
00:11:48.364    16:55:41	-- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:11:48.364    16:55:41	-- common/autotest_common.sh@1334 -- # grep libasan
00:11:48.364    16:55:41	-- common/autotest_common.sh@1334 -- # awk '{print $3}'
00:11:48.624   16:55:41	-- common/autotest_common.sh@1334 -- # asan_lib=/lib/x86_64-linux-gnu/libasan.so.6
00:11:48.624   16:55:41	-- common/autotest_common.sh@1335 -- # [[ -n /lib/x86_64-linux-gnu/libasan.so.6 ]]
00:11:48.624   16:55:41	-- common/autotest_common.sh@1336 -- # break
00:11:48.624   16:55:41	-- common/autotest_common.sh@1341 -- # LD_PRELOAD='/lib/x86_64-linux-gnu/libasan.so.6 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev'
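
Because the build is ASan-instrumented, the fio plugin cannot simply be dlopen'ed: the sanitizer runtime must be loaded first. The trace discovers the runtime with ldd and preloads it ahead of the plugin before the fio run that follows. A sketch of that dance (@1326-1341):

  # Hedged reconstruction of the sanitizer-preload handling.
  plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
  for sanitizer in libasan libclang_rt.asan; do
      # Pull the resolved runtime path out of the plugin's ldd output.
      asan_lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}')
      [[ -n "$asan_lib" ]] && break
  done
  # Preload the runtime, then the plugin, before fio itself starts.
  LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio "$@"   # args as invoked below
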
00:11:48.624   16:55:41	-- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output
00:11:48.624  job_Malloc0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:11:48.624  job_Malloc1p0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:11:48.624  job_Malloc1p1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:11:48.624  job_Malloc2p0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:11:48.624  job_Malloc2p1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:11:48.624  job_Malloc2p2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:11:48.624  job_Malloc2p3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:11:48.624  job_Malloc2p4: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:11:48.624  job_Malloc2p5: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:11:48.624  job_Malloc2p6: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:11:48.624  job_Malloc2p7: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:11:48.624  job_TestPT: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:11:48.624  job_raid0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:11:48.624  job_concat0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:11:48.624  job_raid1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:11:48.624  job_AIO0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:11:48.624  fio-3.35
00:11:48.624  Starting 16 threads
00:12:00.872  
00:12:00.872  job_Malloc0: (groupid=0, jobs=16): err= 0: pid=120775: Tue Nov 19 16:55:52 2024
00:12:00.872    read: IOPS=63.7k, BW=249MiB/s (261MB/s)(2490MiB/10005msec)
00:12:00.872      slat (nsec): min=1948, max=32066k, avg=47929.37, stdev=501167.63
00:12:00.872      clat (usec): min=9, max=32456, avg=380.43, stdev=1429.78
00:12:00.872       lat (usec): min=30, max=32488, avg=428.36, stdev=1514.54
00:12:00.872      clat percentiles (usec):
00:12:00.872       | 50.000th=[  227], 99.000th=[ 1696], 99.900th=[16450], 99.990th=[24511],
00:12:00.872       | 99.999th=[32375]
00:12:00.872    write: IOPS=98.0k, BW=383MiB/s (402MB/s)(3792MiB/9904msec); 0 zone resets
00:12:00.872      slat (usec): min=4, max=80051, avg=79.98, stdev=723.94
00:12:00.872      clat (usec): min=11, max=80390, avg=490.69, stdev=1748.79
00:12:00.872       lat (usec): min=49, max=80425, avg=570.67, stdev=1892.13
00:12:00.872      clat percentiles (usec):
00:12:00.872       | 50.000th=[  281], 99.000th=[10814], 99.900th=[20841], 99.990th=[33817],
00:12:00.872       | 99.999th=[47973]
00:12:00.872     bw (  KiB/s): min=237875, max=617737, per=98.62%, avg=386710.47, stdev=6712.11, samples=304
00:12:00.872     iops        : min=59468, max=154434, avg=96677.47, stdev=1678.03, samples=304
00:12:00.872    lat (usec)   : 10=0.01%, 20=0.01%, 50=0.30%, 100=5.85%, 250=42.42%
00:12:00.872    lat (usec)   : 500=43.93%, 750=5.29%, 1000=0.55%
00:12:00.872    lat (msec)   : 2=0.36%, 4=0.08%, 10=0.20%, 20=0.91%, 50=0.10%
00:12:00.872    lat (msec)   : 100=0.01%
00:12:00.872    cpu          : usr=56.47%, sys=2.19%, ctx=265123, majf=2, minf=77686
00:12:00.872    IO depths    : 1=10.9%, 2=23.4%, 4=52.5%, 8=13.2%, 16=0.0%, 32=0.0%, >=64=0.0%
00:12:00.872       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:12:00.872       complete  : 0=0.0%, 4=88.8%, 8=11.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:12:00.872       issued rwts: total=637505,970864,0,0 short=0,0,0,0 dropped=0,0,0,0
00:12:00.872       latency   : target=0, window=0, percentile=100.00%, depth=8
00:12:00.872  
00:12:00.872  Run status group 0 (all jobs):
00:12:00.872     READ: bw=249MiB/s (261MB/s), 249MiB/s-249MiB/s (261MB/s-261MB/s), io=2490MiB (2611MB), run=10005-10005msec
00:12:00.872    WRITE: bw=383MiB/s (402MB/s), 383MiB/s-383MiB/s (402MB/s-402MB/s), io=3792MiB (3977MB), run=9904-9904msec
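
As a sanity check, the totals in the issued-rwts line reproduce the summary bandwidths: 637,505 reads of 4 KiB over 10.005 s and 970,864 writes of 4 KiB over 9.904 s.

  # READ:  637505 * 4096 B = 2611220480 B / 10.005 s ≈ 261 MB/s (249 MiB/s)
  # WRITE: 970864 * 4096 B = 3976658944 B /  9.904 s ≈ 402 MB/s (383 MiB/s)
  echo $(( 637505 * 4096 / 10005 ))   # prints 260991, i.e. ~261 MB/s in kB/s
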
00:12:00.872  -----------------------------------------------------
00:12:00.872  Suppressions used:
00:12:00.872    count      bytes template
00:12:00.872       16        140 /usr/src/fio/parse.c
00:12:00.872    11942    1146432 /usr/src/fio/iolog.c
00:12:00.872        1        904 libcrypto.so
00:12:00.872  -----------------------------------------------------
00:12:00.872  
00:12:00.872  ************************************
00:12:00.872  END TEST bdev_fio_rw_verify
00:12:00.872  ************************************
00:12:00.872  
00:12:00.872  real	0m12.088s
00:12:00.872  user	1m33.678s
00:12:00.872  sys	0m4.372s
00:12:00.872   16:55:53	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:12:00.872   16:55:53	-- common/autotest_common.sh@10 -- # set +x
00:12:00.872   16:55:53	-- bdev/blockdev.sh@348 -- # rm -f
00:12:00.872   16:55:53	-- bdev/blockdev.sh@349 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio
00:12:00.872   16:55:53	-- bdev/blockdev.sh@352 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' ''
00:12:00.872   16:55:53	-- common/autotest_common.sh@1269 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio
00:12:00.872   16:55:53	-- common/autotest_common.sh@1270 -- # local workload=trim
00:12:00.872   16:55:53	-- common/autotest_common.sh@1271 -- # local bdev_type=
00:12:00.872   16:55:53	-- common/autotest_common.sh@1272 -- # local env_context=
00:12:00.872   16:55:53	-- common/autotest_common.sh@1273 -- # local fio_dir=/usr/src/fio
00:12:00.872   16:55:53	-- common/autotest_common.sh@1275 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']'
00:12:00.872   16:55:53	-- common/autotest_common.sh@1280 -- # '[' -z trim ']'
00:12:00.872   16:55:53	-- common/autotest_common.sh@1284 -- # '[' -n '' ']'
00:12:00.872   16:55:53	-- common/autotest_common.sh@1288 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio
00:12:00.872   16:55:53	-- common/autotest_common.sh@1290 -- # cat
00:12:00.872   16:55:53	-- common/autotest_common.sh@1302 -- # '[' trim == verify ']'
00:12:00.872   16:55:53	-- common/autotest_common.sh@1317 -- # '[' trim == trim ']'
00:12:00.872   16:55:53	-- common/autotest_common.sh@1318 -- # echo rw=trimwrite
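
For the trim pass, only bdevs that can service unmap are eligible; the jq filter that follows screens the JSON bdev dump, which is why raid1 and AIO0 (both report "unmap": false in the dump) are absent from the name list further down and get no trim job. A sketch of that screening (blockdev.sh@353-354; `$bdevs_json` is a variable name I assume for the dumped JSON):

  # Keep only bdevs whose supported_io_types.unmap is true.
  printf '%s\n' "$bdevs_json" \
      | jq -r 'select(.supported_io_types.unmap == true) | .name'
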
00:12:00.872    16:55:53	-- bdev/blockdev.sh@353 -- # jq -r 'select(.supported_io_types.unmap == true) | .name'
00:12:00.873    16:55:53	-- bdev/blockdev.sh@353 -- # printf '%s\n' '{' '  "name": "Malloc0",' '  "aliases": [' '    "b62e43e8-a683-4f4b-add9-e442c3f3ddab"' '  ],' '  "product_name": "Malloc disk",' '  "block_size": 512,' '  "num_blocks": 65536,' '  "uuid": "b62e43e8-a683-4f4b-add9-e442c3f3ddab",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 20000,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "write_zeroes": true,' '    "flush": true,' '    "reset": true,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "nvme_admin": false,' '    "nvme_io": false' '  },' '  "memory_domains": [' '    {' '      "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' '      "dma_device_type": 2' '    }' '  ],' '  "driver_specific": {}' '}' '{' '  "name": "Malloc1p0",' '  "aliases": [' '    "a88f24b5-b89f-5bf6-bfe1-a6b648f431a8"' '  ],' '  "product_name": "Split Disk",' '  "block_size": 512,' '  "num_blocks": 32768,' '  "uuid": "a88f24b5-b89f-5bf6-bfe1-a6b648f431a8",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "write_zeroes": true,' '    "flush": true,' '    "reset": true,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "nvme_admin": false,' '    "nvme_io": false' '  },' '  "driver_specific": {' '    "split": {' '      "base_bdev": "Malloc1",' '      "offset_blocks": 0' '    }' '  }' '}' '{' '  "name": "Malloc1p1",' '  "aliases": [' '    "a7267782-9e1a-5727-9932-15e5353a8a4a"' '  ],' '  "product_name": "Split Disk",' '  "block_size": 512,' '  "num_blocks": 32768,' '  "uuid": "a7267782-9e1a-5727-9932-15e5353a8a4a",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "write_zeroes": true,' '    "flush": true,' '    "reset": true,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "nvme_admin": false,' '    "nvme_io": false' '  },' '  "driver_specific": {' '    "split": {' '      "base_bdev": "Malloc1",' '      "offset_blocks": 32768' '    }' '  }' '}' '{' '  "name": "Malloc2p0",' '  "aliases": [' '    "de8f2ef3-ec97-5d3b-a5ff-e5a96142e5fe"' '  ],' '  "product_name": "Split Disk",' '  "block_size": 512,' '  "num_blocks": 8192,' '  "uuid": "de8f2ef3-ec97-5d3b-a5ff-e5a96142e5fe",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "write_zeroes": true,' '    "flush": true,' '    "reset": true,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "nvme_admin": false,' '    "nvme_io": false' '  },' '  "driver_specific": {' '    "split": {' '      "base_bdev": "Malloc2",' '      "offset_blocks": 0' '    }' '  }' '}' '{' '  "name": "Malloc2p1",' '  "aliases": [' '    
"5f787359-a152-560d-97ed-a2cec882bfcb"' '  ],' '  "product_name": "Split Disk",' '  "block_size": 512,' '  "num_blocks": 8192,' '  "uuid": "5f787359-a152-560d-97ed-a2cec882bfcb",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "write_zeroes": true,' '    "flush": true,' '    "reset": true,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "nvme_admin": false,' '    "nvme_io": false' '  },' '  "driver_specific": {' '    "split": {' '      "base_bdev": "Malloc2",' '      "offset_blocks": 8192' '    }' '  }' '}' '{' '  "name": "Malloc2p2",' '  "aliases": [' '    "ccf78d51-1259-5bad-b89c-f47ae05576f6"' '  ],' '  "product_name": "Split Disk",' '  "block_size": 512,' '  "num_blocks": 8192,' '  "uuid": "ccf78d51-1259-5bad-b89c-f47ae05576f6",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "write_zeroes": true,' '    "flush": true,' '    "reset": true,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "nvme_admin": false,' '    "nvme_io": false' '  },' '  "driver_specific": {' '    "split": {' '      "base_bdev": "Malloc2",' '      "offset_blocks": 16384' '    }' '  }' '}' '{' '  "name": "Malloc2p3",' '  "aliases": [' '    "de7fdfab-0018-58f7-a974-aadd0bf6f1d4"' '  ],' '  "product_name": "Split Disk",' '  "block_size": 512,' '  "num_blocks": 8192,' '  "uuid": "de7fdfab-0018-58f7-a974-aadd0bf6f1d4",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "write_zeroes": true,' '    "flush": true,' '    "reset": true,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "nvme_admin": false,' '    "nvme_io": false' '  },' '  "driver_specific": {' '    "split": {' '      "base_bdev": "Malloc2",' '      "offset_blocks": 24576' '    }' '  }' '}' '{' '  "name": "Malloc2p4",' '  "aliases": [' '    "6e412625-f267-5120-a11a-2ec4b63328f3"' '  ],' '  "product_name": "Split Disk",' '  "block_size": 512,' '  "num_blocks": 8192,' '  "uuid": "6e412625-f267-5120-a11a-2ec4b63328f3",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "write_zeroes": true,' '    "flush": true,' '    "reset": true,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "nvme_admin": false,' '    "nvme_io": false' '  },' '  "driver_specific": {' '    "split": {' '      "base_bdev": "Malloc2",' '      "offset_blocks": 32768' '    }' '  }' '}' '{' '  "name": "Malloc2p5",' '  "aliases": [' '    "9e422c63-ecbb-597a-86a5-1a3aa6e72ade"' '  ],' '  "product_name": "Split Disk",' '  "block_size": 512,' '  "num_blocks": 8192,' '  "uuid": "9e422c63-ecbb-597a-86a5-1a3aa6e72ade",' '  
"assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "write_zeroes": true,' '    "flush": true,' '    "reset": true,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "nvme_admin": false,' '    "nvme_io": false' '  },' '  "driver_specific": {' '    "split": {' '      "base_bdev": "Malloc2",' '      "offset_blocks": 40960' '    }' '  }' '}' '{' '  "name": "Malloc2p6",' '  "aliases": [' '    "42c74c99-2958-5bb3-96fb-a8f1da5f66ff"' '  ],' '  "product_name": "Split Disk",' '  "block_size": 512,' '  "num_blocks": 8192,' '  "uuid": "42c74c99-2958-5bb3-96fb-a8f1da5f66ff",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "write_zeroes": true,' '    "flush": true,' '    "reset": true,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "nvme_admin": false,' '    "nvme_io": false' '  },' '  "driver_specific": {' '    "split": {' '      "base_bdev": "Malloc2",' '      "offset_blocks": 49152' '    }' '  }' '}' '{' '  "name": "Malloc2p7",' '  "aliases": [' '    "5a6df78e-dde9-596e-b50f-1f1ebba5717c"' '  ],' '  "product_name": "Split Disk",' '  "block_size": 512,' '  "num_blocks": 8192,' '  "uuid": "5a6df78e-dde9-596e-b50f-1f1ebba5717c",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "write_zeroes": true,' '    "flush": true,' '    "reset": true,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "nvme_admin": false,' '    "nvme_io": false' '  },' '  "driver_specific": {' '    "split": {' '      "base_bdev": "Malloc2",' '      "offset_blocks": 57344' '    }' '  }' '}' '{' '  "name": "TestPT",' '  "aliases": [' '    "1cf32d11-d96a-5a51-a03e-4613a9e884b5"' '  ],' '  "product_name": "passthru",' '  "block_size": 512,' '  "num_blocks": 65536,' '  "uuid": "1cf32d11-d96a-5a51-a03e-4613a9e884b5",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "write_zeroes": true,' '    "flush": true,' '    "reset": true,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "nvme_admin": false,' '    "nvme_io": false' '  },' '  "memory_domains": [' '    {' '      "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' '      "dma_device_type": 2' '    }' '  ],' '  "driver_specific": {' '    "passthru": {' '      "name": "TestPT",' '      "base_bdev_name": "Malloc3"' '    }' '  }' '}' '{' '  "name": "raid0",' '  "aliases": [' '    "aadbe5dd-9ccb-4ca6-add4-d686bb123f4c"' '  ],' '  "product_name": "Raid Volume",' '  "block_size": 512,' '  "num_blocks": 131072,' '  "uuid": "aadbe5dd-9ccb-4ca6-add4-d686bb123f4c",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    
"rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "write_zeroes": true,' '    "flush": true,' '    "reset": true,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": false,' '    "nvme_admin": false,' '    "nvme_io": false' '  },' '  "memory_domains": [' '    {' '      "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' '      "dma_device_type": 2' '    },' '    {' '      "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' '      "dma_device_type": 2' '    }' '  ],' '  "driver_specific": {' '    "raid": {' '      "uuid": "aadbe5dd-9ccb-4ca6-add4-d686bb123f4c",' '      "strip_size_kb": 64,' '      "state": "online",' '      "raid_level": "raid0",' '      "superblock": false,' '      "num_base_bdevs": 2,' '      "num_base_bdevs_discovered": 2,' '      "num_base_bdevs_operational": 2,' '      "base_bdevs_list": [' '        {' '          "name": "Malloc4",' '          "uuid": "8c11f5f9-2ff6-4966-8580-1cc0824801d7",' '          "is_configured": true,' '          "data_offset": 0,' '          "data_size": 65536' '        },' '        {' '          "name": "Malloc5",' '          "uuid": "466653f4-400a-4eae-a94f-101330cb8103",' '          "is_configured": true,' '          "data_offset": 0,' '          "data_size": 65536' '        }' '      ]' '    }' '  }' '}' '{' '  "name": "concat0",' '  "aliases": [' '    "3b6738f6-c3f2-4ad7-9cf0-833682f0af90"' '  ],' '  "product_name": "Raid Volume",' '  "block_size": 512,' '  "num_blocks": 131072,' '  "uuid": "3b6738f6-c3f2-4ad7-9cf0-833682f0af90",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "write_zeroes": true,' '    "flush": true,' '    "reset": true,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": false,' '    "nvme_admin": false,' '    "nvme_io": false' '  },' '  "memory_domains": [' '    {' '      "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' '      "dma_device_type": 2' '    },' '    {' '      "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' '      "dma_device_type": 2' '    }' '  ],' '  "driver_specific": {' '    "raid": {' '      "uuid": "3b6738f6-c3f2-4ad7-9cf0-833682f0af90",' '      "strip_size_kb": 64,' '      "state": "online",' '      "raid_level": "concat",' '      "superblock": false,' '      "num_base_bdevs": 2,' '      "num_base_bdevs_discovered": 2,' '      "num_base_bdevs_operational": 2,' '      "base_bdevs_list": [' '        {' '          "name": "Malloc6",' '          "uuid": "81763f77-e608-4fc5-ba88-af3f895bd7ed",' '          "is_configured": true,' '          "data_offset": 0,' '          "data_size": 65536' '        },' '        {' '          "name": "Malloc7",' '          "uuid": "d8265720-7716-48bd-8647-c3ed8153f75c",' '          "is_configured": true,' '          "data_offset": 0,' '          "data_size": 65536' '        }' '      ]' '    }' '  }' '}' '{' '  "name": "raid1",' '  "aliases": [' '    "237c6f5a-3e13-4100-ad5a-96133c0921ba"' '  ],' '  "product_name": "Raid Volume",' '  "block_size": 512,' '  "num_blocks": 65536,' '  "uuid": "237c6f5a-3e13-4100-ad5a-96133c0921ba",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    
"w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": false,' '    "write_zeroes": true,' '    "flush": false,' '    "reset": true,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": false,' '    "nvme_admin": false,' '    "nvme_io": false' '  },' '  "memory_domains": [' '    {' '      "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' '      "dma_device_type": 2' '    },' '    {' '      "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' '      "dma_device_type": 2' '    }' '  ],' '  "driver_specific": {' '    "raid": {' '      "uuid": "237c6f5a-3e13-4100-ad5a-96133c0921ba",' '      "strip_size_kb": 0,' '      "state": "online",' '      "raid_level": "raid1",' '      "superblock": false,' '      "num_base_bdevs": 2,' '      "num_base_bdevs_discovered": 2,' '      "num_base_bdevs_operational": 2,' '      "base_bdevs_list": [' '        {' '          "name": "Malloc8",' '          "uuid": "9bac68fb-3846-41dd-9fc2-de159656762b",' '          "is_configured": true,' '          "data_offset": 0,' '          "data_size": 65536' '        },' '        {' '          "name": "Malloc9",' '          "uuid": "098e42da-c770-48e7-92c0-19e19e04c8ae",' '          "is_configured": true,' '          "data_offset": 0,' '          "data_size": 65536' '        }' '      ]' '    }' '  }' '}' '{' '  "name": "AIO0",' '  "aliases": [' '    "587e9156-b8b7-46ed-bc41-2e5a3b52334a"' '  ],' '  "product_name": "AIO disk",' '  "block_size": 2048,' '  "num_blocks": 5000,' '  "uuid": "587e9156-b8b7-46ed-bc41-2e5a3b52334a",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": false,' '    "write_zeroes": true,' '    "flush": true,' '    "reset": true,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": false,' '    "nvme_admin": false,' '    "nvme_io": false' '  },' '  "driver_specific": {' '    "aio": {' '      "filename": "/home/vagrant/spdk_repo/spdk/test/bdev/aiofile",' '      "block_size_override": true,' '      "readonly": false' '    }' '  }' '}'
00:12:00.873   16:55:53	-- bdev/blockdev.sh@353 -- # [[ -n Malloc0
00:12:00.873  Malloc1p0
00:12:00.873  Malloc1p1
00:12:00.873  Malloc2p0
00:12:00.873  Malloc2p1
00:12:00.873  Malloc2p2
00:12:00.873  Malloc2p3
00:12:00.873  Malloc2p4
00:12:00.873  Malloc2p5
00:12:00.873  Malloc2p6
00:12:00.873  Malloc2p7
00:12:00.873  TestPT
00:12:00.873  raid0
00:12:00.873  concat0 ]]
00:12:00.874    16:55:53	-- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name'
00:12:00.875    16:55:53	-- bdev/blockdev.sh@354 -- # printf '%s\n' '{' '  "name": "Malloc0",' '  "aliases": [' '    "b62e43e8-a683-4f4b-add9-e442c3f3ddab"' '  ],' '  "product_name": "Malloc disk",' '  "block_size": 512,' '  "num_blocks": 65536,' '  "uuid": "b62e43e8-a683-4f4b-add9-e442c3f3ddab",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 20000,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "write_zeroes": true,' '    "flush": true,' '    "reset": true,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "nvme_admin": false,' '    "nvme_io": false' '  },' '  "memory_domains": [' '    {' '      "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' '      "dma_device_type": 2' '    }' '  ],' '  "driver_specific": {}' '}' '{' '  "name": "Malloc1p0",' '  "aliases": [' '    "a88f24b5-b89f-5bf6-bfe1-a6b648f431a8"' '  ],' '  "product_name": "Split Disk",' '  "block_size": 512,' '  "num_blocks": 32768,' '  "uuid": "a88f24b5-b89f-5bf6-bfe1-a6b648f431a8",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "write_zeroes": true,' '    "flush": true,' '    "reset": true,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "nvme_admin": false,' '    "nvme_io": false' '  },' '  "driver_specific": {' '    "split": {' '      "base_bdev": "Malloc1",' '      "offset_blocks": 0' '    }' '  }' '}' '{' '  "name": "Malloc1p1",' '  "aliases": [' '    "a7267782-9e1a-5727-9932-15e5353a8a4a"' '  ],' '  "product_name": "Split Disk",' '  "block_size": 512,' '  "num_blocks": 32768,' '  "uuid": "a7267782-9e1a-5727-9932-15e5353a8a4a",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "write_zeroes": true,' '    "flush": true,' '    "reset": true,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "nvme_admin": false,' '    "nvme_io": false' '  },' '  "driver_specific": {' '    "split": {' '      "base_bdev": "Malloc1",' '      "offset_blocks": 32768' '    }' '  }' '}' '{' '  "name": "Malloc2p0",' '  "aliases": [' '    "de8f2ef3-ec97-5d3b-a5ff-e5a96142e5fe"' '  ],' '  "product_name": "Split Disk",' '  "block_size": 512,' '  "num_blocks": 8192,' '  "uuid": "de8f2ef3-ec97-5d3b-a5ff-e5a96142e5fe",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "write_zeroes": true,' '    "flush": true,' '    "reset": true,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "nvme_admin": false,' '    "nvme_io": false' '  },' '  "driver_specific": {' '    "split": {' '      "base_bdev": "Malloc2",' '      "offset_blocks": 0' '    }' '  }' '}' '{' '  "name": "Malloc2p1",' '  "aliases": [' '    
"5f787359-a152-560d-97ed-a2cec882bfcb"' '  ],' '  "product_name": "Split Disk",' '  "block_size": 512,' '  "num_blocks": 8192,' '  "uuid": "5f787359-a152-560d-97ed-a2cec882bfcb",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "write_zeroes": true,' '    "flush": true,' '    "reset": true,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "nvme_admin": false,' '    "nvme_io": false' '  },' '  "driver_specific": {' '    "split": {' '      "base_bdev": "Malloc2",' '      "offset_blocks": 8192' '    }' '  }' '}' '{' '  "name": "Malloc2p2",' '  "aliases": [' '    "ccf78d51-1259-5bad-b89c-f47ae05576f6"' '  ],' '  "product_name": "Split Disk",' '  "block_size": 512,' '  "num_blocks": 8192,' '  "uuid": "ccf78d51-1259-5bad-b89c-f47ae05576f6",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "write_zeroes": true,' '    "flush": true,' '    "reset": true,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "nvme_admin": false,' '    "nvme_io": false' '  },' '  "driver_specific": {' '    "split": {' '      "base_bdev": "Malloc2",' '      "offset_blocks": 16384' '    }' '  }' '}' '{' '  "name": "Malloc2p3",' '  "aliases": [' '    "de7fdfab-0018-58f7-a974-aadd0bf6f1d4"' '  ],' '  "product_name": "Split Disk",' '  "block_size": 512,' '  "num_blocks": 8192,' '  "uuid": "de7fdfab-0018-58f7-a974-aadd0bf6f1d4",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "write_zeroes": true,' '    "flush": true,' '    "reset": true,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "nvme_admin": false,' '    "nvme_io": false' '  },' '  "driver_specific": {' '    "split": {' '      "base_bdev": "Malloc2",' '      "offset_blocks": 24576' '    }' '  }' '}' '{' '  "name": "Malloc2p4",' '  "aliases": [' '    "6e412625-f267-5120-a11a-2ec4b63328f3"' '  ],' '  "product_name": "Split Disk",' '  "block_size": 512,' '  "num_blocks": 8192,' '  "uuid": "6e412625-f267-5120-a11a-2ec4b63328f3",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "write_zeroes": true,' '    "flush": true,' '    "reset": true,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "nvme_admin": false,' '    "nvme_io": false' '  },' '  "driver_specific": {' '    "split": {' '      "base_bdev": "Malloc2",' '      "offset_blocks": 32768' '    }' '  }' '}' '{' '  "name": "Malloc2p5",' '  "aliases": [' '    "9e422c63-ecbb-597a-86a5-1a3aa6e72ade"' '  ],' '  "product_name": "Split Disk",' '  "block_size": 512,' '  "num_blocks": 8192,' '  "uuid": "9e422c63-ecbb-597a-86a5-1a3aa6e72ade",' '  
"assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "write_zeroes": true,' '    "flush": true,' '    "reset": true,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "nvme_admin": false,' '    "nvme_io": false' '  },' '  "driver_specific": {' '    "split": {' '      "base_bdev": "Malloc2",' '      "offset_blocks": 40960' '    }' '  }' '}' '{' '  "name": "Malloc2p6",' '  "aliases": [' '    "42c74c99-2958-5bb3-96fb-a8f1da5f66ff"' '  ],' '  "product_name": "Split Disk",' '  "block_size": 512,' '  "num_blocks": 8192,' '  "uuid": "42c74c99-2958-5bb3-96fb-a8f1da5f66ff",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "write_zeroes": true,' '    "flush": true,' '    "reset": true,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "nvme_admin": false,' '    "nvme_io": false' '  },' '  "driver_specific": {' '    "split": {' '      "base_bdev": "Malloc2",' '      "offset_blocks": 49152' '    }' '  }' '}' '{' '  "name": "Malloc2p7",' '  "aliases": [' '    "5a6df78e-dde9-596e-b50f-1f1ebba5717c"' '  ],' '  "product_name": "Split Disk",' '  "block_size": 512,' '  "num_blocks": 8192,' '  "uuid": "5a6df78e-dde9-596e-b50f-1f1ebba5717c",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "write_zeroes": true,' '    "flush": true,' '    "reset": true,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "nvme_admin": false,' '    "nvme_io": false' '  },' '  "driver_specific": {' '    "split": {' '      "base_bdev": "Malloc2",' '      "offset_blocks": 57344' '    }' '  }' '}' '{' '  "name": "TestPT",' '  "aliases": [' '    "1cf32d11-d96a-5a51-a03e-4613a9e884b5"' '  ],' '  "product_name": "passthru",' '  "block_size": 512,' '  "num_blocks": 65536,' '  "uuid": "1cf32d11-d96a-5a51-a03e-4613a9e884b5",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "write_zeroes": true,' '    "flush": true,' '    "reset": true,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "nvme_admin": false,' '    "nvme_io": false' '  },' '  "memory_domains": [' '    {' '      "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' '      "dma_device_type": 2' '    }' '  ],' '  "driver_specific": {' '    "passthru": {' '      "name": "TestPT",' '      "base_bdev_name": "Malloc3"' '    }' '  }' '}' '{' '  "name": "raid0",' '  "aliases": [' '    "aadbe5dd-9ccb-4ca6-add4-d686bb123f4c"' '  ],' '  "product_name": "Raid Volume",' '  "block_size": 512,' '  "num_blocks": 131072,' '  "uuid": "aadbe5dd-9ccb-4ca6-add4-d686bb123f4c",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    
"rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "write_zeroes": true,' '    "flush": true,' '    "reset": true,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": false,' '    "nvme_admin": false,' '    "nvme_io": false' '  },' '  "memory_domains": [' '    {' '      "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' '      "dma_device_type": 2' '    },' '    {' '      "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' '      "dma_device_type": 2' '    }' '  ],' '  "driver_specific": {' '    "raid": {' '      "uuid": "aadbe5dd-9ccb-4ca6-add4-d686bb123f4c",' '      "strip_size_kb": 64,' '      "state": "online",' '      "raid_level": "raid0",' '      "superblock": false,' '      "num_base_bdevs": 2,' '      "num_base_bdevs_discovered": 2,' '      "num_base_bdevs_operational": 2,' '      "base_bdevs_list": [' '        {' '          "name": "Malloc4",' '          "uuid": "8c11f5f9-2ff6-4966-8580-1cc0824801d7",' '          "is_configured": true,' '          "data_offset": 0,' '          "data_size": 65536' '        },' '        {' '          "name": "Malloc5",' '          "uuid": "466653f4-400a-4eae-a94f-101330cb8103",' '          "is_configured": true,' '          "data_offset": 0,' '          "data_size": 65536' '        }' '      ]' '    }' '  }' '}' '{' '  "name": "concat0",' '  "aliases": [' '    "3b6738f6-c3f2-4ad7-9cf0-833682f0af90"' '  ],' '  "product_name": "Raid Volume",' '  "block_size": 512,' '  "num_blocks": 131072,' '  "uuid": "3b6738f6-c3f2-4ad7-9cf0-833682f0af90",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "write_zeroes": true,' '    "flush": true,' '    "reset": true,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": false,' '    "nvme_admin": false,' '    "nvme_io": false' '  },' '  "memory_domains": [' '    {' '      "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' '      "dma_device_type": 2' '    },' '    {' '      "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' '      "dma_device_type": 2' '    }' '  ],' '  "driver_specific": {' '    "raid": {' '      "uuid": "3b6738f6-c3f2-4ad7-9cf0-833682f0af90",' '      "strip_size_kb": 64,' '      "state": "online",' '      "raid_level": "concat",' '      "superblock": false,' '      "num_base_bdevs": 2,' '      "num_base_bdevs_discovered": 2,' '      "num_base_bdevs_operational": 2,' '      "base_bdevs_list": [' '        {' '          "name": "Malloc6",' '          "uuid": "81763f77-e608-4fc5-ba88-af3f895bd7ed",' '          "is_configured": true,' '          "data_offset": 0,' '          "data_size": 65536' '        },' '        {' '          "name": "Malloc7",' '          "uuid": "d8265720-7716-48bd-8647-c3ed8153f75c",' '          "is_configured": true,' '          "data_offset": 0,' '          "data_size": 65536' '        }' '      ]' '    }' '  }' '}' '{' '  "name": "raid1",' '  "aliases": [' '    "237c6f5a-3e13-4100-ad5a-96133c0921ba"' '  ],' '  "product_name": "Raid Volume",' '  "block_size": 512,' '  "num_blocks": 65536,' '  "uuid": "237c6f5a-3e13-4100-ad5a-96133c0921ba",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    
"w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": false,' '    "write_zeroes": true,' '    "flush": false,' '    "reset": true,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": false,' '    "nvme_admin": false,' '    "nvme_io": false' '  },' '  "memory_domains": [' '    {' '      "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' '      "dma_device_type": 2' '    },' '    {' '      "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' '      "dma_device_type": 2' '    }' '  ],' '  "driver_specific": {' '    "raid": {' '      "uuid": "237c6f5a-3e13-4100-ad5a-96133c0921ba",' '      "strip_size_kb": 0,' '      "state": "online",' '      "raid_level": "raid1",' '      "superblock": false,' '      "num_base_bdevs": 2,' '      "num_base_bdevs_discovered": 2,' '      "num_base_bdevs_operational": 2,' '      "base_bdevs_list": [' '        {' '          "name": "Malloc8",' '          "uuid": "9bac68fb-3846-41dd-9fc2-de159656762b",' '          "is_configured": true,' '          "data_offset": 0,' '          "data_size": 65536' '        },' '        {' '          "name": "Malloc9",' '          "uuid": "098e42da-c770-48e7-92c0-19e19e04c8ae",' '          "is_configured": true,' '          "data_offset": 0,' '          "data_size": 65536' '        }' '      ]' '    }' '  }' '}' '{' '  "name": "AIO0",' '  "aliases": [' '    "587e9156-b8b7-46ed-bc41-2e5a3b52334a"' '  ],' '  "product_name": "AIO disk",' '  "block_size": 2048,' '  "num_blocks": 5000,' '  "uuid": "587e9156-b8b7-46ed-bc41-2e5a3b52334a",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": false,' '    "write_zeroes": true,' '    "flush": true,' '    "reset": true,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": false,' '    "nvme_admin": false,' '    "nvme_io": false' '  },' '  "driver_specific": {' '    "aio": {' '      "filename": "/home/vagrant/spdk_repo/spdk/test/bdev/aiofile",' '      "block_size_override": true,' '      "readonly": false' '    }' '  }' '}'
00:12:00.875   16:55:53	-- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name')
00:12:00.875   16:55:53	-- bdev/blockdev.sh@355 -- # echo '[job_Malloc0]'
00:12:00.875   16:55:53	-- bdev/blockdev.sh@356 -- # echo filename=Malloc0
00:12:00.875   16:55:53	-- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name')
00:12:00.875   16:55:53	-- bdev/blockdev.sh@355 -- # echo '[job_Malloc1p0]'
00:12:00.875   16:55:53	-- bdev/blockdev.sh@356 -- # echo filename=Malloc1p0
00:12:00.875   16:55:53	-- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name')
00:12:00.875   16:55:53	-- bdev/blockdev.sh@355 -- # echo '[job_Malloc1p1]'
00:12:00.875   16:55:53	-- bdev/blockdev.sh@356 -- # echo filename=Malloc1p1
00:12:00.875   16:55:53	-- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name')
00:12:00.875   16:55:53	-- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p0]'
00:12:00.875   16:55:53	-- bdev/blockdev.sh@356 -- # echo filename=Malloc2p0
00:12:00.875   16:55:53	-- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name')
00:12:00.875   16:55:53	-- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p1]'
00:12:00.875   16:55:53	-- bdev/blockdev.sh@356 -- # echo filename=Malloc2p1
00:12:00.875   16:55:53	-- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name')
00:12:00.875   16:55:53	-- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p2]'
00:12:00.875   16:55:53	-- bdev/blockdev.sh@356 -- # echo filename=Malloc2p2
00:12:00.875   16:55:53	-- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name')
00:12:00.875   16:55:53	-- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p3]'
00:12:00.875   16:55:53	-- bdev/blockdev.sh@356 -- # echo filename=Malloc2p3
00:12:00.875   16:55:53	-- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name')
00:12:00.875   16:55:53	-- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p4]'
00:12:00.875   16:55:53	-- bdev/blockdev.sh@356 -- # echo filename=Malloc2p4
00:12:00.875   16:55:53	-- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name')
00:12:00.875   16:55:53	-- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p5]'
00:12:00.875   16:55:53	-- bdev/blockdev.sh@356 -- # echo filename=Malloc2p5
00:12:00.875   16:55:53	-- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name')
00:12:00.875   16:55:53	-- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p6]'
00:12:00.875   16:55:53	-- bdev/blockdev.sh@356 -- # echo filename=Malloc2p6
00:12:00.875   16:55:53	-- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name')
00:12:00.875   16:55:53	-- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p7]'
00:12:00.875   16:55:53	-- bdev/blockdev.sh@356 -- # echo filename=Malloc2p7
00:12:00.875   16:55:53	-- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name')
00:12:00.875   16:55:53	-- bdev/blockdev.sh@355 -- # echo '[job_TestPT]'
00:12:00.875   16:55:53	-- bdev/blockdev.sh@356 -- # echo filename=TestPT
00:12:00.875   16:55:53	-- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name')
00:12:00.875   16:55:53	-- bdev/blockdev.sh@355 -- # echo '[job_raid0]'
00:12:00.875   16:55:53	-- bdev/blockdev.sh@356 -- # echo filename=raid0
00:12:00.875   16:55:53	-- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name')
00:12:00.875   16:55:53	-- bdev/blockdev.sh@355 -- # echo '[job_concat0]'
00:12:00.875   16:55:53	-- bdev/blockdev.sh@356 -- # echo filename=concat0
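[note] Each pass of the loop traced above appends one job to the fio config: a [job_<name>] section header plus a filename=<name> line, which the spdk_bdev ioengine resolves against the bdev name rather than a device path. A condensed sketch of the generation step (bdevs holds the JSON dump as in this run; $fio_config is a stand-in for the bdev.fio path):

    for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name'); do
        echo "[job_$b]"       # one fio job per trim-capable bdev
        echo "filename=$b"    # bdev name, not a device path
    done >> "$fio_config"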
00:12:00.875   16:55:53	-- bdev/blockdev.sh@365 -- # run_test bdev_fio_trim fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output
00:12:00.875   16:55:53	-- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']'
00:12:00.875   16:55:53	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:12:00.875   16:55:53	-- common/autotest_common.sh@10 -- # set +x
00:12:00.875  ************************************
00:12:00.875  START TEST bdev_fio_trim
00:12:00.875  ************************************
00:12:00.875   16:55:53	-- common/autotest_common.sh@1114 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output
00:12:00.875   16:55:53	-- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output
00:12:00.875   16:55:53	-- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio
00:12:00.875   16:55:53	-- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:12:00.875   16:55:53	-- common/autotest_common.sh@1328 -- # local sanitizers
00:12:00.875   16:55:53	-- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:12:00.875   16:55:53	-- common/autotest_common.sh@1330 -- # shift
00:12:00.875   16:55:53	-- common/autotest_common.sh@1332 -- # local asan_lib=
00:12:00.875   16:55:53	-- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}"
00:12:00.875    16:55:53	-- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:12:00.875    16:55:53	-- common/autotest_common.sh@1334 -- # grep libasan
00:12:00.875    16:55:53	-- common/autotest_common.sh@1334 -- # awk '{print $3}'
00:12:00.875   16:55:53	-- common/autotest_common.sh@1334 -- # asan_lib=/lib/x86_64-linux-gnu/libasan.so.6
00:12:00.875   16:55:53	-- common/autotest_common.sh@1335 -- # [[ -n /lib/x86_64-linux-gnu/libasan.so.6 ]]
00:12:00.875   16:55:53	-- common/autotest_common.sh@1336 -- # break
00:12:00.875   16:55:53	-- common/autotest_common.sh@1341 -- # LD_PRELOAD='/lib/x86_64-linux-gnu/libasan.so.6 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev'
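[note] Lines @1326-@1341 above are the sanitizer probe: ldd the fio plugin, grab the libasan runtime it links against, and preload that runtime ahead of the plugin so ASan's interceptors initialize before fio dlopen()s anything. Reduced to its core, with the paths from this run:

    plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
    asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
    # The sanitizer runtime must come first in LD_PRELOAD, then the ioengine plugin.
    [[ -n "$asan_lib" ]] && export LD_PRELOAD="$asan_lib $plugin"
    /usr/src/fio/fio --ioengine=spdk_bdev "$@"   # remaining fio args pass through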
00:12:00.875   16:55:53	-- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output
00:12:00.875  job_Malloc0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:12:00.875  job_Malloc1p0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:12:00.875  job_Malloc1p1: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:12:00.875  job_Malloc2p0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:12:00.875  job_Malloc2p1: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:12:00.875  job_Malloc2p2: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:12:00.875  job_Malloc2p3: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:12:00.875  job_Malloc2p4: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:12:00.875  job_Malloc2p5: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:12:00.875  job_Malloc2p6: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:12:00.875  job_Malloc2p7: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:12:00.875  job_TestPT: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:12:00.875  job_raid0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:12:00.875  job_concat0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:12:00.875  fio-3.35
00:12:00.875  Starting 14 threads
00:12:13.085  
00:12:13.085  job_Malloc0: (groupid=0, jobs=14): err= 0: pid=120980: Tue Nov 19 16:56:04 2024
00:12:13.085    write: IOPS=116k, BW=451MiB/s (473MB/s)(4516MiB/10005msec); 0 zone resets
00:12:13.085      slat (nsec): min=1935, max=40059k, avg=43961.29, stdev=454232.84
00:12:13.085      clat (usec): min=18, max=40431, avg=302.43, stdev=1189.89
00:12:13.085       lat (usec): min=32, max=40464, avg=346.39, stdev=1273.01
00:12:13.085      clat percentiles (usec):
00:12:13.085       | 50.000th=[  206], 99.000th=[  457], 99.900th=[16319], 99.990th=[24249],
00:12:13.085       | 99.999th=[32113]
00:12:13.085     bw (  KiB/s): min=277889, max=690923, per=100.00%, avg=462761.42, stdev=9424.71, samples=266
00:12:13.085     iops        : min=69472, max=172730, avg=115690.26, stdev=2356.17, samples=266
00:12:13.085    trim: IOPS=116k, BW=451MiB/s (473MB/s)(4516MiB/10005msec); 0 zone resets
00:12:13.085      slat (usec): min=3, max=32033, avg=30.16, stdev=371.67
00:12:13.085      clat (usec): min=3, max=40464, avg=336.47, stdev=1252.25
00:12:13.085       lat (usec): min=12, max=40485, avg=366.63, stdev=1306.17
00:12:13.085      clat percentiles (usec):
00:12:13.085       | 50.000th=[  233], 99.000th=[  506], 99.900th=[16319], 99.990th=[24249],
00:12:13.085       | 99.999th=[32375]
00:12:13.085     bw (  KiB/s): min=277889, max=690971, per=100.00%, avg=462761.42, stdev=9425.27, samples=266
00:12:13.085     iops        : min=69472, max=172742, avg=115690.26, stdev=2356.31, samples=266
00:12:13.085    lat (usec)   : 4=0.01%, 10=0.03%, 20=0.11%, 50=0.78%, 100=4.82%
00:12:13.085    lat (usec)   : 250=56.62%, 500=36.75%, 750=0.24%, 1000=0.02%
00:12:13.085    lat (msec)   : 2=0.02%, 4=0.01%, 10=0.03%, 20=0.54%, 50=0.03%
00:12:13.085    cpu          : usr=69.15%, sys=0.35%, ctx=167157, majf=0, minf=8914
00:12:13.085    IO depths    : 1=12.3%, 2=24.7%, 4=50.1%, 8=12.8%, 16=0.0%, 32=0.0%, >=64=0.0%
00:12:13.085       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:12:13.085       complete  : 0=0.0%, 4=89.0%, 8=11.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:12:13.085       issued rwts: total=0,1156175,1156178,0 short=0,0,0,0 dropped=0,0,0,0
00:12:13.085       latency   : target=0, window=0, percentile=100.00%, depth=8
00:12:13.085  
00:12:13.085  Run status group 0 (all jobs):
00:12:13.086    WRITE: bw=451MiB/s (473MB/s), 451MiB/s-451MiB/s (473MB/s-473MB/s), io=4516MiB (4736MB), run=10005-10005msec
00:12:13.086     TRIM: bw=451MiB/s (473MB/s), 451MiB/s-451MiB/s (473MB/s-473MB/s), io=4516MiB (4736MB), run=10005-10005msec
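[note] Cross-checking the summary: io=4516 MiB over run=10005 msec gives 4516 / 10.005 ≈ 451.4 MiB/s, and 451.4 × 1.048576 ≈ 473 MB/s, so the MiB/s and MB/s figures for both the WRITE and TRIM halves of the trimwrite workload are self-consistent.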
00:12:13.086  -----------------------------------------------------
00:12:13.086  Suppressions used:
00:12:13.086    count      bytes template
00:12:13.086       14        129 /usr/src/fio/parse.c
00:12:13.086        1        904 libcrypto.so
00:12:13.086  -----------------------------------------------------
00:12:13.086  
00:12:13.086  ************************************
00:12:13.086  END TEST bdev_fio_trim
00:12:13.086  ************************************
00:12:13.086  
00:12:13.086  real	0m11.694s
00:12:13.086  user	1m39.560s
00:12:13.086  sys	0m1.190s
00:12:13.086   16:56:05	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:12:13.086   16:56:05	-- common/autotest_common.sh@10 -- # set +x
00:12:13.086   16:56:05	-- bdev/blockdev.sh@366 -- # rm -f
00:12:13.086   16:56:05	-- bdev/blockdev.sh@367 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio
00:12:13.086  /home/vagrant/spdk_repo/spdk
00:12:13.086  ************************************
00:12:13.086  END TEST bdev_fio
00:12:13.086  ************************************
00:12:13.086   16:56:05	-- bdev/blockdev.sh@368 -- # popd
00:12:13.086   16:56:05	-- bdev/blockdev.sh@369 -- # trap - SIGINT SIGTERM EXIT
00:12:13.086  
00:12:13.086  real	0m24.151s
00:12:13.086  user	3m13.416s
00:12:13.086  sys	0m5.717s
00:12:13.086   16:56:05	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:12:13.086   16:56:05	-- common/autotest_common.sh@10 -- # set +x
00:12:13.086   16:56:05	-- bdev/blockdev.sh@773 -- # trap cleanup SIGINT SIGTERM EXIT
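[note] The trap juggling at blockdev.sh@369 and @773 is the suite's cleanup pattern: arm a handler before a risky section, disarm it (trap -) once the section completes, then re-arm for the next test. The idiom in isolation (run_the_test is a hypothetical test body):

    trap cleanup SIGINT SIGTERM EXIT   # arm: cleanup runs on interrupt or exit
    run_the_test
    trap - SIGINT SIGTERM EXIT         # disarm after success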
00:12:13.086   16:56:05	-- bdev/blockdev.sh@775 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''
00:12:13.086   16:56:05	-- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']'
00:12:13.086   16:56:05	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:12:13.086   16:56:05	-- common/autotest_common.sh@10 -- # set +x
00:12:13.086  ************************************
00:12:13.086  START TEST bdev_verify
00:12:13.086  ************************************
00:12:13.086   16:56:05	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''
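[note] For readability of the output that follows, the bdevperf switches used here are: --json loads the bdev configuration file, -q 128 the per-job queue depth, -o 4096 the IO size in bytes, -w verify a write-read-compare integrity workload, -t 5 the runtime in seconds, and -m 0x3 a two-core mask (cores 0 and 1). Judging by the per-core job rows in the results table below, -C fans each bdev out to a job on every core in the mask; that reading is inferred from this output, not quoted from bdevperf's help text.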
00:12:13.086  [2024-11-19 16:56:05.343033] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:12:13.086  [2024-11-19 16:56:05.343500] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid121155 ]
00:12:13.086  [2024-11-19 16:56:05.495019] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2
00:12:13.086  [2024-11-19 16:56:05.551126] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:12:13.086  [2024-11-19 16:56:05.551127] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:12:13.086  [2024-11-19 16:56:05.684351] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1
00:12:13.086  [2024-11-19 16:56:05.684722] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1
00:12:13.086  [2024-11-19 16:56:05.692273] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2
00:12:13.086  [2024-11-19 16:56:05.692511] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2
00:12:13.086  [2024-11-19 16:56:05.700360] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3
00:12:13.086  [2024-11-19 16:56:05.700603] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3
00:12:13.086  [2024-11-19 16:56:05.700769] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:12:13.086  [2024-11-19 16:56:05.812347] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3
00:12:13.086  [2024-11-19 16:56:05.812709] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:12:13.086  [2024-11-19 16:56:05.812871] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80
00:12:13.086  [2024-11-19 16:56:05.813157] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:12:13.086  [2024-11-19 16:56:05.817127] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:12:13.086  [2024-11-19 16:56:05.817369] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT
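[note] The notice sequence above is the passthru module's deferred-construction path: creation is first deferred because the base bdev does not exist yet, then the examine callback matches the arriving Malloc3, claims it, and registers TestPT on top. Building the same stack by hand looks roughly like this (RPC names per the in-tree rpc.py; the size arguments are illustrative):

    ./scripts/rpc.py bdev_malloc_create -b Malloc3 32 512       # 32 MB base bdev, 512-byte blocks
    ./scripts/rpc.py bdev_passthru_create -b Malloc3 -p TestPT  # passthru vbdev claims Malloc3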
00:12:13.345  Running I/O for 5 seconds...
00:12:18.720  
00:12:18.720                                                                                                  Latency(us)
00:12:18.720  
[2024-11-19T16:56:11.584Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:12:18.720  
[2024-11-19T16:56:11.584Z]  Job: Malloc0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:12:18.720  	 Verification LBA range: start 0x0 length 0x1000
00:12:18.720  	 Malloc0             :       5.17    1375.05       5.37       0.00     0.00   92203.52    1849.05  202724.69
00:12:18.720  
[2024-11-19T16:56:11.584Z]  Job: Malloc0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:12:18.720  	 Verification LBA range: start 0x1000 length 0x1000
00:12:18.720  	 Malloc0             :       5.19    1523.71       5.95       0.00     0.00   83687.74    1895.86  267636.54
00:12:18.720  
[2024-11-19T16:56:11.584Z]  Job: Malloc1p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:12:18.720  	 Verification LBA range: start 0x0 length 0x800
00:12:18.720  	 Malloc1p0           :       5.18     964.07       3.77       0.00     0.00  131450.26    3651.29  143804.71
00:12:18.720  
[2024-11-19T16:56:11.584Z]  Job: Malloc1p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:12:18.720  	 Verification LBA range: start 0x800 length 0x800
00:12:18.720  	 Malloc1p0           :       5.19    1077.49       4.21       0.00     0.00  118170.18    3682.50  126328.44
00:12:18.720  
[2024-11-19T16:56:11.584Z]  Job: Malloc1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:12:18.720  	 Verification LBA range: start 0x0 length 0x800
00:12:18.720  	 Malloc1p1           :       5.18     963.77       3.76       0.00     0.00  131292.37    3666.90  137812.85
00:12:18.720  
[2024-11-19T16:56:11.585Z]  Job: Malloc1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:12:18.721  	 Verification LBA range: start 0x800 length 0x800
00:12:18.721  	 Malloc1p1           :       5.19    1076.85       4.21       0.00     0.00  118021.62    3666.90  121834.54
00:12:18.721  
[2024-11-19T16:56:11.585Z]  Job: Malloc2p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:12:18.721  	 Verification LBA range: start 0x0 length 0x200
00:12:18.721  	 Malloc2p0           :       5.18     963.46       3.76       0.00     0.00  131149.87    4213.03  131820.98
00:12:18.721  
[2024-11-19T16:56:11.585Z]  Job: Malloc2p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:12:18.721  	 Verification LBA range: start 0x200 length 0x200
00:12:18.721  	 Malloc2p0           :       5.20    1076.41       4.20       0.00     0.00  117877.92    4181.82  117340.65
00:12:18.721  
[2024-11-19T16:56:11.585Z]  Job: Malloc2p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:12:18.721  	 Verification LBA range: start 0x0 length 0x200
00:12:18.721  	 Malloc2p1           :       5.18     963.16       3.76       0.00     0.00  130989.44    3651.29  127327.09
00:12:18.721  
[2024-11-19T16:56:11.585Z]  Job: Malloc2p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:12:18.721  	 Verification LBA range: start 0x200 length 0x200
00:12:18.721  	 Malloc2p1           :       5.20    1076.13       4.20       0.00     0.00  117713.22    3729.31  112347.43
00:12:18.721  
[2024-11-19T16:56:11.585Z]  Job: Malloc2p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:12:18.721  	 Verification LBA range: start 0x0 length 0x200
00:12:18.721  	 Malloc2p2           :       5.18     962.88       3.76       0.00     0.00  130860.73    3339.22  124331.15
00:12:18.721  
[2024-11-19T16:56:11.585Z]  Job: Malloc2p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:12:18.721  	 Verification LBA range: start 0x200 length 0x200
00:12:18.721  	 Malloc2p2           :       5.20    1075.86       4.20       0.00     0.00  117561.19    3308.01  109351.50
00:12:18.721  
[2024-11-19T16:56:11.585Z]  Job: Malloc2p3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:12:18.721  	 Verification LBA range: start 0x0 length 0x200
00:12:18.721  	 Malloc2p3           :       5.18     962.58       3.76       0.00     0.00  130718.90    3292.40  122333.87
00:12:18.721  
[2024-11-19T16:56:11.585Z]  Job: Malloc2p3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:12:18.721  	 Verification LBA range: start 0x200 length 0x200
00:12:18.721  	 Malloc2p3           :       5.20    1075.59       4.20       0.00     0.00  117429.51    3339.22  106355.57
00:12:18.721  
[2024-11-19T16:56:11.585Z]  Job: Malloc2p4 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:12:18.721  	 Verification LBA range: start 0x0 length 0x200
00:12:18.721  	 Malloc2p4           :       5.19     962.30       3.76       0.00     0.00  130630.62    3386.03  120835.90
00:12:18.721  
[2024-11-19T16:56:11.585Z]  Job: Malloc2p4 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:12:18.721  	 Verification LBA range: start 0x200 length 0x200
00:12:18.721  	 Malloc2p4           :       5.20    1075.34       4.20       0.00     0.00  117310.95    3354.82  102860.31
00:12:18.721  
[2024-11-19T16:56:11.585Z]  Job: Malloc2p5 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:12:18.721  	 Verification LBA range: start 0x0 length 0x200
00:12:18.721  	 Malloc2p5           :       5.19     962.06       3.76       0.00     0.00  130495.48    3151.97  121335.22
00:12:18.721  
[2024-11-19T16:56:11.585Z]  Job: Malloc2p5 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:12:18.721  	 Verification LBA range: start 0x200 length 0x200
00:12:18.721  	 Malloc2p5           :       5.20    1075.10       4.20       0.00     0.00  117170.52    3105.16   99864.38
00:12:18.721  
[2024-11-19T16:56:11.585Z]  Job: Malloc2p6 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:12:18.721  	 Verification LBA range: start 0x0 length 0x200
00:12:18.721  	 Malloc2p6           :       5.19     961.76       3.76       0.00     0.00  130365.67    3292.40  124830.48
00:12:18.721  
[2024-11-19T16:56:11.585Z]  Job: Malloc2p6 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:12:18.721  	 Verification LBA range: start 0x200 length 0x200
00:12:18.721  	 Malloc2p6           :       5.20    1074.78       4.20       0.00     0.00  117029.11    3276.80   98865.74
00:12:18.721  
[2024-11-19T16:56:11.585Z]  Job: Malloc2p7 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:12:18.721  	 Verification LBA range: start 0x0 length 0x200
00:12:18.721  	 Malloc2p7           :       5.19     961.18       3.75       0.00     0.00  130268.14    3276.80  130822.34
00:12:18.721  
[2024-11-19T16:56:11.585Z]  Job: Malloc2p7 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:12:18.721  	 Verification LBA range: start 0x200 length 0x200
00:12:18.721  	 Malloc2p7           :       5.21    1073.69       4.19       0.00     0.00  116950.74    3229.99   98865.74
00:12:18.721  
[2024-11-19T16:56:11.585Z]  Job: TestPT (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:12:18.721  	 Verification LBA range: start 0x0 length 0x1000
00:12:18.721  	 TestPT              :       5.20     961.80       3.76       0.00     0.00  130700.89    7365.00  136814.20
00:12:18.721  
[2024-11-19T16:56:11.585Z]  Job: TestPT (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:12:18.721  	 Verification LBA range: start 0x1000 length 0x1000
00:12:18.721  	 TestPT              :       5.21    1040.72       4.07       0.00     0.00  120367.28   10173.68  154789.79
00:12:18.721  
[2024-11-19T16:56:11.585Z]  Job: raid0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:12:18.721  	 Verification LBA range: start 0x0 length 0x2000
00:12:18.721  	 raid0               :       5.21     975.20       3.81       0.00     0.00  128601.27    3510.86  141807.42
00:12:18.721  
[2024-11-19T16:56:11.585Z]  Job: raid0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:12:18.721  	 Verification LBA range: start 0x2000 length 0x2000
00:12:18.721  	 raid0               :       5.22    1071.63       4.19       0.00     0.00  116651.79    3510.86   97867.09
00:12:18.721  
[2024-11-19T16:56:11.585Z]  Job: concat0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:12:18.721  	 Verification LBA range: start 0x0 length 0x2000
00:12:18.721  	 concat0             :       5.21     974.15       3.81       0.00     0.00  128504.03    3448.44  147799.28
00:12:18.721  
[2024-11-19T16:56:11.585Z]  Job: concat0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:12:18.721  	 Verification LBA range: start 0x2000 length 0x2000
00:12:18.721  	 concat0             :       5.22    1070.79       4.18       0.00     0.00  116538.67    3432.84   97867.09
00:12:18.721  
[2024-11-19T16:56:11.585Z]  Job: raid1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:12:18.721  	 Verification LBA range: start 0x0 length 0x1000
00:12:18.721  	 raid1               :       5.22     973.30       3.80       0.00     0.00  128382.60    3791.73  152792.50
00:12:18.721  
[2024-11-19T16:56:11.585Z]  Job: raid1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:12:18.721  	 Verification LBA range: start 0x1000 length 0x1000
00:12:18.721  	 raid1               :       5.23    1069.89       4.18       0.00     0.00  116439.61    3776.12   97367.77
00:12:18.721  
[2024-11-19T16:56:11.585Z]  Job: AIO0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:12:18.721  	 Verification LBA range: start 0x0 length 0x4e2
00:12:18.721  	 AIO0                :       5.22     972.36       3.80       0.00     0.00  128121.71    7115.34  152792.50
00:12:18.721  
[2024-11-19T16:56:11.585Z]  Job: AIO0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:12:18.721  	 Verification LBA range: start 0x4e2 length 0x4e2
00:12:18.721  	 AIO0                :       5.23    1069.52       4.18       0.00     0.00  116167.55    7115.34  101861.67
00:12:18.721  
[2024-11-19T16:56:11.585Z]  ===================================================================================================================
00:12:18.721  
[2024-11-19T16:56:11.585Z]  Total                       :              33462.59     130.71       0.00     0.00  120365.46    1849.05  267636.54
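[note] The Total row is internally consistent: 33462.59 IOPS at the 4096-byte IO size is 33462.59 × 4096 ≈ 137.06 MB/s ≈ 130.71 MiB/s, matching the MiB/s column, and the min/max latencies (1849.05 us, 267636.54 us) are the extremes of the per-job rows above.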
00:12:19.290  
00:12:19.290  real	0m6.722s
00:12:19.290  user	0m11.589s
00:12:19.290  sys	0m0.462s
00:12:19.290   16:56:11	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:12:19.290   16:56:11	-- common/autotest_common.sh@10 -- # set +x
00:12:19.290  ************************************
00:12:19.290  END TEST bdev_verify
00:12:19.290  ************************************
00:12:19.290   16:56:12	-- bdev/blockdev.sh@776 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:12:19.290   16:56:12	-- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']'
00:12:19.290   16:56:12	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:12:19.290   16:56:12	-- common/autotest_common.sh@10 -- # set +x
00:12:19.290  ************************************
00:12:19.290  START TEST bdev_verify_big_io
00:12:19.290  ************************************
00:12:19.291   16:56:12	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:12:19.291  [2024-11-19 16:56:12.116464] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:12:19.291  [2024-11-19 16:56:12.116626] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid121265 ]
00:12:19.550  [2024-11-19 16:56:12.260964] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2
00:12:19.550  [2024-11-19 16:56:12.333629] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:12:19.550  [2024-11-19 16:56:12.333643] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:12:19.809  [2024-11-19 16:56:12.512430] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1
00:12:19.809  [2024-11-19 16:56:12.512572] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1
00:12:19.809  [2024-11-19 16:56:12.520338] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2
00:12:19.809  [2024-11-19 16:56:12.520421] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2
00:12:19.809  [2024-11-19 16:56:12.528451] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3
00:12:19.809  [2024-11-19 16:56:12.528531] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3
00:12:19.809  [2024-11-19 16:56:12.528592] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:12:19.809  [2024-11-19 16:56:12.645261] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3
00:12:19.809  [2024-11-19 16:56:12.645447] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:12:19.809  [2024-11-19 16:56:12.645536] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80
00:12:19.809  [2024-11-19 16:56:12.645588] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:12:19.809  [2024-11-19 16:56:12.648833] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:12:19.809  [2024-11-19 16:56:12.648897] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT
00:12:20.069  [2024-11-19 16:56:12.863875] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p0 simultaneously (32). Queue depth is limited to 32
00:12:20.069  [2024-11-19 16:56:12.865227] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p0 simultaneously (32). Queue depth is limited to 32
00:12:20.069  [2024-11-19 16:56:12.867388] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p1 simultaneously (32). Queue depth is limited to 32
00:12:20.070  [2024-11-19 16:56:12.869485] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p1 simultaneously (32). Queue depth is limited to 32
00:12:20.070  [2024-11-19 16:56:12.870769] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p2 simultaneously (32). Queue depth is limited to 32
00:12:20.070  [2024-11-19 16:56:12.872952] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p2 simultaneously (32). Queue depth is limited to 32
00:12:20.070  [2024-11-19 16:56:12.874276] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p3 simultaneously (32). Queue depth is limited to 32
00:12:20.070  [2024-11-19 16:56:12.876327] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p3 simultaneously (32). Queue depth is limited to 32
00:12:20.070  [2024-11-19 16:56:12.877734] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p4 simultaneously (32). Queue depth is limited to 32
00:12:20.070  [2024-11-19 16:56:12.879836] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p4 simultaneously (32). Queue depth is limited to 32
00:12:20.070  [2024-11-19 16:56:12.881138] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p5 simultaneously (32). Queue depth is limited to 32
00:12:20.070  [2024-11-19 16:56:12.883299] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p5 simultaneously (32). Queue depth is limited to 32
00:12:20.070  [2024-11-19 16:56:12.884587] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p6 simultaneously (32). Queue depth is limited to 32
00:12:20.070  [2024-11-19 16:56:12.886700] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p6 simultaneously (32). Queue depth is limited to 32
00:12:20.070  [2024-11-19 16:56:12.888861] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p7 simultaneously (32). Queue depth is limited to 32
00:12:20.070  [2024-11-19 16:56:12.890164] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p7 simultaneously (32). Queue depth is limited to 32
00:12:20.070  [2024-11-19 16:56:12.927511] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev AIO0 simultaneously (78). Queue depth is limited to 78
00:12:20.329  [2024-11-19 16:56:12.930642] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev AIO0 simultaneously (78). Queue depth is limited to 78
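[note] The clamps in the warnings above follow from bdev capacity at this IO size: each Malloc2p* split is 8192 blocks × 512 B = 4 MiB, i.e. 64 non-overlapping 65536-byte IOs, and AIO0 is 5000 blocks × 2048 B = 10,240,000 B, i.e. 156 of them; the reported limits of 32 and 78 are exactly half those counts. The factor of two is inferred from this pair of warnings, not read out of bdevperf's source.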
00:12:20.329  Running I/O for 5 seconds...
00:12:26.902  
00:12:26.902                                                                                                  Latency(us)
00:12:26.902  
[2024-11-19T16:56:19.766Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:12:26.902  
[2024-11-19T16:56:19.766Z]  Job: Malloc0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:12:26.902  	 Verification LBA range: start 0x0 length 0x100
00:12:26.902  	 Malloc0             :       5.86     207.91      12.99       0.00     0.00  588815.12   38947.11 1677721.60
00:12:26.902  
[2024-11-19T16:56:19.766Z]  Job: Malloc0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:12:26.902  	 Verification LBA range: start 0x100 length 0x100
00:12:26.902  	 Malloc0             :       5.95     204.79      12.80       0.00     0.00  604587.82   41693.38 1829515.46
00:12:26.902  
[2024-11-19T16:56:19.766Z]  Job: Malloc1p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:12:26.902  	 Verification LBA range: start 0x0 length 0x80
00:12:26.902  	 Malloc1p0           :       5.94     175.14      10.95       0.00     0.00  683431.03   62165.58 1493971.14
00:12:26.902  
[2024-11-19T16:56:19.766Z]  Job: Malloc1p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:12:26.902  	 Verification LBA range: start 0x80 length 0x80
00:12:26.902  	 Malloc1p0           :       6.12     122.68       7.67       0.00     0.00  977242.78   69905.07 1637775.85
00:12:26.902  
[2024-11-19T16:56:19.766Z]  Job: Malloc1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:12:26.902  	 Verification LBA range: start 0x0 length 0x80
00:12:26.902  	 Malloc1p1           :       6.15      87.28       5.45       0.00     0.00 1342917.43   64911.85 2716311.16
00:12:26.902  
[2024-11-19T16:56:19.766Z]  Job: Malloc1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:12:26.902  	 Verification LBA range: start 0x80 length 0x80
00:12:26.902  	 Malloc1p1           :       6.30      85.25       5.33       0.00     0.00 1361873.99   72901.00 2748267.76
00:12:26.902  
[2024-11-19T16:56:19.766Z]  Job: Malloc2p0 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536)
00:12:26.902  	 Verification LBA range: start 0x0 length 0x20
00:12:26.902  	 Malloc2p0           :       5.95      48.27       3.02       0.00     0.00  610813.35   11858.90  970681.78
00:12:26.902  
[2024-11-19T16:56:19.766Z]  Job: Malloc2p0 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536)
00:12:26.902  	 Verification LBA range: start 0x20 length 0x20
00:12:26.902  	 Malloc2p0           :       5.95      44.51       2.78       0.00     0.00  646593.96   12670.29 1030600.41
00:12:26.902  
[2024-11-19T16:56:19.766Z]  Job: Malloc2p1 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536)
00:12:26.902  	 Verification LBA range: start 0x0 length 0x20
00:12:26.902  	 Malloc2p1           :       5.95      48.26       3.02       0.00     0.00  606901.64   11796.48  946714.33
00:12:26.902  
[2024-11-19T16:56:19.766Z]  Job: Malloc2p1 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536)
00:12:26.902  	 Verification LBA range: start 0x20 length 0x20
00:12:26.902  	 Malloc2p1           :       5.96      44.48       2.78       0.00     0.00  641674.81   11609.23 1002638.38
00:12:26.902  
[2024-11-19T16:56:19.766Z]  Job: Malloc2p2 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536)
00:12:26.902  	 Verification LBA range: start 0x0 length 0x20
00:12:26.902  	 Malloc2p2           :       5.95      48.25       3.02       0.00     0.00  603027.11   10735.42  922746.88
00:12:26.902  
[2024-11-19T16:56:19.766Z]  Job: Malloc2p2 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536)
00:12:26.902  	 Verification LBA range: start 0x20 length 0x20
00:12:26.902  	 Malloc2p2           :       5.96      44.44       2.78       0.00     0.00  637211.42   10673.01  974676.36
00:12:26.902  
[2024-11-19T16:56:19.766Z]  Job: Malloc2p3 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536)
00:12:26.902  	 Verification LBA range: start 0x0 length 0x20
00:12:26.902  	 Malloc2p3           :       5.95      48.22       3.01       0.00     0.00  599370.27   10985.08  894784.85
00:12:26.902  
[2024-11-19T16:56:19.766Z]  Job: Malloc2p3 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536)
00:12:26.902  	 Verification LBA range: start 0x20 length 0x20
00:12:26.902  	 Malloc2p3           :       5.97      44.41       2.78       0.00     0.00  632788.65   14293.09  946714.33
00:12:26.902  
[2024-11-19T16:56:19.766Z]  Job: Malloc2p4 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536)
00:12:26.902  	 Verification LBA range: start 0x0 length 0x20
00:12:26.902  	 Malloc2p4           :       5.95      48.20       3.01       0.00     0.00  595793.25   10173.68  874811.98
00:12:26.902  
[2024-11-19T16:56:19.766Z]  Job: Malloc2p4 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536)
00:12:26.902  	 Verification LBA range: start 0x20 length 0x20
00:12:26.902  	 Malloc2p4           :       6.05      47.47       2.97       0.00     0.00  597236.95   13294.45  922746.88
00:12:26.902  
[2024-11-19T16:56:19.766Z]  Job: Malloc2p5 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536)
00:12:26.902  	 Verification LBA range: start 0x0 length 0x20
00:12:26.902  	 Malloc2p5           :       5.96      48.17       3.01       0.00     0.00  592308.65   11546.82  850844.53
00:12:26.902  
[2024-11-19T16:56:19.766Z]  Job: Malloc2p5 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536)
00:12:26.902  	 Verification LBA range: start 0x20 length 0x20
00:12:26.902  	 Malloc2p5           :       6.05      47.46       2.97       0.00     0.00  593070.24   15728.64  894784.85
00:12:26.902  
[2024-11-19T16:56:19.766Z]  Job: Malloc2p6 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536)
00:12:26.902  	 Verification LBA range: start 0x0 length 0x20
00:12:26.902  	 Malloc2p6           :       5.96      48.13       3.01       0.00     0.00  588944.53   11047.50  830871.65
00:12:26.902  
[2024-11-19T16:56:19.766Z]  Job: Malloc2p6 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536)
00:12:26.902  	 Verification LBA range: start 0x20 length 0x20
00:12:26.902  	 Malloc2p6           :       6.05      47.45       2.97       0.00     0.00  588232.74   12670.29  862828.25
00:12:26.902  
[2024-11-19T16:56:19.766Z]  Job: Malloc2p7 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536)
00:12:26.902  	 Verification LBA range: start 0x0 length 0x20
00:12:26.902  	 Malloc2p7           :       5.97      48.09       3.01       0.00     0.00  585463.20    9924.02  810898.77
00:12:26.902  
[2024-11-19T16:56:19.766Z]  Job: Malloc2p7 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536)
00:12:26.902  	 Verification LBA range: start 0x20 length 0x20
00:12:26.902  	 Malloc2p7           :       6.05      47.45       2.97       0.00     0.00  583850.51   13481.69  834866.22
00:12:26.902  
[2024-11-19T16:56:19.766Z]  Job: TestPT (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:12:26.902  	 Verification LBA range: start 0x0 length 0x100
00:12:26.902  	 TestPT              :       6.23      87.05       5.44       0.00     0.00 1257628.82   81888.79 2620441.36
00:12:26.902  
[2024-11-19T16:56:19.766Z]  Job: TestPT (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:12:26.902  	 Verification LBA range: start 0x100 length 0x100
00:12:26.902  	 TestPT              :       6.20      86.56       5.41       0.00     0.00 1247454.89   71902.35 2716311.16
00:12:26.902  
[2024-11-19T16:56:19.766Z]  Job: raid0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:12:26.902  	 Verification LBA range: start 0x0 length 0x200
00:12:26.902  	 raid0               :       6.23      91.37       5.71       0.00     0.00 1179198.48   63164.22 2700332.86
00:12:26.902  
[2024-11-19T16:56:19.766Z]  Job: raid0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:12:26.902  	 Verification LBA range: start 0x200 length 0x200
00:12:26.902  	 raid0               :       6.34      94.54       5.91       0.00     0.00 1124679.04   54426.09 2796202.67
00:12:26.902  
[2024-11-19T16:56:19.766Z]  Job: concat0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:12:26.902  	 Verification LBA range: start 0x0 length 0x200
00:12:26.902  	 concat0             :       6.16     102.79       6.42       0.00     0.00 1039302.65   39196.77 2700332.86
00:12:26.902  
[2024-11-19T16:56:19.766Z]  Job: concat0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:12:26.902  	 Verification LBA range: start 0x200 length 0x200
00:12:26.902  	 concat0             :       6.32     105.73       6.61       0.00     0.00  990231.77   48184.56 2764246.06
00:12:26.902  
[2024-11-19T16:56:19.766Z]  Job: raid1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:12:26.902  	 Verification LBA range: start 0x0 length 0x100
00:12:26.902  	 raid1               :       6.27     110.92       6.93       0.00     0.00  938729.76   29709.65 2684354.56
00:12:26.902  
[2024-11-19T16:56:19.766Z]  Job: raid1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:12:26.902  	 Verification LBA range: start 0x100 length 0x100
00:12:26.902  	 raid1               :       6.32     129.91       8.12       0.00     0.00  787276.58   13419.28 2748267.76
00:12:26.902  
[2024-11-19T16:56:19.766Z]  Job: AIO0 (Core Mask 0x1, workload: verify, depth: 78, IO size: 65536)
00:12:26.902  	 Verification LBA range: start 0x0 length 0x4e
00:12:26.902  	 AIO0                :       6.26     121.34       7.58       0.00     0.00  514185.23    1646.20 1398101.33
00:12:26.902  
[2024-11-19T16:56:19.766Z]  Job: AIO0 (Core Mask 0x2, workload: verify, depth: 78, IO size: 65536)
00:12:26.902  	 Verification LBA range: start 0x4e length 0x4e
00:12:26.902  	 AIO0                :       6.37     152.63       9.54       0.00     0.00  399355.53    1521.37 1446036.24
00:12:26.902  
[2024-11-19T16:56:19.766Z]  ===================================================================================================================
00:12:26.902  
[2024-11-19T16:56:19.766Z]  Total                       :               2719.15     169.95       0.00     0.00  793434.75    1521.37 2796202.67
00:12:27.471  
00:12:27.471  real	0m8.007s
00:12:27.471  user	0m14.530s
00:12:27.471  sys	0m0.629s
00:12:27.471  ************************************
00:12:27.471  END TEST bdev_verify_big_io
00:12:27.471  ************************************
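The MiB/s column in the table above follows directly from the IOPS column at the 65536-byte I/O size these verify jobs use: MiB/s = IOPS x io_size / 1048576. A quick illustrative check against the Malloc0 row (not part of the test itself):

  awk 'BEGIN { iops = 207.91; io_size = 65536;
               printf "%.2f MiB/s\n", iops * io_size / 1048576 }'   # prints 12.99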
00:12:27.471   16:56:20	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:12:27.471   16:56:20	-- common/autotest_common.sh@10 -- # set +x
00:12:27.471   16:56:20	-- bdev/blockdev.sh@777 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:12:27.471   16:56:20	-- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']'
00:12:27.471   16:56:20	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:12:27.471   16:56:20	-- common/autotest_common.sh@10 -- # set +x
00:12:27.471  ************************************
00:12:27.471  START TEST bdev_write_zeroes
00:12:27.471  ************************************
00:12:27.471   16:56:20	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:12:27.471  [2024-11-19 16:56:20.212653] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:12:27.471  [2024-11-19 16:56:20.212947] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid121386 ]
00:12:27.730  [2024-11-19 16:56:20.368678] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:27.730  [2024-11-19 16:56:20.444920] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:12:27.990  [2024-11-19 16:56:20.624549] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1
00:12:27.990  [2024-11-19 16:56:20.624665] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1
00:12:27.990  [2024-11-19 16:56:20.632445] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2
00:12:27.990  [2024-11-19 16:56:20.632517] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2
00:12:27.990  [2024-11-19 16:56:20.640522] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3
00:12:27.990  [2024-11-19 16:56:20.640588] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3
00:12:27.990  [2024-11-19 16:56:20.640661] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:12:27.990  [2024-11-19 16:56:20.755017] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3
00:12:27.990  [2024-11-19 16:56:20.755145] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:12:27.990  [2024-11-19 16:56:20.755220] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80
00:12:27.990  [2024-11-19 16:56:20.755278] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:12:27.990  [2024-11-19 16:56:20.758221] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:12:27.990  [2024-11-19 16:56:20.758286] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT
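The notices above trace the passthru vbdev flow: registration of TestPT on Malloc3 is deferred until the base bdev finishes examine, then the open/claim/register steps run. A hedged sketch of how such a vbdev is typically created with SPDK's rpc.py (flag names taken from the passthru bdev documentation; shown only as an illustration):

  ./scripts/rpc.py bdev_passthru_create -b Malloc3 -p TestPT
  # If Malloc3 is not present yet, creation is deferred ("vbdev creation
  # deferred pending base bdev arrival") and completes once Malloc3 arrives.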
00:12:28.249  Running I/O for 1 second...
00:12:29.629  
00:12:29.629                                                                                                  Latency(us)
00:12:29.629  
[2024-11-19T16:56:22.493Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:12:29.629  
[2024-11-19T16:56:22.493Z]  Job: Malloc0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:12:29.629  	 Malloc0             :       1.03    6113.97      23.88       0.00     0.00   20923.20     663.16   36450.50
00:12:29.629  
[2024-11-19T16:56:22.493Z]  Job: Malloc1p0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:12:29.629  	 Malloc1p0           :       1.03    6107.24      23.86       0.00     0.00   20909.34     881.62   35701.52
00:12:29.629  
[2024-11-19T16:56:22.493Z]  Job: Malloc1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:12:29.629  	 Malloc1p1           :       1.03    6101.03      23.83       0.00     0.00   20893.22     834.80   34952.53
00:12:29.629  
[2024-11-19T16:56:22.493Z]  Job: Malloc2p0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:12:29.629  	 Malloc2p0           :       1.03    6094.92      23.81       0.00     0.00   20879.01     850.41   33953.89
00:12:29.629  
[2024-11-19T16:56:22.493Z]  Job: Malloc2p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:12:29.629  	 Malloc2p1           :       1.03    6088.51      23.78       0.00     0.00   20860.00     842.61   33204.91
00:12:29.629  
[2024-11-19T16:56:22.493Z]  Job: Malloc2p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:12:29.629  	 Malloc2p2           :       1.03    6082.27      23.76       0.00     0.00   20842.63     834.80   32455.92
00:12:29.629  
[2024-11-19T16:56:22.493Z]  Job: Malloc2p3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:12:29.629  	 Malloc2p3           :       1.05    6114.26      23.88       0.00     0.00   20695.93     830.90   31706.94
00:12:29.629  
[2024-11-19T16:56:22.493Z]  Job: Malloc2p4 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:12:29.629  	 Malloc2p4           :       1.05    6107.98      23.86       0.00     0.00   20679.58     854.31   30833.13
00:12:29.629  
[2024-11-19T16:56:22.493Z]  Job: Malloc2p5 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:12:29.629  	 Malloc2p5           :       1.05    6101.99      23.84       0.00     0.00   20660.88     838.70   30084.14
00:12:29.629  
[2024-11-19T16:56:22.493Z]  Job: Malloc2p6 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:12:29.629  	 Malloc2p6           :       1.05    6095.97      23.81       0.00     0.00   20640.76     850.41   29335.16
00:12:29.629  
[2024-11-19T16:56:22.493Z]  Job: Malloc2p7 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:12:29.629  	 Malloc2p7           :       1.05    6090.01      23.79       0.00     0.00   20610.97     838.70   28461.35
00:12:29.629  
[2024-11-19T16:56:22.493Z]  Job: TestPT (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:12:29.629  	 TestPT              :       1.05    6083.91      23.77       0.00     0.00   20595.78     881.62   27712.37
00:12:29.629  
[2024-11-19T16:56:22.493Z]  Job: raid0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:12:29.629  	 raid0               :       1.05    6077.08      23.74       0.00     0.00   20582.62    1240.50   26464.06
00:12:29.629  
[2024-11-19T16:56:22.493Z]  Job: concat0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:12:29.629  	 concat0             :       1.05    6070.32      23.71       0.00     0.00   20546.49    1263.91   25215.76
00:12:29.629  
[2024-11-19T16:56:22.493Z]  Job: raid1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:12:29.629  	 raid1               :       1.06    6061.80      23.68       0.00     0.00   20501.16    2075.31   23093.64
00:12:29.629  
[2024-11-19T16:56:22.493Z]  Job: AIO0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:12:29.629  	 AIO0                :       1.06    6037.63      23.58       0.00     0.00   20496.68    1419.95   21970.16
00:12:29.629  
[2024-11-19T16:56:22.493Z]  ===================================================================================================================
00:12:29.629  
[2024-11-19T16:56:22.493Z]  Total                       :              97428.89     380.58       0.00     0.00   20706.09     663.16   36450.50
00:12:30.198  
00:12:30.198  real	0m2.604s
00:12:30.198  user	0m1.859s
00:12:30.198  sys	0m0.556s
00:12:30.198  ************************************
00:12:30.198  END TEST bdev_write_zeroes
00:12:30.198  ************************************
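For reference, the bdevperf invocation traced above breaks down as follows (a commented restatement of the visible command line, with the standard meanings of bdevperf's flags):

  build/examples/bdevperf \
      --json test/bdev/bdev.json \   # bdev configuration to load
      -q 128 \                       # queue depth per job
      -o 4096 \                      # I/O size in bytes (4 KiB)
      -w write_zeroes \              # workload type
      -t 1                           # run time in seconds

The totals are self-consistent: 97428.89 IOPS x 4096 bytes / 1048576 = 380.58 MiB/s, matching the Total row.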
00:12:30.198   16:56:22	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:12:30.198   16:56:22	-- common/autotest_common.sh@10 -- # set +x
00:12:30.198   16:56:22	-- bdev/blockdev.sh@780 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:12:30.198   16:56:22	-- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']'
00:12:30.198   16:56:22	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:12:30.198   16:56:22	-- common/autotest_common.sh@10 -- # set +x
00:12:30.198  ************************************
00:12:30.198  START TEST bdev_json_nonenclosed
00:12:30.198  ************************************
00:12:30.198   16:56:22	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:12:30.198  [2024-11-19 16:56:22.893649] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:12:30.198  [2024-11-19 16:56:22.893905] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid121436 ]
00:12:30.198  [2024-11-19 16:56:23.048455] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:30.456  [2024-11-19 16:56:23.123944] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:12:30.456  [2024-11-19 16:56:23.124250] json_config.c: 595:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: not enclosed in {}.
00:12:30.456  [2024-11-19 16:56:23.124299] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:12:30.715  
00:12:30.715  real	0m0.500s
00:12:30.715  user	0m0.263s
00:12:30.715  sys	0m0.137s
00:12:30.715  ************************************
00:12:30.715  END TEST bdev_json_nonenclosed
00:12:30.715  ************************************
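This negative test feeds bdevperf a JSON config whose top level is not a JSON object. The actual contents of test/bdev/nonenclosed.json are not shown in this log; an illustrative reproduction of the failure mode would be:

  cat > /tmp/nonenclosed.json <<'EOF'
  "subsystems": []
  EOF
  # spdk_subsystem_init_from_json_config rejects this with:
  #   Invalid JSON configuration: not enclosed in {}.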
00:12:30.715   16:56:23	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:12:30.715   16:56:23	-- common/autotest_common.sh@10 -- # set +x
00:12:30.715   16:56:23	-- bdev/blockdev.sh@783 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:12:30.715   16:56:23	-- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']'
00:12:30.715   16:56:23	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:12:30.715   16:56:23	-- common/autotest_common.sh@10 -- # set +x
00:12:30.715  ************************************
00:12:30.715  START TEST bdev_json_nonarray
00:12:30.715  ************************************
00:12:30.715   16:56:23	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:12:30.715  [2024-11-19 16:56:23.459088] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:12:30.715  [2024-11-19 16:56:23.459366] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid121474 ]
00:12:30.974  [2024-11-19 16:56:23.611860] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:30.974  [2024-11-19 16:56:23.685224] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:12:30.974  [2024-11-19 16:56:23.685550] json_config.c: 601:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array.
00:12:30.974  [2024-11-19 16:56:23.685600] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:12:31.233  
00:12:31.233  real	0m0.490s
00:12:31.233  user	0m0.245s
00:12:31.233  sys	0m0.145s
00:12:31.233  ************************************
00:12:31.233  END TEST bdev_json_nonarray
00:12:31.233  ************************************
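The companion test covers the second validation step: the top-level object must carry "subsystems" as an array. Again the real nonarray.json is not shown in this log; a hedged reconstruction of the invalid and valid shapes:

  cat > /tmp/nonarray.json <<'EOF'
  { "subsystems": {} }
  EOF
  # rejected: Invalid JSON configuration: 'subsystems' should be an array.
  cat > /tmp/valid.json <<'EOF'
  { "subsystems": [] }
  EOF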
00:12:31.233   16:56:23	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:12:31.233   16:56:23	-- common/autotest_common.sh@10 -- # set +x
00:12:31.233   16:56:23	-- bdev/blockdev.sh@785 -- # [[ bdev == bdev ]]
00:12:31.233   16:56:23	-- bdev/blockdev.sh@786 -- # run_test bdev_qos qos_test_suite ''
00:12:31.233   16:56:23	-- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']'
00:12:31.233   16:56:23	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:12:31.233   16:56:23	-- common/autotest_common.sh@10 -- # set +x
00:12:31.233  ************************************
00:12:31.233  START TEST bdev_qos
00:12:31.233  ************************************
00:12:31.233   16:56:23	-- common/autotest_common.sh@1114 -- # qos_test_suite ''
00:12:31.233   16:56:23	-- bdev/blockdev.sh@444 -- # QOS_PID=121500
00:12:31.233  Process qos testing pid: 121500
00:12:31.233   16:56:23	-- bdev/blockdev.sh@443 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x2 -q 256 -o 4096 -w randread -t 60 ''
00:12:31.233   16:56:23	-- bdev/blockdev.sh@445 -- # echo 'Process qos testing pid: 121500'
00:12:31.233   16:56:23	-- bdev/blockdev.sh@446 -- # trap 'cleanup; killprocess $QOS_PID; exit 1' SIGINT SIGTERM EXIT
00:12:31.233   16:56:23	-- bdev/blockdev.sh@447 -- # waitforlisten 121500
00:12:31.233   16:56:23	-- common/autotest_common.sh@829 -- # '[' -z 121500 ']'
00:12:31.233   16:56:23	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:12:31.233  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:12:31.233   16:56:23	-- common/autotest_common.sh@834 -- # local max_retries=100
00:12:31.233   16:56:23	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:12:31.233   16:56:23	-- common/autotest_common.sh@838 -- # xtrace_disable
00:12:31.233   16:56:23	-- common/autotest_common.sh@10 -- # set +x
00:12:31.233  [2024-11-19 16:56:24.021657] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:12:31.233  [2024-11-19 16:56:24.022408] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid121500 ]
00:12:31.491  [2024-11-19 16:56:24.181191] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:31.492  [2024-11-19 16:56:24.235276] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:12:32.059   16:56:24	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:12:32.059   16:56:24	-- common/autotest_common.sh@862 -- # return 0
00:12:32.059   16:56:24	-- bdev/blockdev.sh@449 -- # rpc_cmd bdev_malloc_create -b Malloc_0 128 512
00:12:32.059   16:56:24	-- common/autotest_common.sh@561 -- # xtrace_disable
00:12:32.059   16:56:24	-- common/autotest_common.sh@10 -- # set +x
00:12:32.385  Malloc_0
00:12:32.385   16:56:24	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:32.385   16:56:24	-- bdev/blockdev.sh@450 -- # waitforbdev Malloc_0
00:12:32.385   16:56:24	-- common/autotest_common.sh@897 -- # local bdev_name=Malloc_0
00:12:32.385   16:56:24	-- common/autotest_common.sh@898 -- # local bdev_timeout=
00:12:32.385   16:56:24	-- common/autotest_common.sh@899 -- # local i
00:12:32.385   16:56:24	-- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:12:32.385   16:56:24	-- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:12:32.385   16:56:24	-- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine
00:12:32.385   16:56:24	-- common/autotest_common.sh@561 -- # xtrace_disable
00:12:32.385   16:56:24	-- common/autotest_common.sh@10 -- # set +x
00:12:32.385   16:56:24	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:32.385   16:56:24	-- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Malloc_0 -t 2000
00:12:32.385   16:56:24	-- common/autotest_common.sh@561 -- # xtrace_disable
00:12:32.385   16:56:24	-- common/autotest_common.sh@10 -- # set +x
00:12:32.385  [
00:12:32.385  {
00:12:32.385  "name": "Malloc_0",
00:12:32.385  "aliases": [
00:12:32.385  "b80bde34-0b63-4d17-a8fa-a6a9210dc588"
00:12:32.385  ],
00:12:32.385  "product_name": "Malloc disk",
00:12:32.385  "block_size": 512,
00:12:32.385  "num_blocks": 262144,
00:12:32.385  "uuid": "b80bde34-0b63-4d17-a8fa-a6a9210dc588",
00:12:32.385  "assigned_rate_limits": {
00:12:32.385  "rw_ios_per_sec": 0,
00:12:32.385  "rw_mbytes_per_sec": 0,
00:12:32.385  "r_mbytes_per_sec": 0,
00:12:32.385  "w_mbytes_per_sec": 0
00:12:32.385  },
00:12:32.385  "claimed": false,
00:12:32.385  "zoned": false,
00:12:32.385  "supported_io_types": {
00:12:32.385  "read": true,
00:12:32.385  "write": true,
00:12:32.385  "unmap": true,
00:12:32.385  "write_zeroes": true,
00:12:32.385  "flush": true,
00:12:32.385  "reset": true,
00:12:32.385  "compare": false,
00:12:32.385  "compare_and_write": false,
00:12:32.385  "abort": true,
00:12:32.385  "nvme_admin": false,
00:12:32.385  "nvme_io": false
00:12:32.385  },
00:12:32.385  "memory_domains": [
00:12:32.385  {
00:12:32.385  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:12:32.385  "dma_device_type": 2
00:12:32.385  }
00:12:32.385  ],
00:12:32.385  "driver_specific": {}
00:12:32.385  }
00:12:32.385  ]
00:12:32.385   16:56:24	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:32.385   16:56:24	-- common/autotest_common.sh@905 -- # return 0
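The trace above is the waitforbdev helper: wait for examine to finish, then query bdev_get_bdevs with a timeout until Malloc_0 appears. A minimal sketch of that pattern (helper body reconstructed from this trace, not copied from autotest_common.sh):

  waitforbdev() {
      local bdev_name=$1 bdev_timeout=${2:-2000}   # timeout in milliseconds
      rpc_cmd bdev_wait_for_examine
      rpc_cmd bdev_get_bdevs -b "$bdev_name" -t "$bdev_timeout" >/dev/null
  }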
00:12:32.385   16:56:24	-- bdev/blockdev.sh@451 -- # rpc_cmd bdev_null_create Null_1 128 512
00:12:32.385   16:56:24	-- common/autotest_common.sh@561 -- # xtrace_disable
00:12:32.385   16:56:24	-- common/autotest_common.sh@10 -- # set +x
00:12:32.385  Null_1
00:12:32.385   16:56:24	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:32.385   16:56:24	-- bdev/blockdev.sh@452 -- # waitforbdev Null_1
00:12:32.385   16:56:24	-- common/autotest_common.sh@897 -- # local bdev_name=Null_1
00:12:32.385   16:56:24	-- common/autotest_common.sh@898 -- # local bdev_timeout=
00:12:32.385   16:56:24	-- common/autotest_common.sh@899 -- # local i
00:12:32.385   16:56:24	-- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:12:32.385   16:56:24	-- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:12:32.385   16:56:24	-- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine
00:12:32.385   16:56:24	-- common/autotest_common.sh@561 -- # xtrace_disable
00:12:32.385   16:56:24	-- common/autotest_common.sh@10 -- # set +x
00:12:32.385   16:56:24	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:32.385   16:56:24	-- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Null_1 -t 2000
00:12:32.385   16:56:24	-- common/autotest_common.sh@561 -- # xtrace_disable
00:12:32.385   16:56:24	-- common/autotest_common.sh@10 -- # set +x
00:12:32.385  [
00:12:32.385  {
00:12:32.385  "name": "Null_1",
00:12:32.385  "aliases": [
00:12:32.385  "c3f3c0ed-300e-441a-8674-0b690a2452e4"
00:12:32.385  ],
00:12:32.385  "product_name": "Null disk",
00:12:32.385  "block_size": 512,
00:12:32.385  "num_blocks": 262144,
00:12:32.385  "uuid": "c3f3c0ed-300e-441a-8674-0b690a2452e4",
00:12:32.385  "assigned_rate_limits": {
00:12:32.385  "rw_ios_per_sec": 0,
00:12:32.385  "rw_mbytes_per_sec": 0,
00:12:32.385  "r_mbytes_per_sec": 0,
00:12:32.385  "w_mbytes_per_sec": 0
00:12:32.385  },
00:12:32.385  "claimed": false,
00:12:32.385  "zoned": false,
00:12:32.385  "supported_io_types": {
00:12:32.385  "read": true,
00:12:32.385  "write": true,
00:12:32.385  "unmap": false,
00:12:32.385  "write_zeroes": true,
00:12:32.385  "flush": false,
00:12:32.385  "reset": true,
00:12:32.385  "compare": false,
00:12:32.385  "compare_and_write": false,
00:12:32.385  "abort": true,
00:12:32.385  "nvme_admin": false,
00:12:32.385  "nvme_io": false
00:12:32.385  },
00:12:32.385  "driver_specific": {}
00:12:32.385  }
00:12:32.385  ]
00:12:32.385   16:56:24	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:32.385   16:56:24	-- common/autotest_common.sh@905 -- # return 0
00:12:32.385   16:56:24	-- bdev/blockdev.sh@455 -- # qos_function_test
00:12:32.385   16:56:24	-- bdev/blockdev.sh@408 -- # local qos_lower_iops_limit=1000
00:12:32.385   16:56:24	-- bdev/blockdev.sh@454 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests
00:12:32.385   16:56:24	-- bdev/blockdev.sh@409 -- # local qos_lower_bw_limit=2
00:12:32.385   16:56:24	-- bdev/blockdev.sh@410 -- # local io_result=0
00:12:32.385   16:56:24	-- bdev/blockdev.sh@411 -- # local iops_limit=0
00:12:32.385   16:56:24	-- bdev/blockdev.sh@412 -- # local bw_limit=0
00:12:32.385    16:56:25	-- bdev/blockdev.sh@414 -- # get_io_result IOPS Malloc_0
00:12:32.385    16:56:25	-- bdev/blockdev.sh@373 -- # local limit_type=IOPS
00:12:32.385    16:56:25	-- bdev/blockdev.sh@374 -- # local qos_dev=Malloc_0
00:12:32.385    16:56:25	-- bdev/blockdev.sh@375 -- # local iostat_result
00:12:32.385     16:56:25	-- bdev/blockdev.sh@376 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5
00:12:32.385     16:56:25	-- bdev/blockdev.sh@376 -- # grep Malloc_0
00:12:32.385     16:56:25	-- bdev/blockdev.sh@376 -- # tail -1
00:12:32.385  Running I/O for 60 seconds...
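Note the pattern traced here: bdevperf was started with -z, so it idles until triggered over RPC instead of running immediately. That lets the test create Malloc_0 and Null_1 before any I/O starts; perform_tests then launches the 60-second run, during which the unthrottled rate is measured and limits are applied. A hedged restatement:

  build/examples/bdevperf -z -m 0x2 -q 256 -o 4096 -w randread -t 60 &
  # ... rpc_cmd bdev_malloc_create / bdev_null_create ...
  examples/bdev/bdevperf/bdevperf.py perform_tests   # kicks off the 60 s run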
00:12:37.678    16:56:30	-- bdev/blockdev.sh@376 -- # iostat_result='Malloc_0  89511.36  358045.44  0.00       0.00       362496.00  0.00     0.00   '
00:12:37.678    16:56:30	-- bdev/blockdev.sh@377 -- # '[' IOPS = IOPS ']'
00:12:37.678     16:56:30	-- bdev/blockdev.sh@378 -- # awk '{print $2}'
00:12:37.678    16:56:30	-- bdev/blockdev.sh@378 -- # iostat_result=89511.36
00:12:37.678    16:56:30	-- bdev/blockdev.sh@383 -- # echo 89511
00:12:37.678   16:56:30	-- bdev/blockdev.sh@414 -- # io_result=89511
00:12:37.678   16:56:30	-- bdev/blockdev.sh@416 -- # iops_limit=22000
00:12:37.678   16:56:30	-- bdev/blockdev.sh@417 -- # '[' 22000 -gt 1000 ']'
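The 22000 applied below is derived from the unthrottled measurement: a quarter of the measured IOPS, rounded down to a multiple of 1000. The exact expression in blockdev.sh is not visible in this trace; this arithmetic reproduces the logged numbers:

  io_result=89511
  iops_limit=$(( io_result / 4 / 1000 * 1000 ))   # 22377 -> 22000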
00:12:37.678   16:56:30	-- bdev/blockdev.sh@420 -- # rpc_cmd bdev_set_qos_limit --rw_ios_per_sec 22000 Malloc_0
00:12:37.678   16:56:30	-- common/autotest_common.sh@561 -- # xtrace_disable
00:12:37.678   16:56:30	-- common/autotest_common.sh@10 -- # set +x
00:12:37.678   16:56:30	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:37.678   16:56:30	-- bdev/blockdev.sh@421 -- # run_test bdev_qos_iops run_qos_test 22000 IOPS Malloc_0
00:12:37.678   16:56:30	-- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']'
00:12:37.678   16:56:30	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:12:37.678   16:56:30	-- common/autotest_common.sh@10 -- # set +x
00:12:37.678  ************************************
00:12:37.678  START TEST bdev_qos_iops
00:12:37.678  ************************************
00:12:37.678   16:56:30	-- common/autotest_common.sh@1114 -- # run_qos_test 22000 IOPS Malloc_0
00:12:37.678   16:56:30	-- bdev/blockdev.sh@387 -- # local qos_limit=22000
00:12:37.678   16:56:30	-- bdev/blockdev.sh@388 -- # local qos_result=0
00:12:37.678    16:56:30	-- bdev/blockdev.sh@390 -- # get_io_result IOPS Malloc_0
00:12:37.678    16:56:30	-- bdev/blockdev.sh@373 -- # local limit_type=IOPS
00:12:37.678    16:56:30	-- bdev/blockdev.sh@374 -- # local qos_dev=Malloc_0
00:12:37.678    16:56:30	-- bdev/blockdev.sh@375 -- # local iostat_result
00:12:37.678     16:56:30	-- bdev/blockdev.sh@376 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5
00:12:37.678     16:56:30	-- bdev/blockdev.sh@376 -- # grep Malloc_0
00:12:37.678     16:56:30	-- bdev/blockdev.sh@376 -- # tail -1
00:12:42.950    16:56:35	-- bdev/blockdev.sh@376 -- # iostat_result='Malloc_0  22008.18  88032.73   0.00       0.00       89320.00   0.00     0.00   '
00:12:42.950    16:56:35	-- bdev/blockdev.sh@377 -- # '[' IOPS = IOPS ']'
00:12:42.950     16:56:35	-- bdev/blockdev.sh@378 -- # awk '{print $2}'
00:12:42.950    16:56:35	-- bdev/blockdev.sh@378 -- # iostat_result=22008.18
00:12:42.950    16:56:35	-- bdev/blockdev.sh@383 -- # echo 22008
00:12:42.950   16:56:35	-- bdev/blockdev.sh@390 -- # qos_result=22008
00:12:42.950   16:56:35	-- bdev/blockdev.sh@391 -- # '[' IOPS = BANDWIDTH ']'
00:12:42.950   16:56:35	-- bdev/blockdev.sh@394 -- # lower_limit=19800
00:12:42.950   16:56:35	-- bdev/blockdev.sh@395 -- # upper_limit=24200
00:12:42.950   16:56:35	-- bdev/blockdev.sh@398 -- # '[' 22008 -lt 19800 ']'
00:12:42.950   16:56:35	-- bdev/blockdev.sh@398 -- # '[' 22008 -gt 24200 ']'
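The pass criterion is a +/-10% window around the configured limit, as the computed bounds above show:

  qos_limit=22000
  lower_limit=$(( qos_limit * 9 / 10 ))    # 19800
  upper_limit=$(( qos_limit * 11 / 10 ))   # 24200
  # measured 22008 IOPS falls inside [19800, 24200], so the check passes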
00:12:42.950  
00:12:42.950  real	0m5.215s
00:12:42.950  user	0m0.106s
00:12:42.950  sys	0m0.049s
00:12:42.950   16:56:35	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:12:42.950  ************************************
00:12:42.950   16:56:35	-- common/autotest_common.sh@10 -- # set +x
00:12:42.950  END TEST bdev_qos_iops
00:12:42.950  ************************************
00:12:42.951    16:56:35	-- bdev/blockdev.sh@425 -- # get_io_result BANDWIDTH Null_1
00:12:42.951    16:56:35	-- bdev/blockdev.sh@373 -- # local limit_type=BANDWIDTH
00:12:42.951    16:56:35	-- bdev/blockdev.sh@374 -- # local qos_dev=Null_1
00:12:42.951    16:56:35	-- bdev/blockdev.sh@375 -- # local iostat_result
00:12:42.951     16:56:35	-- bdev/blockdev.sh@376 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5
00:12:42.951     16:56:35	-- bdev/blockdev.sh@376 -- # grep Null_1
00:12:42.951     16:56:35	-- bdev/blockdev.sh@376 -- # tail -1
00:12:48.217    16:56:40	-- bdev/blockdev.sh@376 -- # iostat_result='Null_1    30693.17  122772.67  0.00       0.00       124928.00  0.00     0.00   '
00:12:48.217    16:56:40	-- bdev/blockdev.sh@377 -- # '[' BANDWIDTH = IOPS ']'
00:12:48.217    16:56:40	-- bdev/blockdev.sh@379 -- # '[' BANDWIDTH = BANDWIDTH ']'
00:12:48.217     16:56:40	-- bdev/blockdev.sh@380 -- # awk '{print $6}'
00:12:48.217    16:56:40	-- bdev/blockdev.sh@380 -- # iostat_result=124928.00
00:12:48.217    16:56:40	-- bdev/blockdev.sh@383 -- # echo 124928
00:12:48.217   16:56:40	-- bdev/blockdev.sh@425 -- # bw_limit=124928
00:12:48.217   16:56:40	-- bdev/blockdev.sh@426 -- # bw_limit=12
00:12:48.217   16:56:40	-- bdev/blockdev.sh@427 -- # '[' 12 -lt 2 ']'
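The 12 MiB/s bandwidth target comes from the unthrottled Null_1 measurement: the logged 124928 KiB/s converted to MiB/s and cut to a tenth (again reconstructed to match the logged values rather than quoted from blockdev.sh):

  bw_limit=124928                       # KiB/s with no limit applied
  bw_limit=$(( bw_limit / 1024 / 10 ))  # 122 MiB/s -> 12 MiB/s target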
00:12:48.217   16:56:40	-- bdev/blockdev.sh@430 -- # rpc_cmd bdev_set_qos_limit --rw_mbytes_per_sec 12 Null_1
00:12:48.217   16:56:40	-- common/autotest_common.sh@561 -- # xtrace_disable
00:12:48.217   16:56:40	-- common/autotest_common.sh@10 -- # set +x
00:12:48.217   16:56:40	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:48.217   16:56:40	-- bdev/blockdev.sh@431 -- # run_test bdev_qos_bw run_qos_test 12 BANDWIDTH Null_1
00:12:48.217   16:56:40	-- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']'
00:12:48.217   16:56:40	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:12:48.217   16:56:40	-- common/autotest_common.sh@10 -- # set +x
00:12:48.217  ************************************
00:12:48.217  START TEST bdev_qos_bw
00:12:48.217  ************************************
00:12:48.217   16:56:40	-- common/autotest_common.sh@1114 -- # run_qos_test 12 BANDWIDTH Null_1
00:12:48.217   16:56:40	-- bdev/blockdev.sh@387 -- # local qos_limit=12
00:12:48.217   16:56:40	-- bdev/blockdev.sh@388 -- # local qos_result=0
00:12:48.217    16:56:40	-- bdev/blockdev.sh@390 -- # get_io_result BANDWIDTH Null_1
00:12:48.217    16:56:40	-- bdev/blockdev.sh@373 -- # local limit_type=BANDWIDTH
00:12:48.217    16:56:40	-- bdev/blockdev.sh@374 -- # local qos_dev=Null_1
00:12:48.217    16:56:40	-- bdev/blockdev.sh@375 -- # local iostat_result
00:12:48.217     16:56:40	-- bdev/blockdev.sh@376 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5
00:12:48.217     16:56:40	-- bdev/blockdev.sh@376 -- # grep Null_1
00:12:48.217     16:56:40	-- bdev/blockdev.sh@376 -- # tail -1
00:12:53.482    16:56:45	-- bdev/blockdev.sh@376 -- # iostat_result='Null_1    3072.23   12288.93   0.00       0.00       12556.00  0.00     0.00   '
00:12:53.482    16:56:45	-- bdev/blockdev.sh@377 -- # '[' BANDWIDTH = IOPS ']'
00:12:53.482    16:56:45	-- bdev/blockdev.sh@379 -- # '[' BANDWIDTH = BANDWIDTH ']'
00:12:53.482     16:56:45	-- bdev/blockdev.sh@380 -- # awk '{print $6}'
00:12:53.482    16:56:45	-- bdev/blockdev.sh@380 -- # iostat_result=12556.00
00:12:53.482    16:56:46	-- bdev/blockdev.sh@383 -- # echo 12556
00:12:53.482   16:56:46	-- bdev/blockdev.sh@390 -- # qos_result=12556
00:12:53.482   16:56:46	-- bdev/blockdev.sh@391 -- # '[' BANDWIDTH = BANDWIDTH ']'
00:12:53.482   16:56:46	-- bdev/blockdev.sh@392 -- # qos_limit=12288
00:12:53.482   16:56:46	-- bdev/blockdev.sh@394 -- # lower_limit=11059
00:12:53.482   16:56:46	-- bdev/blockdev.sh@395 -- # upper_limit=13516
00:12:53.482   16:56:46	-- bdev/blockdev.sh@398 -- # '[' 12556 -lt 11059 ']'
00:12:53.482   16:56:46	-- bdev/blockdev.sh@398 -- # '[' 12556 -gt 13516 ']'
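For bandwidth the same +/-10% window applies, with the MiB/s limit first converted to KiB/s to match iostat's units:

  qos_limit=$(( 12 * 1024 ))               # 12288 KiB/s
  lower_limit=$(( qos_limit * 9 / 10 ))    # 11059
  upper_limit=$(( qos_limit * 11 / 10 ))   # 13516
  # measured 12556 KiB/s sits inside the window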
00:12:53.482  
00:12:53.482  real	0m5.238s
00:12:53.482  user	0m0.106s
00:12:53.482  sys	0m0.051s
00:12:53.482   16:56:46	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:12:53.482  ************************************
00:12:53.482  END TEST bdev_qos_bw
00:12:53.482  ************************************
00:12:53.482   16:56:46	-- common/autotest_common.sh@10 -- # set +x
00:12:53.482   16:56:46	-- bdev/blockdev.sh@434 -- # rpc_cmd bdev_set_qos_limit --r_mbytes_per_sec 2 Malloc_0
00:12:53.482   16:56:46	-- common/autotest_common.sh@561 -- # xtrace_disable
00:12:53.482   16:56:46	-- common/autotest_common.sh@10 -- # set +x
00:12:53.482   16:56:46	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:53.482   16:56:46	-- bdev/blockdev.sh@435 -- # run_test bdev_qos_ro_bw run_qos_test 2 BANDWIDTH Malloc_0
00:12:53.482   16:56:46	-- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']'
00:12:53.482   16:56:46	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:12:53.482   16:56:46	-- common/autotest_common.sh@10 -- # set +x
00:12:53.482  ************************************
00:12:53.482  START TEST bdev_qos_ro_bw
00:12:53.482  ************************************
00:12:53.482   16:56:46	-- common/autotest_common.sh@1114 -- # run_qos_test 2 BANDWIDTH Malloc_0
00:12:53.482   16:56:46	-- bdev/blockdev.sh@387 -- # local qos_limit=2
00:12:53.482   16:56:46	-- bdev/blockdev.sh@388 -- # local qos_result=0
00:12:53.482    16:56:46	-- bdev/blockdev.sh@390 -- # get_io_result BANDWIDTH Malloc_0
00:12:53.482    16:56:46	-- bdev/blockdev.sh@373 -- # local limit_type=BANDWIDTH
00:12:53.482    16:56:46	-- bdev/blockdev.sh@374 -- # local qos_dev=Malloc_0
00:12:53.482    16:56:46	-- bdev/blockdev.sh@375 -- # local iostat_result
00:12:53.482     16:56:46	-- bdev/blockdev.sh@376 -- # tail -1
00:12:53.482     16:56:46	-- bdev/blockdev.sh@376 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5
00:12:53.482     16:56:46	-- bdev/blockdev.sh@376 -- # grep Malloc_0
00:12:58.745    16:56:51	-- bdev/blockdev.sh@376 -- # iostat_result='Malloc_0  511.97   2047.87    0.00       0.00       2068.00   0.00     0.00   '
00:12:58.745    16:56:51	-- bdev/blockdev.sh@377 -- # '[' BANDWIDTH = IOPS ']'
00:12:58.745    16:56:51	-- bdev/blockdev.sh@379 -- # '[' BANDWIDTH = BANDWIDTH ']'
00:12:58.745     16:56:51	-- bdev/blockdev.sh@380 -- # awk '{print $6}'
00:12:58.745    16:56:51	-- bdev/blockdev.sh@380 -- # iostat_result=2068.00
00:12:58.745    16:56:51	-- bdev/blockdev.sh@383 -- # echo 2068
00:12:58.745   16:56:51	-- bdev/blockdev.sh@390 -- # qos_result=2068
00:12:58.745   16:56:51	-- bdev/blockdev.sh@391 -- # '[' BANDWIDTH = BANDWIDTH ']'
00:12:58.745   16:56:51	-- bdev/blockdev.sh@392 -- # qos_limit=2048
00:12:58.745   16:56:51	-- bdev/blockdev.sh@394 -- # lower_limit=1843
00:12:58.745   16:56:51	-- bdev/blockdev.sh@395 -- # upper_limit=2252
00:12:58.745   16:56:51	-- bdev/blockdev.sh@398 -- # '[' 2068 -lt 1843 ']'
00:12:58.745   16:56:51	-- bdev/blockdev.sh@398 -- # '[' 2068 -gt 2252 ']'
00:12:58.745  
00:12:58.745  real	0m5.180s
00:12:58.745  user	0m0.097s
00:12:58.745  sys	0m0.050s
00:12:58.745  ************************************
00:12:58.745  END TEST bdev_qos_ro_bw
00:12:58.745  ************************************
00:12:58.745   16:56:51	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:12:58.745   16:56:51	-- common/autotest_common.sh@10 -- # set +x
00:12:58.745   16:56:51	-- bdev/blockdev.sh@457 -- # rpc_cmd bdev_malloc_delete Malloc_0
00:12:58.745   16:56:51	-- common/autotest_common.sh@561 -- # xtrace_disable
00:12:58.745   16:56:51	-- common/autotest_common.sh@10 -- # set +x
00:12:59.314   16:56:51	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:59.314   16:56:51	-- bdev/blockdev.sh@458 -- # rpc_cmd bdev_null_delete Null_1
00:12:59.314   16:56:51	-- common/autotest_common.sh@561 -- # xtrace_disable
00:12:59.314   16:56:51	-- common/autotest_common.sh@10 -- # set +x
00:12:59.314  
00:12:59.314                                                                                                  Latency(us)
00:12:59.314  
[2024-11-19T16:56:52.178Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:12:59.314  
[2024-11-19T16:56:52.178Z]  Job: Malloc_0 (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096)
00:12:59.314  	 Malloc_0            :      26.77   30161.06     117.82       0.00     0.00    8407.26    2044.10  503316.48
00:12:59.314  
[2024-11-19T16:56:52.178Z]  Job: Null_1 (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096)
00:12:59.314  	 Null_1              :      26.87   30011.30     117.23       0.00     0.00    8512.64     553.94  103858.96
00:12:59.314  
[2024-11-19T16:56:52.178Z]  ===================================================================================================================
00:12:59.314  
[2024-11-19T16:56:52.178Z]  Total                       :              60172.35     235.05       0.00     0.00    8459.92     553.94  503316.48
00:12:59.314  0
00:12:59.314   16:56:51	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:59.314   16:56:51	-- bdev/blockdev.sh@459 -- # killprocess 121500
00:12:59.314   16:56:51	-- common/autotest_common.sh@936 -- # '[' -z 121500 ']'
00:12:59.314   16:56:51	-- common/autotest_common.sh@940 -- # kill -0 121500
00:12:59.314    16:56:51	-- common/autotest_common.sh@941 -- # uname
00:12:59.314   16:56:51	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:12:59.314    16:56:51	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 121500
00:12:59.314   16:56:51	-- common/autotest_common.sh@942 -- # process_name=reactor_1
00:12:59.314   16:56:51	-- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:12:59.314  killing process with pid 121500
00:12:59.314   16:56:51	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 121500'
00:12:59.314  Received shutdown signal, test time was about 26.912112 seconds
00:12:59.314  
00:12:59.314                                                                                                  Latency(us)
00:12:59.314  
[2024-11-19T16:56:52.178Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:12:59.314  
[2024-11-19T16:56:52.178Z]  ===================================================================================================================
00:12:59.314  
[2024-11-19T16:56:52.178Z]  Total                       :                  0.00       0.00       0.00     0.00       0.00       0.00       0.00
00:12:59.314   16:56:51	-- common/autotest_common.sh@955 -- # kill 121500
00:12:59.314   16:56:51	-- common/autotest_common.sh@960 -- # wait 121500
00:12:59.574   16:56:52	-- bdev/blockdev.sh@460 -- # trap - SIGINT SIGTERM EXIT
00:12:59.574  
00:12:59.574  real	0m28.308s
00:12:59.574  user	0m29.049s
00:12:59.574  sys	0m0.695s
00:12:59.574   16:56:52	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:12:59.574   16:56:52	-- common/autotest_common.sh@10 -- # set +x
00:12:59.574  ************************************
00:12:59.574  END TEST bdev_qos
00:12:59.574  ************************************
00:12:59.574   16:56:52	-- bdev/blockdev.sh@787 -- # run_test bdev_qd_sampling qd_sampling_test_suite ''
00:12:59.574   16:56:52	-- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']'
00:12:59.574   16:56:52	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:12:59.574   16:56:52	-- common/autotest_common.sh@10 -- # set +x
00:12:59.574  ************************************
00:12:59.574  START TEST bdev_qd_sampling
00:12:59.574  ************************************
00:12:59.574   16:56:52	-- common/autotest_common.sh@1114 -- # qd_sampling_test_suite ''
00:12:59.574   16:56:52	-- bdev/blockdev.sh@536 -- # QD_DEV=Malloc_QD
00:12:59.574   16:56:52	-- bdev/blockdev.sh@539 -- # QD_PID=121967
00:12:59.574  Process bdev QD sampling period testing pid: 121967
00:12:59.574   16:56:52	-- bdev/blockdev.sh@540 -- # echo 'Process bdev QD sampling period testing pid: 121967'
00:12:59.574   16:56:52	-- bdev/blockdev.sh@538 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x3 -q 256 -o 4096 -w randread -t 5 -C ''
00:12:59.574   16:56:52	-- bdev/blockdev.sh@541 -- # trap 'cleanup; killprocess $QD_PID; exit 1' SIGINT SIGTERM EXIT
00:12:59.574   16:56:52	-- bdev/blockdev.sh@542 -- # waitforlisten 121967
00:12:59.574   16:56:52	-- common/autotest_common.sh@829 -- # '[' -z 121967 ']'
00:12:59.574   16:56:52	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:12:59.574  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:12:59.574   16:56:52	-- common/autotest_common.sh@834 -- # local max_retries=100
00:12:59.574   16:56:52	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:12:59.574   16:56:52	-- common/autotest_common.sh@838 -- # xtrace_disable
00:12:59.574   16:56:52	-- common/autotest_common.sh@10 -- # set +x
00:12:59.574  [2024-11-19 16:56:52.384638] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:12:59.574  [2024-11-19 16:56:52.384818] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid121967 ]
00:12:59.834  [2024-11-19 16:56:52.536723] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2
00:12:59.834  [2024-11-19 16:56:52.622356] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:12:59.834  [2024-11-19 16:56:52.622371] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:13:00.826   16:56:53	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:13:00.826   16:56:53	-- common/autotest_common.sh@862 -- # return 0
00:13:00.826   16:56:53	-- bdev/blockdev.sh@544 -- # rpc_cmd bdev_malloc_create -b Malloc_QD 128 512
00:13:00.826   16:56:53	-- common/autotest_common.sh@561 -- # xtrace_disable
00:13:00.826   16:56:53	-- common/autotest_common.sh@10 -- # set +x
00:13:00.826  Malloc_QD
00:13:00.826   16:56:53	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:00.826   16:56:53	-- bdev/blockdev.sh@545 -- # waitforbdev Malloc_QD
00:13:00.826   16:56:53	-- common/autotest_common.sh@897 -- # local bdev_name=Malloc_QD
00:13:00.826   16:56:53	-- common/autotest_common.sh@898 -- # local bdev_timeout=
00:13:00.826   16:56:53	-- common/autotest_common.sh@899 -- # local i
00:13:00.826   16:56:53	-- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:13:00.826   16:56:53	-- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:13:00.826   16:56:53	-- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine
00:13:00.826   16:56:53	-- common/autotest_common.sh@561 -- # xtrace_disable
00:13:00.826   16:56:53	-- common/autotest_common.sh@10 -- # set +x
00:13:00.826   16:56:53	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:00.826   16:56:53	-- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Malloc_QD -t 2000
00:13:00.826   16:56:53	-- common/autotest_common.sh@561 -- # xtrace_disable
00:13:00.826   16:56:53	-- common/autotest_common.sh@10 -- # set +x
00:13:00.826  [
00:13:00.826  {
00:13:00.826  "name": "Malloc_QD",
00:13:00.826  "aliases": [
00:13:00.826  "6bcbed13-f9a2-4f0b-a268-facefb947718"
00:13:00.826  ],
00:13:00.826  "product_name": "Malloc disk",
00:13:00.826  "block_size": 512,
00:13:00.826  "num_blocks": 262144,
00:13:00.826  "uuid": "6bcbed13-f9a2-4f0b-a268-facefb947718",
00:13:00.826  "assigned_rate_limits": {
00:13:00.826  "rw_ios_per_sec": 0,
00:13:00.826  "rw_mbytes_per_sec": 0,
00:13:00.826  "r_mbytes_per_sec": 0,
00:13:00.826  "w_mbytes_per_sec": 0
00:13:00.826  },
00:13:00.826  "claimed": false,
00:13:00.826  "zoned": false,
00:13:00.826  "supported_io_types": {
00:13:00.826  "read": true,
00:13:00.826  "write": true,
00:13:00.826  "unmap": true,
00:13:00.826  "write_zeroes": true,
00:13:00.826  "flush": true,
00:13:00.826  "reset": true,
00:13:00.826  "compare": false,
00:13:00.826  "compare_and_write": false,
00:13:00.826  "abort": true,
00:13:00.826  "nvme_admin": false,
00:13:00.826  "nvme_io": false
00:13:00.826  },
00:13:00.826  "memory_domains": [
00:13:00.826  {
00:13:00.826  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:13:00.826  "dma_device_type": 2
00:13:00.826  }
00:13:00.826  ],
00:13:00.826  "driver_specific": {}
00:13:00.826  }
00:13:00.826  ]
00:13:00.826   16:56:53	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:00.826   16:56:53	-- common/autotest_common.sh@905 -- # return 0
00:13:00.826   16:56:53	-- bdev/blockdev.sh@548 -- # sleep 2
00:13:00.826   16:56:53	-- bdev/blockdev.sh@547 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests
00:13:00.826  Running I/O for 5 seconds...
00:13:02.729   16:56:55	-- bdev/blockdev.sh@549 -- # qd_sampling_function_test Malloc_QD
00:13:02.729   16:56:55	-- bdev/blockdev.sh@517 -- # local bdev_name=Malloc_QD
00:13:02.729   16:56:55	-- bdev/blockdev.sh@518 -- # local sampling_period=10
00:13:02.729   16:56:55	-- bdev/blockdev.sh@519 -- # local iostats
00:13:02.729   16:56:55	-- bdev/blockdev.sh@521 -- # rpc_cmd bdev_set_qd_sampling_period Malloc_QD 10
00:13:02.729   16:56:55	-- common/autotest_common.sh@561 -- # xtrace_disable
00:13:02.729   16:56:55	-- common/autotest_common.sh@10 -- # set +x
00:13:02.729   16:56:55	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:02.729    16:56:55	-- bdev/blockdev.sh@523 -- # rpc_cmd bdev_get_iostat -b Malloc_QD
00:13:02.729    16:56:55	-- common/autotest_common.sh@561 -- # xtrace_disable
00:13:02.729    16:56:55	-- common/autotest_common.sh@10 -- # set +x
00:13:02.729    16:56:55	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:02.729   16:56:55	-- bdev/blockdev.sh@523 -- # iostats='{
00:13:02.729  "tick_rate": 2100000000,
00:13:02.729  "ticks": 1498395600930,
00:13:02.729  "bdevs": [
00:13:02.729  {
00:13:02.729  "name": "Malloc_QD",
00:13:02.729  "bytes_read": 536908288,
00:13:02.729  "num_read_ops": 131075,
00:13:02.729  "bytes_written": 0,
00:13:02.729  "num_write_ops": 0,
00:13:02.729  "bytes_unmapped": 0,
00:13:02.729  "num_unmap_ops": 0,
00:13:02.729  "bytes_copied": 0,
00:13:02.729  "num_copy_ops": 0,
00:13:02.729  "read_latency_ticks": 2054510355884,
00:13:02.729  "max_read_latency_ticks": 22758164,
00:13:02.729  "min_read_latency_ticks": 422038,
00:13:02.729  "write_latency_ticks": 0,
00:13:02.729  "max_write_latency_ticks": 0,
00:13:02.729  "min_write_latency_ticks": 0,
00:13:02.729  "unmap_latency_ticks": 0,
00:13:02.729  "max_unmap_latency_ticks": 0,
00:13:02.729  "min_unmap_latency_ticks": 0,
00:13:02.729  "copy_latency_ticks": 0,
00:13:02.729  "max_copy_latency_ticks": 0,
00:13:02.729  "min_copy_latency_ticks": 0,
00:13:02.729  "io_error": {},
00:13:02.729  "queue_depth_polling_period": 10,
00:13:02.729  "queue_depth": 512,
00:13:02.729  "io_time": 20,
00:13:02.729  "weighted_io_time": 10240
00:13:02.729  }
00:13:02.729  ]
00:13:02.729  }'
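The iostat payload above is internally consistent with the sampled queue depth: weighted_io_time divided by io_time gives the average depth, 10240 / 20 = 512, matching the reported "queue_depth": 512 under the 10 ms sampling period. Illustrative check:

  awk 'BEGIN { print 10240 / 20 }'   # -> 512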
00:13:02.729    16:56:55	-- bdev/blockdev.sh@525 -- # jq -r '.bdevs[0].queue_depth_polling_period'
00:13:02.729   16:56:55	-- bdev/blockdev.sh@525 -- # qd_sampling_period=10
00:13:02.729   16:56:55	-- bdev/blockdev.sh@527 -- # '[' 10 == null ']'
00:13:02.729   16:56:55	-- bdev/blockdev.sh@527 -- # '[' 10 -ne 10 ']'
00:13:02.729   16:56:55	-- bdev/blockdev.sh@551 -- # rpc_cmd bdev_malloc_delete Malloc_QD
00:13:02.729   16:56:55	-- common/autotest_common.sh@561 -- # xtrace_disable
00:13:02.729   16:56:55	-- common/autotest_common.sh@10 -- # set +x
00:13:02.729  
00:13:02.729                                                                                                  Latency(us)
00:13:02.729  
[2024-11-19T16:56:55.593Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:13:02.729  
[2024-11-19T16:56:55.593Z]  Job: Malloc_QD (Core Mask 0x1, workload: randread, depth: 256, IO size: 4096)
00:13:02.729  	 Malloc_QD           :       1.98   38872.82     151.85       0.00     0.00    6563.19    1630.60   10797.84
00:13:02.729  
[2024-11-19T16:56:55.593Z]  Job: Malloc_QD (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096)
00:13:02.730  	 Malloc_QD           :       1.98   29688.99     115.97       0.00     0.00    8600.81     670.96   10860.25
00:13:02.730  
[2024-11-19T16:56:55.594Z]  ===================================================================================================================
00:13:02.730  
[2024-11-19T16:56:55.594Z]  Total                       :              68561.81     267.82       0.00     0.00    7445.78     670.96   10860.25
00:13:02.730  0
00:13:02.730   16:56:55	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:02.730   16:56:55	-- bdev/blockdev.sh@552 -- # killprocess 121967
00:13:02.730   16:56:55	-- common/autotest_common.sh@936 -- # '[' -z 121967 ']'
00:13:02.730   16:56:55	-- common/autotest_common.sh@940 -- # kill -0 121967
00:13:02.730    16:56:55	-- common/autotest_common.sh@941 -- # uname
00:13:02.730   16:56:55	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:13:02.730    16:56:55	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 121967
00:13:02.989   16:56:55	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:13:02.989   16:56:55	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:13:02.989  killing process with pid 121967
00:13:02.989  Received shutdown signal, test time was about 2.039869 seconds
00:13:02.989  
00:13:02.989                                                                                                  Latency(us)
00:13:02.989  
[2024-11-19T16:56:55.853Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:13:02.989  
[2024-11-19T16:56:55.853Z]  ===================================================================================================================
00:13:02.989  
[2024-11-19T16:56:55.853Z]  Total                       :                  0.00       0.00       0.00     0.00       0.00       0.00       0.00
00:13:02.989   16:56:55	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 121967'
00:13:02.989   16:56:55	-- common/autotest_common.sh@955 -- # kill 121967
00:13:02.989   16:56:55	-- common/autotest_common.sh@960 -- # wait 121967
00:13:03.249   16:56:55	-- bdev/blockdev.sh@553 -- # trap - SIGINT SIGTERM EXIT
00:13:03.249  
00:13:03.249  real	0m3.556s
00:13:03.249  user	0m6.841s
00:13:03.249  sys	0m0.429s
00:13:03.249   16:56:55	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:13:03.249  ************************************
00:13:03.249  END TEST bdev_qd_sampling
00:13:03.249  ************************************
00:13:03.249   16:56:55	-- common/autotest_common.sh@10 -- # set +x
00:13:03.249   16:56:55	-- bdev/blockdev.sh@788 -- # run_test bdev_error error_test_suite ''
00:13:03.249   16:56:55	-- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']'
00:13:03.249   16:56:55	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:13:03.249   16:56:55	-- common/autotest_common.sh@10 -- # set +x
00:13:03.249  ************************************
00:13:03.249  START TEST bdev_error
00:13:03.249  ************************************
00:13:03.249   16:56:55	-- common/autotest_common.sh@1114 -- # error_test_suite ''
00:13:03.249   16:56:55	-- bdev/blockdev.sh@464 -- # DEV_1=Dev_1
00:13:03.249   16:56:55	-- bdev/blockdev.sh@465 -- # DEV_2=Dev_2
00:13:03.249   16:56:55	-- bdev/blockdev.sh@466 -- # ERR_DEV=EE_Dev_1
00:13:03.249   16:56:55	-- bdev/blockdev.sh@470 -- # ERR_PID=122042
00:13:03.249  Process error testing pid: 122042
00:13:03.249   16:56:55	-- bdev/blockdev.sh@471 -- # echo 'Process error testing pid: 122042'
00:13:03.249   16:56:55	-- bdev/blockdev.sh@472 -- # waitforlisten 122042
00:13:03.249   16:56:55	-- bdev/blockdev.sh@469 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x2 -q 16 -o 4096 -w randread -t 5 -f ''
00:13:03.249   16:56:55	-- common/autotest_common.sh@829 -- # '[' -z 122042 ']'
00:13:03.249   16:56:55	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:13:03.249   16:56:55	-- common/autotest_common.sh@834 -- # local max_retries=100
00:13:03.249   16:56:55	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:13:03.249  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:13:03.249   16:56:55	-- common/autotest_common.sh@838 -- # xtrace_disable
00:13:03.249   16:56:55	-- common/autotest_common.sh@10 -- # set +x
00:13:03.249  [2024-11-19 16:56:56.025730] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:13:03.249  [2024-11-19 16:56:56.025988] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid122042 ]
00:13:03.508  [2024-11-19 16:56:56.176909] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:03.508  [2024-11-19 16:56:56.251546] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
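For context: the -z flag makes bdevperf start with no job config and wait for RPCs on /var/tmp/spdk.sock, so the suite can build its devices before any I/O runs. A minimal sketch of that flow, with the bdev size/parameters as illustrative assumptions rather than values from this run:

  # sketch only -- flags mirror the invocation above; bdev parameters are illustrative
  ./build/examples/bdevperf -z -m 0x2 -q 16 -o 4096 -w randread -t 5 &
  ./scripts/rpc.py bdev_malloc_create -b Dev_1 128 512      # 128 MiB malloc bdev, 512 B blocks
  ./examples/bdev/bdevperf/bdevperf.py -t 1 perform_tests   # kick off the queued workload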
00:13:04.446   16:56:56	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:13:04.446   16:56:56	-- common/autotest_common.sh@862 -- # return 0
00:13:04.446   16:56:56	-- bdev/blockdev.sh@474 -- # rpc_cmd bdev_malloc_create -b Dev_1 128 512
00:13:04.446   16:56:56	-- common/autotest_common.sh@561 -- # xtrace_disable
00:13:04.446   16:56:56	-- common/autotest_common.sh@10 -- # set +x
00:13:04.446  Dev_1
00:13:04.446   16:56:57	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:04.446   16:56:57	-- bdev/blockdev.sh@475 -- # waitforbdev Dev_1
00:13:04.446   16:56:57	-- common/autotest_common.sh@897 -- # local bdev_name=Dev_1
00:13:04.447   16:56:57	-- common/autotest_common.sh@898 -- # local bdev_timeout=
00:13:04.447   16:56:57	-- common/autotest_common.sh@899 -- # local i
00:13:04.447   16:56:57	-- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:13:04.447   16:56:57	-- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:13:04.447   16:56:57	-- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine
00:13:04.447   16:56:57	-- common/autotest_common.sh@561 -- # xtrace_disable
00:13:04.447   16:56:57	-- common/autotest_common.sh@10 -- # set +x
00:13:04.447   16:56:57	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:04.447   16:56:57	-- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Dev_1 -t 2000
00:13:04.447   16:56:57	-- common/autotest_common.sh@561 -- # xtrace_disable
00:13:04.447   16:56:57	-- common/autotest_common.sh@10 -- # set +x
00:13:04.447  [
00:13:04.447  {
00:13:04.447  "name": "Dev_1",
00:13:04.447  "aliases": [
00:13:04.447  "5a2c3014-9ff9-47d8-9fe7-87e9c9a11dfe"
00:13:04.447  ],
00:13:04.447  "product_name": "Malloc disk",
00:13:04.447  "block_size": 512,
00:13:04.447  "num_blocks": 262144,
00:13:04.447  "uuid": "5a2c3014-9ff9-47d8-9fe7-87e9c9a11dfe",
00:13:04.447  "assigned_rate_limits": {
00:13:04.447  "rw_ios_per_sec": 0,
00:13:04.447  "rw_mbytes_per_sec": 0,
00:13:04.447  "r_mbytes_per_sec": 0,
00:13:04.447  "w_mbytes_per_sec": 0
00:13:04.447  },
00:13:04.447  "claimed": false,
00:13:04.447  "zoned": false,
00:13:04.447  "supported_io_types": {
00:13:04.447  "read": true,
00:13:04.447  "write": true,
00:13:04.447  "unmap": true,
00:13:04.447  "write_zeroes": true,
00:13:04.447  "flush": true,
00:13:04.447  "reset": true,
00:13:04.447  "compare": false,
00:13:04.447  "compare_and_write": false,
00:13:04.447  "abort": true,
00:13:04.447  "nvme_admin": false,
00:13:04.447  "nvme_io": false
00:13:04.447  },
00:13:04.447  "memory_domains": [
00:13:04.447  {
00:13:04.447  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:13:04.447  "dma_device_type": 2
00:13:04.447  }
00:13:04.447  ],
00:13:04.447  "driver_specific": {}
00:13:04.447  }
00:13:04.447  ]
00:13:04.447   16:56:57	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:04.447   16:56:57	-- common/autotest_common.sh@905 -- # return 0
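The waitforbdev helper above boils down to polling bdev_get_bdevs until the bdev answers within the timeout; a rough equivalent (helper name and loop granularity are illustrative, not the verbatim implementation):

  wait_for_bdev() {
      local name=$1 timeout_ms=${2:-2000} i
      for ((i = 0; i < timeout_ms / 100; i++)); do
          # -t is the per-call timeout in ms; success means the bdev exists
          ./scripts/rpc.py bdev_get_bdevs -b "$name" -t 100 && return 0
          sleep 0.1
      done
      return 1
  }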
00:13:04.447   16:56:57	-- bdev/blockdev.sh@476 -- # rpc_cmd bdev_error_create Dev_1
00:13:04.447   16:56:57	-- common/autotest_common.sh@561 -- # xtrace_disable
00:13:04.447   16:56:57	-- common/autotest_common.sh@10 -- # set +x
00:13:04.447  true
00:13:04.447   16:56:57	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:04.447   16:56:57	-- bdev/blockdev.sh@477 -- # rpc_cmd bdev_malloc_create -b Dev_2 128 512
00:13:04.447   16:56:57	-- common/autotest_common.sh@561 -- # xtrace_disable
00:13:04.447   16:56:57	-- common/autotest_common.sh@10 -- # set +x
00:13:04.447  Dev_2
00:13:04.447   16:56:57	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:04.447   16:56:57	-- bdev/blockdev.sh@478 -- # waitforbdev Dev_2
00:13:04.447   16:56:57	-- common/autotest_common.sh@897 -- # local bdev_name=Dev_2
00:13:04.447   16:56:57	-- common/autotest_common.sh@898 -- # local bdev_timeout=
00:13:04.447   16:56:57	-- common/autotest_common.sh@899 -- # local i
00:13:04.447   16:56:57	-- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:13:04.447   16:56:57	-- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:13:04.447   16:56:57	-- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine
00:13:04.447   16:56:57	-- common/autotest_common.sh@561 -- # xtrace_disable
00:13:04.447   16:56:57	-- common/autotest_common.sh@10 -- # set +x
00:13:04.447   16:56:57	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:04.447   16:56:57	-- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Dev_2 -t 2000
00:13:04.447   16:56:57	-- common/autotest_common.sh@561 -- # xtrace_disable
00:13:04.447   16:56:57	-- common/autotest_common.sh@10 -- # set +x
00:13:04.447  [
00:13:04.447  {
00:13:04.447  "name": "Dev_2",
00:13:04.447  "aliases": [
00:13:04.447  "73b77242-2ac4-41b5-92bb-07c595f2f5fb"
00:13:04.447  ],
00:13:04.447  "product_name": "Malloc disk",
00:13:04.447  "block_size": 512,
00:13:04.447  "num_blocks": 262144,
00:13:04.447  "uuid": "73b77242-2ac4-41b5-92bb-07c595f2f5fb",
00:13:04.447  "assigned_rate_limits": {
00:13:04.447  "rw_ios_per_sec": 0,
00:13:04.447  "rw_mbytes_per_sec": 0,
00:13:04.447  "r_mbytes_per_sec": 0,
00:13:04.447  "w_mbytes_per_sec": 0
00:13:04.447  },
00:13:04.447  "claimed": false,
00:13:04.447  "zoned": false,
00:13:04.447  "supported_io_types": {
00:13:04.447  "read": true,
00:13:04.447  "write": true,
00:13:04.447  "unmap": true,
00:13:04.447  "write_zeroes": true,
00:13:04.447  "flush": true,
00:13:04.447  "reset": true,
00:13:04.447  "compare": false,
00:13:04.447  "compare_and_write": false,
00:13:04.447  "abort": true,
00:13:04.447  "nvme_admin": false,
00:13:04.447  "nvme_io": false
00:13:04.447  },
00:13:04.447  "memory_domains": [
00:13:04.447  {
00:13:04.447  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:13:04.447  "dma_device_type": 2
00:13:04.447  }
00:13:04.447  ],
00:13:04.447  "driver_specific": {}
00:13:04.447  }
00:13:04.447  ]
00:13:04.447   16:56:57	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:04.447   16:56:57	-- common/autotest_common.sh@905 -- # return 0
00:13:04.447   16:56:57	-- bdev/blockdev.sh@479 -- # rpc_cmd bdev_error_inject_error EE_Dev_1 all failure -n 5
00:13:04.447   16:56:57	-- common/autotest_common.sh@561 -- # xtrace_disable
00:13:04.447   16:56:57	-- common/autotest_common.sh@10 -- # set +x
00:13:04.447   16:56:57	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:04.447   16:56:57	-- bdev/blockdev.sh@482 -- # sleep 1
00:13:04.447   16:56:57	-- bdev/blockdev.sh@481 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 1 perform_tests
00:13:04.447  Running I/O for 5 seconds...
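The injection armed above makes EE_Dev_1 (the error bdev stacked on Dev_1) fail its next 5 I/Os of any type; those failures show up later in the Fail/s column. The two RPCs involved, as plain commands:

  ./scripts/rpc.py bdev_error_create Dev_1                             # stacks EE_Dev_1 on Dev_1
  ./scripts/rpc.py bdev_error_inject_error EE_Dev_1 all failure -n 5   # fail the next 5 I/Os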
00:13:05.384  Process still exists because continue-on-error is set. Pid: 122042
00:13:05.384   16:56:58	-- bdev/blockdev.sh@485 -- # kill -0 122042
00:13:05.384   16:56:58	-- bdev/blockdev.sh@486 -- # echo 'Process still exists because continue-on-error is set. Pid: 122042'
00:13:05.384   16:56:58	-- bdev/blockdev.sh@493 -- # rpc_cmd bdev_error_delete EE_Dev_1
00:13:05.384   16:56:58	-- common/autotest_common.sh@561 -- # xtrace_disable
00:13:05.384   16:56:58	-- common/autotest_common.sh@10 -- # set +x
00:13:05.384   16:56:58	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:05.384   16:56:58	-- bdev/blockdev.sh@494 -- # rpc_cmd bdev_malloc_delete Dev_1
00:13:05.384   16:56:58	-- common/autotest_common.sh@561 -- # xtrace_disable
00:13:05.384   16:56:58	-- common/autotest_common.sh@10 -- # set +x
00:13:05.384   16:56:58	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:05.384   16:56:58	-- bdev/blockdev.sh@495 -- # sleep 5
00:13:05.384  Timeout while waiting for response:
00:13:09.575                                                                                                  Latency(us)
[2024-11-19T16:57:02.439Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
[2024-11-19T16:57:02.439Z]  Job: EE_Dev_1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096)
00:13:09.576  	 EE_Dev_1            :       0.94   49296.05     192.56       5.34     0.00     322.23     140.43     663.16
[2024-11-19T16:57:02.440Z]  Job: Dev_2 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096)
00:13:09.576  	 Dev_2               :       5.00  106342.18     415.40       0.00     0.00     148.16      64.85   35202.19
[2024-11-19T16:57:02.440Z]  ===================================================================================================================
[2024-11-19T16:57:02.440Z]  Total                       :             155638.24     607.96       5.34     0.00     162.07      64.85   35202.19
00:13:10.512   16:57:03	-- bdev/blockdev.sh@497 -- # killprocess 122042
00:13:10.513   16:57:03	-- common/autotest_common.sh@936 -- # '[' -z 122042 ']'
00:13:10.513   16:57:03	-- common/autotest_common.sh@940 -- # kill -0 122042
00:13:10.513    16:57:03	-- common/autotest_common.sh@941 -- # uname
00:13:10.513   16:57:03	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:13:10.513    16:57:03	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 122042
00:13:10.513  killing process with pid 122042
00:13:10.513  Received shutdown signal, test time was about 5.000000 seconds
00:13:10.513                                                                                                  Latency(us)
[2024-11-19T16:57:03.377Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
[2024-11-19T16:57:03.377Z]  ===================================================================================================================
[2024-11-19T16:57:03.377Z]  Total                       :                  0.00       0.00       0.00     0.00       0.00       0.00       0.00
00:13:10.513   16:57:03	-- common/autotest_common.sh@942 -- # process_name=reactor_1
00:13:10.513   16:57:03	-- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:13:10.513   16:57:03	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 122042'
00:13:10.513   16:57:03	-- common/autotest_common.sh@955 -- # kill 122042
00:13:10.513   16:57:03	-- common/autotest_common.sh@960 -- # wait 122042
00:13:11.080  Process error testing pid: 122152
00:13:11.080  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:13:11.080   16:57:03	-- bdev/blockdev.sh@501 -- # ERR_PID=122152
00:13:11.080   16:57:03	-- bdev/blockdev.sh@500 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x2 -q 16 -o 4096 -w randread -t 5 ''
00:13:11.080   16:57:03	-- bdev/blockdev.sh@502 -- # echo 'Process error testing pid: 122152'
00:13:11.080   16:57:03	-- bdev/blockdev.sh@503 -- # waitforlisten 122152
00:13:11.080   16:57:03	-- common/autotest_common.sh@829 -- # '[' -z 122152 ']'
00:13:11.080   16:57:03	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:13:11.080   16:57:03	-- common/autotest_common.sh@834 -- # local max_retries=100
00:13:11.080   16:57:03	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:13:11.080   16:57:03	-- common/autotest_common.sh@838 -- # xtrace_disable
00:13:11.080   16:57:03	-- common/autotest_common.sh@10 -- # set +x
00:13:11.080  [2024-11-19 16:57:03.762313] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:13:11.080  [2024-11-19 16:57:03.762805] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid122152 ]
00:13:11.080  [2024-11-19 16:57:03.916193] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:11.338  [2024-11-19 16:57:03.990607] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:13:11.904   16:57:04	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:13:11.904   16:57:04	-- common/autotest_common.sh@862 -- # return 0
00:13:11.904   16:57:04	-- bdev/blockdev.sh@505 -- # rpc_cmd bdev_malloc_create -b Dev_1 128 512
00:13:11.904   16:57:04	-- common/autotest_common.sh@561 -- # xtrace_disable
00:13:11.904   16:57:04	-- common/autotest_common.sh@10 -- # set +x
00:13:11.904  Dev_1
00:13:11.904   16:57:04	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:11.904   16:57:04	-- bdev/blockdev.sh@506 -- # waitforbdev Dev_1
00:13:11.904   16:57:04	-- common/autotest_common.sh@897 -- # local bdev_name=Dev_1
00:13:11.904   16:57:04	-- common/autotest_common.sh@898 -- # local bdev_timeout=
00:13:11.904   16:57:04	-- common/autotest_common.sh@899 -- # local i
00:13:11.904   16:57:04	-- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:13:11.905   16:57:04	-- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:13:11.905   16:57:04	-- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine
00:13:11.905   16:57:04	-- common/autotest_common.sh@561 -- # xtrace_disable
00:13:11.905   16:57:04	-- common/autotest_common.sh@10 -- # set +x
00:13:12.164   16:57:04	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:12.164   16:57:04	-- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Dev_1 -t 2000
00:13:12.164   16:57:04	-- common/autotest_common.sh@561 -- # xtrace_disable
00:13:12.165   16:57:04	-- common/autotest_common.sh@10 -- # set +x
00:13:12.165  [
00:13:12.165  {
00:13:12.165  "name": "Dev_1",
00:13:12.165  "aliases": [
00:13:12.165  "93a65be8-08bf-41d1-a183-467b1d9049b5"
00:13:12.165  ],
00:13:12.165  "product_name": "Malloc disk",
00:13:12.165  "block_size": 512,
00:13:12.165  "num_blocks": 262144,
00:13:12.165  "uuid": "93a65be8-08bf-41d1-a183-467b1d9049b5",
00:13:12.165  "assigned_rate_limits": {
00:13:12.165  "rw_ios_per_sec": 0,
00:13:12.165  "rw_mbytes_per_sec": 0,
00:13:12.165  "r_mbytes_per_sec": 0,
00:13:12.165  "w_mbytes_per_sec": 0
00:13:12.165  },
00:13:12.165  "claimed": false,
00:13:12.165  "zoned": false,
00:13:12.165  "supported_io_types": {
00:13:12.165  "read": true,
00:13:12.165  "write": true,
00:13:12.165  "unmap": true,
00:13:12.165  "write_zeroes": true,
00:13:12.165  "flush": true,
00:13:12.165  "reset": true,
00:13:12.165  "compare": false,
00:13:12.165  "compare_and_write": false,
00:13:12.165  "abort": true,
00:13:12.165  "nvme_admin": false,
00:13:12.165  "nvme_io": false
00:13:12.165  },
00:13:12.165  "memory_domains": [
00:13:12.165  {
00:13:12.165  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:13:12.165  "dma_device_type": 2
00:13:12.165  }
00:13:12.165  ],
00:13:12.165  "driver_specific": {}
00:13:12.165  }
00:13:12.165  ]
00:13:12.165   16:57:04	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:12.165   16:57:04	-- common/autotest_common.sh@905 -- # return 0
00:13:12.165   16:57:04	-- bdev/blockdev.sh@507 -- # rpc_cmd bdev_error_create Dev_1
00:13:12.165   16:57:04	-- common/autotest_common.sh@561 -- # xtrace_disable
00:13:12.165   16:57:04	-- common/autotest_common.sh@10 -- # set +x
00:13:12.165  true
00:13:12.165   16:57:04	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:12.165   16:57:04	-- bdev/blockdev.sh@508 -- # rpc_cmd bdev_malloc_create -b Dev_2 128 512
00:13:12.165   16:57:04	-- common/autotest_common.sh@561 -- # xtrace_disable
00:13:12.165   16:57:04	-- common/autotest_common.sh@10 -- # set +x
00:13:12.165  Dev_2
00:13:12.165   16:57:04	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:12.165   16:57:04	-- bdev/blockdev.sh@509 -- # waitforbdev Dev_2
00:13:12.165   16:57:04	-- common/autotest_common.sh@897 -- # local bdev_name=Dev_2
00:13:12.165   16:57:04	-- common/autotest_common.sh@898 -- # local bdev_timeout=
00:13:12.165   16:57:04	-- common/autotest_common.sh@899 -- # local i
00:13:12.165   16:57:04	-- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:13:12.165   16:57:04	-- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:13:12.165   16:57:04	-- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine
00:13:12.165   16:57:04	-- common/autotest_common.sh@561 -- # xtrace_disable
00:13:12.165   16:57:04	-- common/autotest_common.sh@10 -- # set +x
00:13:12.165   16:57:04	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:12.165   16:57:04	-- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Dev_2 -t 2000
00:13:12.165   16:57:04	-- common/autotest_common.sh@561 -- # xtrace_disable
00:13:12.165   16:57:04	-- common/autotest_common.sh@10 -- # set +x
00:13:12.165  [
00:13:12.165  {
00:13:12.165  "name": "Dev_2",
00:13:12.165  "aliases": [
00:13:12.165  "e5d7062f-bb16-4bab-a351-249365dd3932"
00:13:12.165  ],
00:13:12.165  "product_name": "Malloc disk",
00:13:12.165  "block_size": 512,
00:13:12.165  "num_blocks": 262144,
00:13:12.165  "uuid": "e5d7062f-bb16-4bab-a351-249365dd3932",
00:13:12.165  "assigned_rate_limits": {
00:13:12.165  "rw_ios_per_sec": 0,
00:13:12.165  "rw_mbytes_per_sec": 0,
00:13:12.165  "r_mbytes_per_sec": 0,
00:13:12.165  "w_mbytes_per_sec": 0
00:13:12.165  },
00:13:12.165  "claimed": false,
00:13:12.165  "zoned": false,
00:13:12.165  "supported_io_types": {
00:13:12.165  "read": true,
00:13:12.165  "write": true,
00:13:12.165  "unmap": true,
00:13:12.165  "write_zeroes": true,
00:13:12.165  "flush": true,
00:13:12.165  "reset": true,
00:13:12.165  "compare": false,
00:13:12.165  "compare_and_write": false,
00:13:12.165  "abort": true,
00:13:12.165  "nvme_admin": false,
00:13:12.165  "nvme_io": false
00:13:12.165  },
00:13:12.165  "memory_domains": [
00:13:12.165  {
00:13:12.165  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:13:12.165  "dma_device_type": 2
00:13:12.165  }
00:13:12.165  ],
00:13:12.165  "driver_specific": {}
00:13:12.165  }
00:13:12.165  ]
00:13:12.165   16:57:04	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:12.165   16:57:04	-- common/autotest_common.sh@905 -- # return 0
00:13:12.165   16:57:04	-- bdev/blockdev.sh@510 -- # rpc_cmd bdev_error_inject_error EE_Dev_1 all failure -n 5
00:13:12.165   16:57:04	-- common/autotest_common.sh@561 -- # xtrace_disable
00:13:12.165   16:57:04	-- common/autotest_common.sh@10 -- # set +x
00:13:12.165   16:57:04	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:12.165   16:57:04	-- bdev/blockdev.sh@513 -- # NOT wait 122152
00:13:12.165   16:57:04	-- bdev/blockdev.sh@512 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 1 perform_tests
00:13:12.165   16:57:04	-- common/autotest_common.sh@650 -- # local es=0
00:13:12.165   16:57:04	-- common/autotest_common.sh@652 -- # valid_exec_arg wait 122152
00:13:12.165   16:57:04	-- common/autotest_common.sh@638 -- # local arg=wait
00:13:12.165   16:57:04	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:13:12.165    16:57:04	-- common/autotest_common.sh@642 -- # type -t wait
00:13:12.165   16:57:04	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:13:12.165   16:57:04	-- common/autotest_common.sh@653 -- # wait 122152
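Unlike the first pass, this wait is wrapped in NOT: with continue-on-error left unset, the injected failures should abort bdevperf, so the test only passes if the wait exits non-zero. A simplified sketch of what the NOT helper does:

  NOT() {
      if "$@"; then
          return 1   # command unexpectedly succeeded
      fi
      return 0       # command failed, which is what was expected
  }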
00:13:12.165  Running I/O for 5 seconds...
00:13:12.165  task offset: 50896 on job bdev=EE_Dev_1 fails
00:13:12.165                                                                                                  Latency(us)
[2024-11-19T16:57:05.029Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
[2024-11-19T16:57:05.029Z]  Job: EE_Dev_1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096)
[2024-11-19T16:57:05.029Z]  Job: EE_Dev_1 ended in about 0.00 seconds with error
00:13:12.165  	 EE_Dev_1            :       0.00   30178.33     117.88    6858.71     0.00     351.79     145.31     643.66
[2024-11-19T16:57:05.029Z]  Job: Dev_2 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096)
00:13:12.165  	 Dev_2               :       0.00   22023.40      86.03       0.00     0.00     494.57     139.46     897.22
[2024-11-19T16:57:05.029Z]  ===================================================================================================================
[2024-11-19T16:57:05.029Z]  Total                       :              52201.73     203.91    6858.71     0.00     429.23     139.46     897.22
00:13:12.165  [2024-11-19 16:57:04.958507] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:13:12.165  request:
00:13:12.165  {
00:13:12.165    "method": "perform_tests",
00:13:12.165    "req_id": 1
00:13:12.165  }
00:13:12.165  Got JSON-RPC error response
00:13:12.165  response:
00:13:12.165  {
00:13:12.165    "code": -32603,
00:13:12.165    "message": "bdevperf failed with error Operation not permitted"
00:13:12.165  }
00:13:12.755   16:57:05	-- common/autotest_common.sh@653 -- # es=255
00:13:12.755   16:57:05	-- common/autotest_common.sh@661 -- # (( es > 128 ))
00:13:12.755   16:57:05	-- common/autotest_common.sh@662 -- # es=127
00:13:12.755   16:57:05	-- common/autotest_common.sh@663 -- # case "$es" in
00:13:12.755   16:57:05	-- common/autotest_common.sh@670 -- # es=1
00:13:12.755   16:57:05	-- common/autotest_common.sh@677 -- # (( !es == 0 ))
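The es bookkeeping above normalizes the observed failure: 255 (the RPC error) falls in the >128 range, is clamped to 127, and any surviving non-zero status is collapsed to a plain 1 before the final assertion. Roughly (the real helper special-cases more statuses than shown here):

  es=255
  (( es > 128 )) && es=127   # clamp signal-style / out-of-range statuses
  es=1                       # any remaining failure becomes a generic 1
  (( !es == 0 ))             # assert that a failure actually occurred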
00:13:12.755  
00:13:12.755  real	0m9.546s
00:13:12.755  user	0m9.567s
00:13:12.755  sys	0m0.945s
00:13:12.755   16:57:05	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:13:12.755  ************************************
00:13:12.755  END TEST bdev_error
00:13:12.755  ************************************
00:13:12.755   16:57:05	-- common/autotest_common.sh@10 -- # set +x
00:13:12.755   16:57:05	-- bdev/blockdev.sh@789 -- # run_test bdev_stat stat_test_suite ''
00:13:12.755   16:57:05	-- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']'
00:13:12.755   16:57:05	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:13:12.755   16:57:05	-- common/autotest_common.sh@10 -- # set +x
00:13:12.755  ************************************
00:13:12.755  START TEST bdev_stat
00:13:12.755  ************************************
00:13:12.755   16:57:05	-- common/autotest_common.sh@1114 -- # stat_test_suite ''
00:13:12.755   16:57:05	-- bdev/blockdev.sh@590 -- # STAT_DEV=Malloc_STAT
00:13:12.755   16:57:05	-- bdev/blockdev.sh@594 -- # STAT_PID=122203
00:13:12.755   16:57:05	-- bdev/blockdev.sh@595 -- # echo 'Process Bdev IO statistics testing pid: 122203'
00:13:12.755  Process Bdev IO statistics testing pid: 122203
00:13:12.755   16:57:05	-- bdev/blockdev.sh@596 -- # trap 'cleanup; killprocess $STAT_PID; exit 1' SIGINT SIGTERM EXIT
00:13:12.755   16:57:05	-- bdev/blockdev.sh@593 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x3 -q 256 -o 4096 -w randread -t 10 -C ''
00:13:12.755   16:57:05	-- bdev/blockdev.sh@597 -- # waitforlisten 122203
00:13:12.755   16:57:05	-- common/autotest_common.sh@829 -- # '[' -z 122203 ']'
00:13:12.755   16:57:05	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:13:12.755   16:57:05	-- common/autotest_common.sh@834 -- # local max_retries=100
00:13:12.755   16:57:05	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:13:12.755  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:13:12.755   16:57:05	-- common/autotest_common.sh@838 -- # xtrace_disable
00:13:12.755   16:57:05	-- common/autotest_common.sh@10 -- # set +x
00:13:13.040  [2024-11-19 16:57:05.657034] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:13:13.040  [2024-11-19 16:57:05.657601] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid122203 ]
00:13:13.040  [2024-11-19 16:57:05.821384] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2
00:13:13.040  [2024-11-19 16:57:05.874530] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:13:13.040  [2024-11-19 16:57:05.874540] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:13:13.975   16:57:06	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:13:13.975   16:57:06	-- common/autotest_common.sh@862 -- # return 0
00:13:13.975   16:57:06	-- bdev/blockdev.sh@599 -- # rpc_cmd bdev_malloc_create -b Malloc_STAT 128 512
00:13:13.975   16:57:06	-- common/autotest_common.sh@561 -- # xtrace_disable
00:13:13.975   16:57:06	-- common/autotest_common.sh@10 -- # set +x
00:13:13.975  Malloc_STAT
00:13:13.975   16:57:06	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:13.975   16:57:06	-- bdev/blockdev.sh@600 -- # waitforbdev Malloc_STAT
00:13:13.975   16:57:06	-- common/autotest_common.sh@897 -- # local bdev_name=Malloc_STAT
00:13:13.975   16:57:06	-- common/autotest_common.sh@898 -- # local bdev_timeout=
00:13:13.975   16:57:06	-- common/autotest_common.sh@899 -- # local i
00:13:13.975   16:57:06	-- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:13:13.975   16:57:06	-- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:13:13.975   16:57:06	-- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine
00:13:13.975   16:57:06	-- common/autotest_common.sh@561 -- # xtrace_disable
00:13:13.975   16:57:06	-- common/autotest_common.sh@10 -- # set +x
00:13:13.975   16:57:06	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:13.975   16:57:06	-- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Malloc_STAT -t 2000
00:13:13.975   16:57:06	-- common/autotest_common.sh@561 -- # xtrace_disable
00:13:13.975   16:57:06	-- common/autotest_common.sh@10 -- # set +x
00:13:13.975  [
00:13:13.975  {
00:13:13.975  "name": "Malloc_STAT",
00:13:13.975  "aliases": [
00:13:13.975  "5159803a-6f2b-472d-ae60-a6d30269be30"
00:13:13.975  ],
00:13:13.975  "product_name": "Malloc disk",
00:13:13.975  "block_size": 512,
00:13:13.975  "num_blocks": 262144,
00:13:13.975  "uuid": "5159803a-6f2b-472d-ae60-a6d30269be30",
00:13:13.975  "assigned_rate_limits": {
00:13:13.975  "rw_ios_per_sec": 0,
00:13:13.975  "rw_mbytes_per_sec": 0,
00:13:13.975  "r_mbytes_per_sec": 0,
00:13:13.975  "w_mbytes_per_sec": 0
00:13:13.975  },
00:13:13.975  "claimed": false,
00:13:13.975  "zoned": false,
00:13:13.975  "supported_io_types": {
00:13:13.975  "read": true,
00:13:13.975  "write": true,
00:13:13.975  "unmap": true,
00:13:13.975  "write_zeroes": true,
00:13:13.975  "flush": true,
00:13:13.975  "reset": true,
00:13:13.975  "compare": false,
00:13:13.975  "compare_and_write": false,
00:13:13.975  "abort": true,
00:13:13.975  "nvme_admin": false,
00:13:13.975  "nvme_io": false
00:13:13.975  },
00:13:13.975  "memory_domains": [
00:13:13.975  {
00:13:13.975  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:13:13.975  "dma_device_type": 2
00:13:13.975  }
00:13:13.975  ],
00:13:13.975  "driver_specific": {}
00:13:13.975  }
00:13:13.975  ]
00:13:13.975   16:57:06	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:13.975   16:57:06	-- common/autotest_common.sh@905 -- # return 0
00:13:13.975   16:57:06	-- bdev/blockdev.sh@603 -- # sleep 2
00:13:13.975   16:57:06	-- bdev/blockdev.sh@602 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests
00:13:13.975  Running I/O for 10 seconds...
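stat_function_test, which follows, samples bdev_get_iostat while this I/O runs: one aggregate snapshot, one per-channel breakdown, then a second aggregate, and checks that the per-channel sum lands between the two. The raw RPCs, roughly:

  ./scripts/rpc.py bdev_get_iostat -b Malloc_STAT      # aggregate snapshot 1 (io_count1)
  ./scripts/rpc.py bdev_get_iostat -b Malloc_STAT -c   # per-channel breakdown
  ./scripts/rpc.py bdev_get_iostat -b Malloc_STAT      # aggregate snapshot 2 (io_count2)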
00:13:15.878   16:57:08	-- bdev/blockdev.sh@604 -- # stat_function_test Malloc_STAT
00:13:15.878   16:57:08	-- bdev/blockdev.sh@557 -- # local bdev_name=Malloc_STAT
00:13:15.878   16:57:08	-- bdev/blockdev.sh@558 -- # local iostats
00:13:15.878   16:57:08	-- bdev/blockdev.sh@559 -- # local io_count1
00:13:15.878   16:57:08	-- bdev/blockdev.sh@560 -- # local io_count2
00:13:15.878   16:57:08	-- bdev/blockdev.sh@561 -- # local iostats_per_channel
00:13:15.878   16:57:08	-- bdev/blockdev.sh@562 -- # local io_count_per_channel1
00:13:15.878   16:57:08	-- bdev/blockdev.sh@563 -- # local io_count_per_channel2
00:13:15.878   16:57:08	-- bdev/blockdev.sh@564 -- # local io_count_per_channel_all=0
00:13:15.878    16:57:08	-- bdev/blockdev.sh@566 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT
00:13:15.878    16:57:08	-- common/autotest_common.sh@561 -- # xtrace_disable
00:13:15.878    16:57:08	-- common/autotest_common.sh@10 -- # set +x
00:13:15.878    16:57:08	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:15.878   16:57:08	-- bdev/blockdev.sh@566 -- # iostats='{
00:13:15.878  "tick_rate": 2100000000,
00:13:15.878  "ticks": 1526149654848,
00:13:15.878  "bdevs": [
00:13:15.878  {
00:13:15.878  "name": "Malloc_STAT",
00:13:15.878  "bytes_read": 525373952,
00:13:15.878  "num_read_ops": 128259,
00:13:15.878  "bytes_written": 0,
00:13:15.878  "num_write_ops": 0,
00:13:15.878  "bytes_unmapped": 0,
00:13:15.878  "num_unmap_ops": 0,
00:13:15.878  "bytes_copied": 0,
00:13:15.878  "num_copy_ops": 0,
00:13:15.878  "read_latency_ticks": 2019719522398,
00:13:15.878  "max_read_latency_ticks": 22435666,
00:13:15.878  "min_read_latency_ticks": 349794,
00:13:15.878  "write_latency_ticks": 0,
00:13:15.878  "max_write_latency_ticks": 0,
00:13:15.878  "min_write_latency_ticks": 0,
00:13:15.878  "unmap_latency_ticks": 0,
00:13:15.878  "max_unmap_latency_ticks": 0,
00:13:15.878  "min_unmap_latency_ticks": 0,
00:13:15.878  "copy_latency_ticks": 0,
00:13:15.878  "max_copy_latency_ticks": 0,
00:13:15.878  "min_copy_latency_ticks": 0,
00:13:15.878  "io_error": {}
00:13:15.878  }
00:13:15.878  ]
00:13:15.878  }'
00:13:15.878    16:57:08	-- bdev/blockdev.sh@567 -- # jq -r '.bdevs[0].num_read_ops'
00:13:16.137   16:57:08	-- bdev/blockdev.sh@567 -- # io_count1=128259
00:13:16.137    16:57:08	-- bdev/blockdev.sh@569 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT -c
00:13:16.137    16:57:08	-- common/autotest_common.sh@561 -- # xtrace_disable
00:13:16.137    16:57:08	-- common/autotest_common.sh@10 -- # set +x
00:13:16.137    16:57:08	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:16.137   16:57:08	-- bdev/blockdev.sh@569 -- # iostats_per_channel='{
00:13:16.137  "tick_rate": 2100000000,
00:13:16.137  "ticks": 1526303543618,
00:13:16.137  "name": "Malloc_STAT",
00:13:16.137  "channels": [
00:13:16.137  {
00:13:16.137  "thread_id": 2,
00:13:16.137  "bytes_read": 341835776,
00:13:16.137  "num_read_ops": 83456,
00:13:16.137  "bytes_written": 0,
00:13:16.137  "num_write_ops": 0,
00:13:16.137  "bytes_unmapped": 0,
00:13:16.137  "num_unmap_ops": 0,
00:13:16.137  "bytes_copied": 0,
00:13:16.137  "num_copy_ops": 0,
00:13:16.138  "read_latency_ticks": 1047315062182,
00:13:16.138  "max_read_latency_ticks": 17290810,
00:13:16.138  "min_read_latency_ticks": 10285740,
00:13:16.138  "write_latency_ticks": 0,
00:13:16.138  "max_write_latency_ticks": 0,
00:13:16.138  "min_write_latency_ticks": 0,
00:13:16.138  "unmap_latency_ticks": 0,
00:13:16.138  "max_unmap_latency_ticks": 0,
00:13:16.138  "min_unmap_latency_ticks": 0,
00:13:16.138  "copy_latency_ticks": 0,
00:13:16.138  "max_copy_latency_ticks": 0,
00:13:16.138  "min_copy_latency_ticks": 0
00:13:16.138  },
00:13:16.138  {
00:13:16.138  "thread_id": 3,
00:13:16.138  "bytes_read": 203423744,
00:13:16.138  "num_read_ops": 49664,
00:13:16.138  "bytes_written": 0,
00:13:16.138  "num_write_ops": 0,
00:13:16.138  "bytes_unmapped": 0,
00:13:16.138  "num_unmap_ops": 0,
00:13:16.138  "bytes_copied": 0,
00:13:16.138  "num_copy_ops": 0,
00:13:16.138  "read_latency_ticks": 1049623204446,
00:13:16.138  "max_read_latency_ticks": 22435666,
00:13:16.138  "min_read_latency_ticks": 10440732,
00:13:16.138  "write_latency_ticks": 0,
00:13:16.138  "max_write_latency_ticks": 0,
00:13:16.138  "min_write_latency_ticks": 0,
00:13:16.138  "unmap_latency_ticks": 0,
00:13:16.138  "max_unmap_latency_ticks": 0,
00:13:16.138  "min_unmap_latency_ticks": 0,
00:13:16.138  "copy_latency_ticks": 0,
00:13:16.138  "max_copy_latency_ticks": 0,
00:13:16.138  "min_copy_latency_ticks": 0
00:13:16.138  }
00:13:16.138  ]
00:13:16.138  }'
00:13:16.138    16:57:08	-- bdev/blockdev.sh@570 -- # jq -r '.channels[0].num_read_ops'
00:13:16.138   16:57:08	-- bdev/blockdev.sh@570 -- # io_count_per_channel1=83456
00:13:16.138   16:57:08	-- bdev/blockdev.sh@571 -- # io_count_per_channel_all=83456
00:13:16.138    16:57:08	-- bdev/blockdev.sh@572 -- # jq -r '.channels[1].num_read_ops'
00:13:16.138   16:57:08	-- bdev/blockdev.sh@572 -- # io_count_per_channel2=49664
00:13:16.138   16:57:08	-- bdev/blockdev.sh@573 -- # io_count_per_channel_all=133120
00:13:16.138    16:57:08	-- bdev/blockdev.sh@575 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT
00:13:16.138    16:57:08	-- common/autotest_common.sh@561 -- # xtrace_disable
00:13:16.138    16:57:08	-- common/autotest_common.sh@10 -- # set +x
00:13:16.138    16:57:08	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:16.138   16:57:08	-- bdev/blockdev.sh@575 -- # iostats='{
00:13:16.138  "tick_rate": 2100000000,
00:13:16.138  "ticks": 1526548852770,
00:13:16.138  "bdevs": [
00:13:16.138  {
00:13:16.138  "name": "Malloc_STAT",
00:13:16.138  "bytes_read": 576754176,
00:13:16.138  "num_read_ops": 140803,
00:13:16.138  "bytes_written": 0,
00:13:16.138  "num_write_ops": 0,
00:13:16.138  "bytes_unmapped": 0,
00:13:16.138  "num_unmap_ops": 0,
00:13:16.138  "bytes_copied": 0,
00:13:16.138  "num_copy_ops": 0,
00:13:16.138  "read_latency_ticks": 2219940001134,
00:13:16.138  "max_read_latency_ticks": 22600840,
00:13:16.138  "min_read_latency_ticks": 349794,
00:13:16.138  "write_latency_ticks": 0,
00:13:16.138  "max_write_latency_ticks": 0,
00:13:16.138  "min_write_latency_ticks": 0,
00:13:16.138  "unmap_latency_ticks": 0,
00:13:16.138  "max_unmap_latency_ticks": 0,
00:13:16.138  "min_unmap_latency_ticks": 0,
00:13:16.138  "copy_latency_ticks": 0,
00:13:16.138  "max_copy_latency_ticks": 0,
00:13:16.138  "min_copy_latency_ticks": 0,
00:13:16.138  "io_error": {}
00:13:16.138  }
00:13:16.138  ]
00:13:16.138  }'
00:13:16.138    16:57:08	-- bdev/blockdev.sh@576 -- # jq -r '.bdevs[0].num_read_ops'
00:13:16.138   16:57:08	-- bdev/blockdev.sh@576 -- # io_count2=140803
00:13:16.138   16:57:08	-- bdev/blockdev.sh@581 -- # '[' 133120 -lt 128259 ']'
00:13:16.138   16:57:08	-- bdev/blockdev.sh@581 -- # '[' 133120 -gt 140803 ']'
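Those two bracket tests are the consistency check: the summed per-channel count (83456 + 49664 = 133120) must be no lower than the first aggregate snapshot (128259) and no higher than the second (140803), since the channel snapshot was taken between them. Expressed directly as a sketch:

  io_count1=128259; io_count2=140803
  io_count_per_channel_all=$((83456 + 49664))                  # 133120
  [ "$io_count_per_channel_all" -lt "$io_count1" ] && exit 1   # would mean lost I/Os
  [ "$io_count_per_channel_all" -gt "$io_count2" ] && exit 1   # would mean double-counted I/Os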
00:13:16.138   16:57:08	-- bdev/blockdev.sh@606 -- # rpc_cmd bdev_malloc_delete Malloc_STAT
00:13:16.138   16:57:08	-- common/autotest_common.sh@561 -- # xtrace_disable
00:13:16.138   16:57:08	-- common/autotest_common.sh@10 -- # set +x
00:13:16.138                                                                                                  Latency(us)
[2024-11-19T16:57:09.002Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
[2024-11-19T16:57:09.002Z]  Job: Malloc_STAT (Core Mask 0x1, workload: randread, depth: 256, IO size: 4096)
00:13:16.138  	 Malloc_STAT         :       2.15   42500.10     166.02       0.00     0.00    6007.56    1521.37    8426.06
[2024-11-19T16:57:09.002Z]  Job: Malloc_STAT (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096)
00:13:16.138  	 Malloc_STAT         :       2.15   25692.73     100.36       0.00     0.00    9934.30     936.23   10797.84
[2024-11-19T16:57:09.002Z]  ===================================================================================================================
[2024-11-19T16:57:09.002Z]  Total                       :              68192.83     266.38       0.00     0.00    7487.80     936.23   10797.84
00:13:16.138  0
00:13:16.138   16:57:08	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:16.138   16:57:08	-- bdev/blockdev.sh@607 -- # killprocess 122203
00:13:16.138   16:57:08	-- common/autotest_common.sh@936 -- # '[' -z 122203 ']'
00:13:16.138   16:57:08	-- common/autotest_common.sh@940 -- # kill -0 122203
00:13:16.138    16:57:08	-- common/autotest_common.sh@941 -- # uname
00:13:16.138   16:57:08	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:13:16.138    16:57:08	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 122203
00:13:16.397   16:57:08	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:13:16.397  killing process with pid 122203
00:13:16.397  Received shutdown signal, test time was about 2.204359 seconds
00:13:16.397                                                                                                  Latency(us)
[2024-11-19T16:57:09.261Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
[2024-11-19T16:57:09.261Z]  ===================================================================================================================
[2024-11-19T16:57:09.261Z]  Total                       :                  0.00       0.00       0.00     0.00       0.00       0.00       0.00
00:13:16.397   16:57:08	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:13:16.397   16:57:08	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 122203'
00:13:16.397   16:57:08	-- common/autotest_common.sh@955 -- # kill 122203
00:13:16.397   16:57:08	-- common/autotest_common.sh@960 -- # wait 122203
00:13:16.656  ************************************
00:13:16.656  END TEST bdev_stat
00:13:16.656  ************************************
00:13:16.656   16:57:09	-- bdev/blockdev.sh@608 -- # trap - SIGINT SIGTERM EXIT
00:13:16.656  
00:13:16.656  real	0m3.724s
00:13:16.656  user	0m7.396s
00:13:16.656  sys	0m0.364s
00:13:16.656   16:57:09	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:13:16.656   16:57:09	-- common/autotest_common.sh@10 -- # set +x
00:13:16.656   16:57:09	-- bdev/blockdev.sh@792 -- # [[ bdev == gpt ]]
00:13:16.656   16:57:09	-- bdev/blockdev.sh@796 -- # [[ bdev == crypto_sw ]]
00:13:16.656   16:57:09	-- bdev/blockdev.sh@808 -- # trap - SIGINT SIGTERM EXIT
00:13:16.656   16:57:09	-- bdev/blockdev.sh@809 -- # cleanup
00:13:16.656   16:57:09	-- bdev/blockdev.sh@21 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile
00:13:16.656   16:57:09	-- bdev/blockdev.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
00:13:16.656   16:57:09	-- bdev/blockdev.sh@24 -- # [[ bdev == rbd ]]
00:13:16.656   16:57:09	-- bdev/blockdev.sh@28 -- # [[ bdev == daos ]]
00:13:16.656   16:57:09	-- bdev/blockdev.sh@32 -- # [[ bdev = \g\p\t ]]
00:13:16.656   16:57:09	-- bdev/blockdev.sh@38 -- # [[ bdev == xnvme ]]
00:13:16.656  ************************************
00:13:16.656  END TEST blockdev_general
00:13:16.656  ************************************
00:13:16.656  
00:13:16.656  real	1m58.579s
00:13:16.656  user	5m14.195s
00:13:16.656  sys	0m23.892s
00:13:16.656   16:57:09	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:13:16.656   16:57:09	-- common/autotest_common.sh@10 -- # set +x
00:13:16.656   16:57:09	-- spdk/autotest.sh@183 -- # run_test bdev_raid /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh
00:13:16.656   16:57:09	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:13:16.656   16:57:09	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:13:16.656   16:57:09	-- common/autotest_common.sh@10 -- # set +x
00:13:16.656  ************************************
00:13:16.656  START TEST bdev_raid
00:13:16.656  ************************************
00:13:16.656   16:57:09	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh
00:13:16.915  * Looking for test storage...
00:13:16.915  * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev
00:13:16.915    16:57:09	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:13:16.915     16:57:09	-- common/autotest_common.sh@1690 -- # lcov --version
00:13:16.915     16:57:09	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:13:16.915    16:57:09	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:13:16.915    16:57:09	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:13:16.915    16:57:09	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:13:16.915    16:57:09	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:13:16.915    16:57:09	-- scripts/common.sh@335 -- # IFS=.-:
00:13:16.915    16:57:09	-- scripts/common.sh@335 -- # read -ra ver1
00:13:16.915    16:57:09	-- scripts/common.sh@336 -- # IFS=.-:
00:13:16.915    16:57:09	-- scripts/common.sh@336 -- # read -ra ver2
00:13:16.915    16:57:09	-- scripts/common.sh@337 -- # local 'op=<'
00:13:16.915    16:57:09	-- scripts/common.sh@339 -- # ver1_l=2
00:13:16.915    16:57:09	-- scripts/common.sh@340 -- # ver2_l=1
00:13:16.915    16:57:09	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:13:16.915    16:57:09	-- scripts/common.sh@343 -- # case "$op" in
00:13:16.915    16:57:09	-- scripts/common.sh@344 -- # : 1
00:13:16.915    16:57:09	-- scripts/common.sh@363 -- # (( v = 0 ))
00:13:16.915    16:57:09	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:13:16.915     16:57:09	-- scripts/common.sh@364 -- # decimal 1
00:13:16.915     16:57:09	-- scripts/common.sh@352 -- # local d=1
00:13:16.915     16:57:09	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:13:16.915     16:57:09	-- scripts/common.sh@354 -- # echo 1
00:13:16.915    16:57:09	-- scripts/common.sh@364 -- # ver1[v]=1
00:13:16.915     16:57:09	-- scripts/common.sh@365 -- # decimal 2
00:13:16.916     16:57:09	-- scripts/common.sh@352 -- # local d=2
00:13:16.916     16:57:09	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:13:16.916     16:57:09	-- scripts/common.sh@354 -- # echo 2
00:13:16.916    16:57:09	-- scripts/common.sh@365 -- # ver2[v]=2
00:13:16.916    16:57:09	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:13:16.916    16:57:09	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:13:16.916    16:57:09	-- scripts/common.sh@367 -- # return 0
00:13:16.916    16:57:09	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:13:16.916    16:57:09	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:13:16.916  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:13:16.916  		--rc genhtml_branch_coverage=1
00:13:16.916  		--rc genhtml_function_coverage=1
00:13:16.916  		--rc genhtml_legend=1
00:13:16.916  		--rc geninfo_all_blocks=1
00:13:16.916  		--rc geninfo_unexecuted_blocks=1
00:13:16.916  		
00:13:16.916  		'
00:13:16.916    16:57:09	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:13:16.916  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:13:16.916  		--rc genhtml_branch_coverage=1
00:13:16.916  		--rc genhtml_function_coverage=1
00:13:16.916  		--rc genhtml_legend=1
00:13:16.916  		--rc geninfo_all_blocks=1
00:13:16.916  		--rc geninfo_unexecuted_blocks=1
00:13:16.916  		
00:13:16.916  		'
00:13:16.916    16:57:09	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:13:16.916  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:13:16.916  		--rc genhtml_branch_coverage=1
00:13:16.916  		--rc genhtml_function_coverage=1
00:13:16.916  		--rc genhtml_legend=1
00:13:16.916  		--rc geninfo_all_blocks=1
00:13:16.916  		--rc geninfo_unexecuted_blocks=1
00:13:16.916  		
00:13:16.916  		'
00:13:16.916    16:57:09	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:13:16.916  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:13:16.916  		--rc genhtml_branch_coverage=1
00:13:16.916  		--rc genhtml_function_coverage=1
00:13:16.916  		--rc genhtml_legend=1
00:13:16.916  		--rc geninfo_all_blocks=1
00:13:16.916  		--rc geninfo_unexecuted_blocks=1
00:13:16.916  		
00:13:16.916  		'
00:13:16.916   16:57:09	-- bdev/bdev_raid.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh
00:13:16.916    16:57:09	-- bdev/nbd_common.sh@6 -- # set -e
00:13:16.916   16:57:09	-- bdev/bdev_raid.sh@14 -- # rpc_py='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock'
00:13:16.916   16:57:09	-- bdev/bdev_raid.sh@714 -- # trap 'on_error_exit;' ERR
00:13:16.916    16:57:09	-- bdev/bdev_raid.sh@716 -- # uname -s
00:13:16.916   16:57:09	-- bdev/bdev_raid.sh@716 -- # '[' Linux = Linux ']'
00:13:16.916   16:57:09	-- bdev/bdev_raid.sh@716 -- # modprobe -n nbd
00:13:16.916   16:57:09	-- bdev/bdev_raid.sh@717 -- # has_nbd=true
00:13:16.916   16:57:09	-- bdev/bdev_raid.sh@718 -- # modprobe nbd
00:13:16.916   16:57:09	-- bdev/bdev_raid.sh@719 -- # run_test raid_function_test_raid0 raid_function_test raid0
00:13:16.916   16:57:09	-- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']'
00:13:16.916   16:57:09	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:13:16.916   16:57:09	-- common/autotest_common.sh@10 -- # set +x
00:13:16.916  ************************************
00:13:16.916  START TEST raid_function_test_raid0
00:13:16.916  ************************************
00:13:16.916   16:57:09	-- common/autotest_common.sh@1114 -- # raid_function_test raid0
00:13:16.916   16:57:09	-- bdev/bdev_raid.sh@81 -- # local raid_level=raid0
00:13:16.916   16:57:09	-- bdev/bdev_raid.sh@82 -- # local nbd=/dev/nbd0
00:13:16.916   16:57:09	-- bdev/bdev_raid.sh@83 -- # local raid_bdev
00:13:16.916   16:57:09	-- bdev/bdev_raid.sh@86 -- # raid_pid=122350
00:13:16.916   16:57:09	-- bdev/bdev_raid.sh@87 -- # echo 'Process raid pid: 122350'
00:13:16.916   16:57:09	-- bdev/bdev_raid.sh@85 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid
00:13:16.916  Process raid pid: 122350
00:13:16.916   16:57:09	-- bdev/bdev_raid.sh@88 -- # waitforlisten 122350 /var/tmp/spdk-raid.sock
00:13:16.916   16:57:09	-- common/autotest_common.sh@829 -- # '[' -z 122350 ']'
00:13:16.916   16:57:09	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock
00:13:16.916   16:57:09	-- common/autotest_common.sh@834 -- # local max_retries=100
00:13:16.916   16:57:09	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...'
00:13:16.916  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...
00:13:16.916   16:57:09	-- common/autotest_common.sh@838 -- # xtrace_disable
00:13:16.916   16:57:09	-- common/autotest_common.sh@10 -- # set +x
00:13:16.916  [2024-11-19 16:57:09.749091] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:13:16.916  [2024-11-19 16:57:09.750225] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:13:17.176  [2024-11-19 16:57:09.907901] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:17.176  [2024-11-19 16:57:09.950949] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:13:17.176  [2024-11-19 16:57:09.992556] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:13:18.110   16:57:10	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:13:18.110   16:57:10	-- common/autotest_common.sh@862 -- # return 0
00:13:18.110   16:57:10	-- bdev/bdev_raid.sh@90 -- # configure_raid_bdev raid0
00:13:18.110   16:57:10	-- bdev/bdev_raid.sh@67 -- # local raid_level=raid0
00:13:18.110   16:57:10	-- bdev/bdev_raid.sh@68 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/bdev/rpcs.txt
00:13:18.110   16:57:10	-- bdev/bdev_raid.sh@70 -- # cat
00:13:18.110   16:57:10	-- bdev/bdev_raid.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock
00:13:18.110  [2024-11-19 16:57:10.909670] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed
00:13:18.110  [2024-11-19 16:57:10.912265] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed
00:13:18.110  [2024-11-19 16:57:10.912469] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006080
00:13:18.110  [2024-11-19 16:57:10.912597] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512
00:13:18.110  [2024-11-19 16:57:10.912794] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000001f80
00:13:18.110  [2024-11-19 16:57:10.913451] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006080
00:13:18.110  [2024-11-19 16:57:10.913581] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x616000006080
00:13:18.110  [2024-11-19 16:57:10.913895] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:13:18.110  Base_1
00:13:18.110  Base_2
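configure_raid_bdev above batches its RPCs into rpcs.txt and cats them through rpc.py in one shot; for raid0 the batch amounts to the following (base bdev sizes and strip size are assumptions inferred from the 131072 x 512 geometry logged above, not copied from the file):

  # illustrative rpcs.txt contents
  bdev_malloc_create -b Base_1 32 512
  bdev_malloc_create -b Base_2 32 512
  bdev_raid_create -n raid -z 64 -r raid0 -b "Base_1 Base_2"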
00:13:18.110   16:57:10	-- bdev/bdev_raid.sh@77 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/bdev/rpcs.txt
00:13:18.110    16:57:10	-- bdev/bdev_raid.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online
00:13:18.110    16:57:10	-- bdev/bdev_raid.sh@91 -- # jq -r '.[0]["name"] | select(.)'
00:13:18.369   16:57:11	-- bdev/bdev_raid.sh@91 -- # raid_bdev=raid
00:13:18.369   16:57:11	-- bdev/bdev_raid.sh@92 -- # '[' raid = '' ']'
00:13:18.369   16:57:11	-- bdev/bdev_raid.sh@97 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid /dev/nbd0
00:13:18.369   16:57:11	-- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock
00:13:18.369   16:57:11	-- bdev/nbd_common.sh@10 -- # bdev_list=('raid')
00:13:18.369   16:57:11	-- bdev/nbd_common.sh@10 -- # local bdev_list
00:13:18.369   16:57:11	-- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0')
00:13:18.369   16:57:11	-- bdev/nbd_common.sh@11 -- # local nbd_list
00:13:18.369   16:57:11	-- bdev/nbd_common.sh@12 -- # local i
00:13:18.369   16:57:11	-- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:13:18.369   16:57:11	-- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:13:18.369   16:57:11	-- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid /dev/nbd0
00:13:18.628  [2024-11-19 16:57:11.418006] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002120
00:13:18.628  /dev/nbd0
00:13:18.628    16:57:11	-- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:13:18.628   16:57:11	-- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:13:18.628   16:57:11	-- common/autotest_common.sh@866 -- # local nbd_name=nbd0
00:13:18.628   16:57:11	-- common/autotest_common.sh@867 -- # local i
00:13:18.628   16:57:11	-- common/autotest_common.sh@869 -- # (( i = 1 ))
00:13:18.628   16:57:11	-- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:13:18.628   16:57:11	-- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions
00:13:18.628   16:57:11	-- common/autotest_common.sh@871 -- # break
00:13:18.628   16:57:11	-- common/autotest_common.sh@882 -- # (( i = 1 ))
00:13:18.628   16:57:11	-- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:13:18.628   16:57:11	-- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:13:18.628  1+0 records in
00:13:18.628  1+0 records out
00:13:18.628  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000365322 s, 11.2 MB/s
00:13:18.628    16:57:11	-- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:13:18.628   16:57:11	-- common/autotest_common.sh@884 -- # size=4096
00:13:18.628   16:57:11	-- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:13:18.628   16:57:11	-- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:13:18.628   16:57:11	-- common/autotest_common.sh@887 -- # return 0
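waitfornbd above considers the device ready once it appears in /proc/partitions and a single 4 KiB O_DIRECT read succeeds; condensed into a sketch (not the verbatim helper):

  wait_for_nbd() {
      local nbd=$1 i
      for ((i = 1; i <= 20; i++)); do
          grep -q -w "$nbd" /proc/partitions && break
          sleep 0.1
      done
      # the read doubles as a liveness probe and feeds the size check above
      dd if=/dev/$nbd of=/tmp/nbdtest bs=4096 count=1 iflag=direct
  }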
00:13:18.887   16:57:11	-- bdev/nbd_common.sh@14 -- # (( i++ ))
00:13:18.887   16:57:11	-- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:13:18.887    16:57:11	-- bdev/bdev_raid.sh@98 -- # nbd_get_count /var/tmp/spdk-raid.sock
00:13:18.887    16:57:11	-- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock
00:13:18.887     16:57:11	-- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks
00:13:18.887    16:57:11	-- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:13:18.887    {
00:13:18.887      "nbd_device": "/dev/nbd0",
00:13:18.887      "bdev_name": "raid"
00:13:18.887    }
00:13:18.887  ]'
00:13:18.887     16:57:11	-- bdev/nbd_common.sh@64 -- # echo '[
00:13:18.887    {
00:13:18.887      "nbd_device": "/dev/nbd0",
00:13:18.887      "bdev_name": "raid"
00:13:18.887    }
00:13:18.887  ]'
00:13:18.887     16:57:11	-- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:13:18.887    16:57:11	-- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0
00:13:18.887     16:57:11	-- bdev/nbd_common.sh@65 -- # echo /dev/nbd0
00:13:18.887     16:57:11	-- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:13:18.887    16:57:11	-- bdev/nbd_common.sh@65 -- # count=1
00:13:18.887    16:57:11	-- bdev/nbd_common.sh@66 -- # echo 1
00:13:18.887   16:57:11	-- bdev/bdev_raid.sh@98 -- # count=1
00:13:18.887   16:57:11	-- bdev/bdev_raid.sh@99 -- # '[' 1 -ne 1 ']'
00:13:18.887   16:57:11	-- bdev/bdev_raid.sh@103 -- # raid_unmap_data_verify /dev/nbd0 /var/tmp/spdk-raid.sock
00:13:18.887   16:57:11	-- bdev/bdev_raid.sh@17 -- # hash blkdiscard
00:13:18.887   16:57:11	-- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0
00:13:18.887   16:57:11	-- bdev/bdev_raid.sh@19 -- # local rpc_server=/var/tmp/spdk-raid.sock
00:13:18.887   16:57:11	-- bdev/bdev_raid.sh@20 -- # local blksize
00:13:18.887    16:57:11	-- bdev/bdev_raid.sh@21 -- # grep -v LOG-SEC
00:13:18.887    16:57:11	-- bdev/bdev_raid.sh@21 -- # lsblk -o LOG-SEC /dev/nbd0
00:13:18.887    16:57:11	-- bdev/bdev_raid.sh@21 -- # cut -d ' ' -f 5
00:13:18.887   16:57:11	-- bdev/bdev_raid.sh@21 -- # blksize=512
00:13:18.887   16:57:11	-- bdev/bdev_raid.sh@22 -- # local rw_blk_num=4096
00:13:18.887   16:57:11	-- bdev/bdev_raid.sh@23 -- # local rw_len=2097152
00:13:18.887   16:57:11	-- bdev/bdev_raid.sh@24 -- # unmap_blk_offs=('0' '1028' '321')
00:13:18.887   16:57:11	-- bdev/bdev_raid.sh@24 -- # local unmap_blk_offs
00:13:18.887   16:57:11	-- bdev/bdev_raid.sh@25 -- # unmap_blk_nums=('128' '2035' '456')
00:13:18.887   16:57:11	-- bdev/bdev_raid.sh@25 -- # local unmap_blk_nums
00:13:18.887   16:57:11	-- bdev/bdev_raid.sh@26 -- # local unmap_off
00:13:19.145   16:57:11	-- bdev/bdev_raid.sh@27 -- # local unmap_len
00:13:19.145   16:57:11	-- bdev/bdev_raid.sh@30 -- # dd if=/dev/urandom of=/raidrandtest bs=512 count=4096
00:13:19.145  4096+0 records in
00:13:19.145  4096+0 records out
00:13:19.145  2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0311822 s, 67.3 MB/s
00:13:19.145   16:57:11	-- bdev/bdev_raid.sh@31 -- # dd if=/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct
00:13:19.145  4096+0 records in
00:13:19.145  4096+0 records out
00:13:19.145  2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.195479 s, 10.7 MB/s
00:13:19.145   16:57:11	-- bdev/bdev_raid.sh@32 -- # blockdev --flushbufs /dev/nbd0
00:13:19.145   16:57:11	-- bdev/bdev_raid.sh@35 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0
00:13:19.403   16:57:11	-- bdev/bdev_raid.sh@37 -- # (( i = 0 ))
00:13:19.403   16:57:12	-- bdev/bdev_raid.sh@37 -- # (( i < 3 ))
00:13:19.403   16:57:12	-- bdev/bdev_raid.sh@38 -- # unmap_off=0
00:13:19.403   16:57:12	-- bdev/bdev_raid.sh@39 -- # unmap_len=65536
00:13:19.403   16:57:12	-- bdev/bdev_raid.sh@42 -- # dd if=/dev/zero of=/raidrandtest bs=512 seek=0 count=128 conv=notrunc
00:13:19.403  128+0 records in
00:13:19.403  128+0 records out
00:13:19.403  65536 bytes (66 kB, 64 KiB) copied, 0.000746871 s, 87.7 MB/s
00:13:19.403   16:57:12	-- bdev/bdev_raid.sh@45 -- # blkdiscard -o 0 -l 65536 /dev/nbd0
00:13:19.403   16:57:12	-- bdev/bdev_raid.sh@46 -- # blockdev --flushbufs /dev/nbd0
00:13:19.403   16:57:12	-- bdev/bdev_raid.sh@49 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0
00:13:19.403   16:57:12	-- bdev/bdev_raid.sh@37 -- # (( i++ ))
00:13:19.403   16:57:12	-- bdev/bdev_raid.sh@37 -- # (( i < 3 ))
00:13:19.403   16:57:12	-- bdev/bdev_raid.sh@38 -- # unmap_off=526336
00:13:19.403   16:57:12	-- bdev/bdev_raid.sh@39 -- # unmap_len=1041920
00:13:19.403   16:57:12	-- bdev/bdev_raid.sh@42 -- # dd if=/dev/zero of=/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc
00:13:19.403  2035+0 records in
00:13:19.403  2035+0 records out
00:13:19.403  1041920 bytes (1.0 MB, 1018 KiB) copied, 0.00982519 s, 106 MB/s
00:13:19.403   16:57:12	-- bdev/bdev_raid.sh@45 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0
00:13:19.403   16:57:12	-- bdev/bdev_raid.sh@46 -- # blockdev --flushbufs /dev/nbd0
00:13:19.403   16:57:12	-- bdev/bdev_raid.sh@49 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0
00:13:19.403   16:57:12	-- bdev/bdev_raid.sh@37 -- # (( i++ ))
00:13:19.403   16:57:12	-- bdev/bdev_raid.sh@37 -- # (( i < 3 ))
00:13:19.403   16:57:12	-- bdev/bdev_raid.sh@38 -- # unmap_off=164352
00:13:19.403   16:57:12	-- bdev/bdev_raid.sh@39 -- # unmap_len=233472
00:13:19.403   16:57:12	-- bdev/bdev_raid.sh@42 -- # dd if=/dev/zero of=/raidrandtest bs=512 seek=321 count=456 conv=notrunc
00:13:19.403  456+0 records in
00:13:19.403  456+0 records out
00:13:19.403  233472 bytes (233 kB, 228 KiB) copied, 0.00211589 s, 110 MB/s
00:13:19.403   16:57:12	-- bdev/bdev_raid.sh@45 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0
00:13:19.403   16:57:12	-- bdev/bdev_raid.sh@46 -- # blockdev --flushbufs /dev/nbd0
00:13:19.403   16:57:12	-- bdev/bdev_raid.sh@49 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0
00:13:19.403   16:57:12	-- bdev/bdev_raid.sh@37 -- # (( i++ ))
00:13:19.403   16:57:12	-- bdev/bdev_raid.sh@37 -- # (( i < 3 ))
00:13:19.403   16:57:12	-- bdev/bdev_raid.sh@53 -- # return 0
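[editor note] For reference, the raid_unmap_data_verify sequence traced above reduces to the following bash sketch, reconstructed from the xtrace. It assumes a disposable NBD device at /dev/nbd0 and the scratch file /raidrandtest, exactly as in this run:

#!/usr/bin/env bash
set -e
nbd=/dev/nbd0          # NBD node exported from the raid bdev (from the trace)
ref=/raidrandtest      # reference copy of the data (from the trace)
blksize=512
rw_blk_num=4096
unmap_blk_offs=(0 1028 321)
unmap_blk_nums=(128 2035 456)

# Seed the device and the reference file with the same 2 MiB of random data.
dd if=/dev/urandom of="$ref" bs="$blksize" count="$rw_blk_num"
dd if="$ref" of="$nbd" bs="$blksize" count="$rw_blk_num" oflag=direct
blockdev --flushbufs "$nbd"
cmp -b -n $((blksize * rw_blk_num)) "$ref" "$nbd"

# For each region: zero it in the reference, discard it on the device, then
# require a full-length match, i.e. discarded blocks must read back as zero.
for ((i = 0; i < ${#unmap_blk_offs[@]}; i++)); do
    dd if=/dev/zero of="$ref" bs="$blksize" seek="${unmap_blk_offs[i]}" \
       count="${unmap_blk_nums[i]}" conv=notrunc
    blkdiscard -o $((unmap_blk_offs[i] * blksize)) \
               -l $((unmap_blk_nums[i] * blksize)) "$nbd"
    blockdev --flushbufs "$nbd"
    cmp -b -n $((blksize * rw_blk_num)) "$ref" "$nbd"
done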
00:13:19.403   16:57:12	-- bdev/bdev_raid.sh@105 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0
00:13:19.403   16:57:12	-- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock
00:13:19.403   16:57:12	-- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:13:19.403   16:57:12	-- bdev/nbd_common.sh@50 -- # local nbd_list
00:13:19.403   16:57:12	-- bdev/nbd_common.sh@51 -- # local i
00:13:19.403   16:57:12	-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:13:19.403   16:57:12	-- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0
00:13:19.659    16:57:12	-- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:13:19.659   16:57:12	-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:13:19.659   16:57:12	-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:13:19.659   16:57:12	-- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:13:19.659  [2024-11-19 16:57:12.276104] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:13:19.659   16:57:12	-- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:13:19.659   16:57:12	-- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:13:19.659   16:57:12	-- bdev/nbd_common.sh@41 -- # break
00:13:19.659   16:57:12	-- bdev/nbd_common.sh@45 -- # return 0
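[editor note] The waitfornbd_exit helper traced here is, in outline, a bounded poll on /proc/partitions. A hedged sketch follows; the sleep interval is an assumption, since in this run the device vanished on the first check:

waitfornbd_exit() {
    local nbd_name=$1 i
    for ((i = 1; i <= 20; i++)); do
        if grep -q -w "$nbd_name" /proc/partitions; then
            sleep 0.1              # back-off between polls (assumed interval)
        else
            break                  # device is gone, as on the first try above
        fi
    done
    return 0
}
waitfornbd_exit "$(basename /dev/nbd0)"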
00:13:19.659    16:57:12	-- bdev/bdev_raid.sh@106 -- # nbd_get_count /var/tmp/spdk-raid.sock
00:13:19.659    16:57:12	-- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock
00:13:19.659     16:57:12	-- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks
00:13:19.918    16:57:12	-- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:13:19.918     16:57:12	-- bdev/nbd_common.sh@64 -- # echo '[]'
00:13:19.918     16:57:12	-- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:13:19.918    16:57:12	-- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:13:19.918     16:57:12	-- bdev/nbd_common.sh@65 -- # echo ''
00:13:19.918     16:57:12	-- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:13:19.918     16:57:12	-- bdev/nbd_common.sh@65 -- # true
00:13:19.918    16:57:12	-- bdev/nbd_common.sh@65 -- # count=0
00:13:19.918    16:57:12	-- bdev/nbd_common.sh@66 -- # echo 0
00:13:19.918   16:57:12	-- bdev/bdev_raid.sh@106 -- # count=0
00:13:19.918   16:57:12	-- bdev/bdev_raid.sh@107 -- # '[' 0 -ne 0 ']'
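[editor note] The nbd_get_count check just executed can be read as this sketch: list the app's NBD exports over RPC and count device nodes, guarding grep -c's non-zero exit on an empty list (the explicit 'true' in the trace serves the same purpose):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-raid.sock
nbd_disks_json=$("$rpc" -s "$sock" nbd_get_disks)
nbd_disks_name=$(echo "$nbd_disks_json" | jq -r '.[] | .nbd_device')
count=$(echo "$nbd_disks_name" | grep -c /dev/nbd || true)
[ "$count" -eq 0 ] || { echo "still-exported nbd devices: $count"; exit 1; }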
00:13:19.918   16:57:12	-- bdev/bdev_raid.sh@111 -- # killprocess 122350
00:13:19.918   16:57:12	-- common/autotest_common.sh@936 -- # '[' -z 122350 ']'
00:13:19.918   16:57:12	-- common/autotest_common.sh@940 -- # kill -0 122350
00:13:19.918    16:57:12	-- common/autotest_common.sh@941 -- # uname
00:13:19.918   16:57:12	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:13:19.918    16:57:12	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 122350
00:13:19.918   16:57:12	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:13:19.918   16:57:12	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:13:19.918   16:57:12	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 122350'
00:13:19.918  killing process with pid 122350
00:13:19.918   16:57:12	-- common/autotest_common.sh@955 -- # kill 122350
00:13:19.918  [2024-11-19 16:57:12.633432] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:13:19.918   16:57:12	-- common/autotest_common.sh@960 -- # wait 122350
00:13:19.918  [2024-11-19 16:57:12.633552] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:13:19.918  [2024-11-19 16:57:12.633610] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:13:19.918  [2024-11-19 16:57:12.633620] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006080 name raid, state offline
00:13:19.918  [2024-11-19 16:57:12.657143] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit
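[editor note] A hedged sketch of the killprocess teardown visible above. The sudo comparison is made but never taken in this run, so that branch's body is not reconstructed, and the signal choice is an assumption:

killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1
    kill -0 "$pid" || return 1                  # pid must still be alive
    if [ "$(uname)" = Linux ]; then
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")
        # the trace only tests whether this is 'sudo'; reactor_0 is not, so
        # the sudo-handling body is not visible in this log and is omitted
    fi
    echo "killing process with pid $pid"
    kill "$pid"     # plain kill as in the trace; default SIGTERM assumed
    wait "$pid"     # matches the final 'wait' step above
}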
00:13:20.177   16:57:12	-- bdev/bdev_raid.sh@113 -- # return 0
00:13:20.177  
00:13:20.177  real	0m3.219s
00:13:20.177  user	0m4.304s
00:13:20.177  sys	0m0.963s
00:13:20.177   16:57:12	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:13:20.177   16:57:12	-- common/autotest_common.sh@10 -- # set +x
00:13:20.177  ************************************
00:13:20.177  END TEST raid_function_test_raid0
00:13:20.177  ************************************
00:13:20.177   16:57:12	-- bdev/bdev_raid.sh@720 -- # run_test raid_function_test_concat raid_function_test concat
00:13:20.177   16:57:12	-- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']'
00:13:20.177   16:57:12	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:13:20.177   16:57:12	-- common/autotest_common.sh@10 -- # set +x
00:13:20.177  ************************************
00:13:20.177  START TEST raid_function_test_concat
00:13:20.177  ************************************
00:13:20.177   16:57:12	-- common/autotest_common.sh@1114 -- # raid_function_test concat
00:13:20.177   16:57:12	-- bdev/bdev_raid.sh@81 -- # local raid_level=concat
00:13:20.177   16:57:12	-- bdev/bdev_raid.sh@82 -- # local nbd=/dev/nbd0
00:13:20.177   16:57:12	-- bdev/bdev_raid.sh@83 -- # local raid_bdev
00:13:20.177   16:57:12	-- bdev/bdev_raid.sh@86 -- # raid_pid=122496
00:13:20.177   16:57:12	-- bdev/bdev_raid.sh@87 -- # echo 'Process raid pid: 122496'
00:13:20.177   16:57:12	-- bdev/bdev_raid.sh@85 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid
00:13:20.177  Process raid pid: 122496
00:13:20.177   16:57:12	-- bdev/bdev_raid.sh@88 -- # waitforlisten 122496 /var/tmp/spdk-raid.sock
00:13:20.177   16:57:12	-- common/autotest_common.sh@829 -- # '[' -z 122496 ']'
00:13:20.177   16:57:12	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock
00:13:20.177   16:57:12	-- common/autotest_common.sh@834 -- # local max_retries=100
00:13:20.177  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...
00:13:20.177   16:57:12	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...'
00:13:20.177   16:57:12	-- common/autotest_common.sh@838 -- # xtrace_disable
00:13:20.177   16:57:12	-- common/autotest_common.sh@10 -- # set +x
00:13:20.177  [2024-11-19 16:57:13.033918] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:13:20.177  [2024-11-19 16:57:13.034098] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:13:20.436  [2024-11-19 16:57:13.173753] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:20.436  [2024-11-19 16:57:13.215644] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:13:20.436  [2024-11-19 16:57:13.257326] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:13:21.085   16:57:13	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:13:21.085   16:57:13	-- common/autotest_common.sh@862 -- # return 0
00:13:21.085   16:57:13	-- bdev/bdev_raid.sh@90 -- # configure_raid_bdev concat
00:13:21.085   16:57:13	-- bdev/bdev_raid.sh@67 -- # local raid_level=concat
00:13:21.085   16:57:13	-- bdev/bdev_raid.sh@68 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/bdev/rpcs.txt
00:13:21.085   16:57:13	-- bdev/bdev_raid.sh@70 -- # cat
00:13:21.085   16:57:13	-- bdev/bdev_raid.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock
00:13:21.348  [2024-11-19 16:57:14.205093] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed
00:13:21.607  [2024-11-19 16:57:14.207935] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed
00:13:21.607  [2024-11-19 16:57:14.208023] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006080
00:13:21.607  [2024-11-19 16:57:14.208037] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512
00:13:21.607  [2024-11-19 16:57:14.208240] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000001f80
00:13:21.607  [2024-11-19 16:57:14.208746] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006080
00:13:21.607  [2024-11-19 16:57:14.208771] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x616000006080
00:13:21.607  [2024-11-19 16:57:14.209019] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:13:21.607  Base_1
00:13:21.607  Base_2
00:13:21.607   16:57:14	-- bdev/bdev_raid.sh@77 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/bdev/rpcs.txt
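[editor note] configure_raid_bdev above batches its RPCs through rpcs.txt and pipes them into rpc.py in one session. The exact commands the 'cat' emits are not shown in this log; the block below is a plausible reconstruction for this two-leg concat array (malloc sizes inferred from the reported blockcnt 131072 at blocklen 512; the -z strip size is assumed from the raid0 variant of this test):

rpcs=/home/vagrant/spdk_repo/spdk/test/bdev/rpcs.txt   # path used by the trace
cat > "$rpcs" <<'EOF'
bdev_malloc_create 32 512 -b Base_1
bdev_malloc_create 32 512 -b Base_2
bdev_raid_create -z 64 -r concat -b "Base_1 Base_2" -n raid
EOF
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock < "$rpcs"
rm -rf "$rpcs"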
00:13:21.607    16:57:14	-- bdev/bdev_raid.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online
00:13:21.607    16:57:14	-- bdev/bdev_raid.sh@91 -- # jq -r '.[0]["name"] | select(.)'
00:13:21.607   16:57:14	-- bdev/bdev_raid.sh@91 -- # raid_bdev=raid
00:13:21.607   16:57:14	-- bdev/bdev_raid.sh@92 -- # '[' raid = '' ']'
00:13:21.607   16:57:14	-- bdev/bdev_raid.sh@97 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid /dev/nbd0
00:13:21.607   16:57:14	-- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock
00:13:21.607   16:57:14	-- bdev/nbd_common.sh@10 -- # bdev_list=('raid')
00:13:21.607   16:57:14	-- bdev/nbd_common.sh@10 -- # local bdev_list
00:13:21.607   16:57:14	-- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0')
00:13:21.607   16:57:14	-- bdev/nbd_common.sh@11 -- # local nbd_list
00:13:21.607   16:57:14	-- bdev/nbd_common.sh@12 -- # local i
00:13:21.607   16:57:14	-- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:13:21.607   16:57:14	-- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:13:21.607   16:57:14	-- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid /dev/nbd0
00:13:21.866  [2024-11-19 16:57:14.661257] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002120
00:13:21.866  /dev/nbd0
00:13:21.866    16:57:14	-- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:13:21.866   16:57:14	-- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:13:21.866   16:57:14	-- common/autotest_common.sh@866 -- # local nbd_name=nbd0
00:13:21.866   16:57:14	-- common/autotest_common.sh@867 -- # local i
00:13:21.866   16:57:14	-- common/autotest_common.sh@869 -- # (( i = 1 ))
00:13:21.866   16:57:14	-- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:13:21.866   16:57:14	-- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions
00:13:21.866   16:57:14	-- common/autotest_common.sh@871 -- # break
00:13:21.866   16:57:14	-- common/autotest_common.sh@882 -- # (( i = 1 ))
00:13:21.866   16:57:14	-- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:13:21.866   16:57:14	-- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:13:21.866  1+0 records in
00:13:21.866  1+0 records out
00:13:21.866  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000338588 s, 12.1 MB/s
00:13:21.866    16:57:14	-- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:13:21.866   16:57:14	-- common/autotest_common.sh@884 -- # size=4096
00:13:21.866   16:57:14	-- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:13:21.866   16:57:14	-- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:13:21.866   16:57:14	-- common/autotest_common.sh@887 -- # return 0
00:13:21.866   16:57:14	-- bdev/nbd_common.sh@14 -- # (( i++ ))
00:13:21.866   16:57:14	-- bdev/nbd_common.sh@14 -- # (( i < 1 ))
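[editor note] The waitfornbd readiness probe traced above polls for the device name and then issues a single O_DIRECT 4 KiB read whose byte count must be non-zero. In sketch form (the poll interval and the probe-file path here are assumptions; the trace keeps its probe under test/bdev/nbdtest):

waitfornbd() {
    local nbd_name=$1 i size
    local probe=/tmp/nbdtest
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1
    done
    dd if="/dev/$nbd_name" of="$probe" bs=4096 count=1 iflag=direct
    size=$(stat -c %s "$probe")
    rm -f "$probe"
    [ "$size" != 0 ]    # the trace requires '4096 != 0' before returning 0
}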
00:13:21.866    16:57:14	-- bdev/bdev_raid.sh@98 -- # nbd_get_count /var/tmp/spdk-raid.sock
00:13:21.866    16:57:14	-- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock
00:13:21.866     16:57:14	-- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks
00:13:22.125    16:57:14	-- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:13:22.125    {
00:13:22.125      "nbd_device": "/dev/nbd0",
00:13:22.125      "bdev_name": "raid"
00:13:22.125    }
00:13:22.125  ]'
00:13:22.125     16:57:14	-- bdev/nbd_common.sh@64 -- # echo '[
00:13:22.125    {
00:13:22.125      "nbd_device": "/dev/nbd0",
00:13:22.125      "bdev_name": "raid"
00:13:22.125    }
00:13:22.125  ]'
00:13:22.384     16:57:14	-- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:13:22.384    16:57:15	-- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0
00:13:22.384     16:57:15	-- bdev/nbd_common.sh@65 -- # echo /dev/nbd0
00:13:22.384     16:57:15	-- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:13:22.384    16:57:15	-- bdev/nbd_common.sh@65 -- # count=1
00:13:22.384    16:57:15	-- bdev/nbd_common.sh@66 -- # echo 1
00:13:22.384   16:57:15	-- bdev/bdev_raid.sh@98 -- # count=1
00:13:22.384   16:57:15	-- bdev/bdev_raid.sh@99 -- # '[' 1 -ne 1 ']'
00:13:22.384   16:57:15	-- bdev/bdev_raid.sh@103 -- # raid_unmap_data_verify /dev/nbd0 /var/tmp/spdk-raid.sock
00:13:22.384   16:57:15	-- bdev/bdev_raid.sh@17 -- # hash blkdiscard
00:13:22.384   16:57:15	-- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0
00:13:22.384   16:57:15	-- bdev/bdev_raid.sh@19 -- # local rpc_server=/var/tmp/spdk-raid.sock
00:13:22.384   16:57:15	-- bdev/bdev_raid.sh@20 -- # local blksize
00:13:22.384    16:57:15	-- bdev/bdev_raid.sh@21 -- # lsblk -o LOG-SEC /dev/nbd0
00:13:22.384    16:57:15	-- bdev/bdev_raid.sh@21 -- # cut -d ' ' -f 5
00:13:22.384    16:57:15	-- bdev/bdev_raid.sh@21 -- # grep -v LOG-SEC
00:13:22.384   16:57:15	-- bdev/bdev_raid.sh@21 -- # blksize=512
00:13:22.384   16:57:15	-- bdev/bdev_raid.sh@22 -- # local rw_blk_num=4096
00:13:22.384   16:57:15	-- bdev/bdev_raid.sh@23 -- # local rw_len=2097152
00:13:22.384   16:57:15	-- bdev/bdev_raid.sh@24 -- # unmap_blk_offs=('0' '1028' '321')
00:13:22.384   16:57:15	-- bdev/bdev_raid.sh@24 -- # local unmap_blk_offs
00:13:22.384   16:57:15	-- bdev/bdev_raid.sh@25 -- # unmap_blk_nums=('128' '2035' '456')
00:13:22.384   16:57:15	-- bdev/bdev_raid.sh@25 -- # local unmap_blk_nums
00:13:22.384   16:57:15	-- bdev/bdev_raid.sh@26 -- # local unmap_off
00:13:22.384   16:57:15	-- bdev/bdev_raid.sh@27 -- # local unmap_len
00:13:22.384   16:57:15	-- bdev/bdev_raid.sh@30 -- # dd if=/dev/urandom of=/raidrandtest bs=512 count=4096
00:13:22.384  4096+0 records in
00:13:22.384  4096+0 records out
00:13:22.384  2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.033829 s, 62.0 MB/s
00:13:22.384   16:57:15	-- bdev/bdev_raid.sh@31 -- # dd if=/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct
00:13:22.643  4096+0 records in
00:13:22.643  4096+0 records out
00:13:22.643  2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.216286 s, 9.7 MB/s
00:13:22.643   16:57:15	-- bdev/bdev_raid.sh@32 -- # blockdev --flushbufs /dev/nbd0
00:13:22.643   16:57:15	-- bdev/bdev_raid.sh@35 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0
00:13:22.643   16:57:15	-- bdev/bdev_raid.sh@37 -- # (( i = 0 ))
00:13:22.643   16:57:15	-- bdev/bdev_raid.sh@37 -- # (( i < 3 ))
00:13:22.643   16:57:15	-- bdev/bdev_raid.sh@38 -- # unmap_off=0
00:13:22.643   16:57:15	-- bdev/bdev_raid.sh@39 -- # unmap_len=65536
00:13:22.643   16:57:15	-- bdev/bdev_raid.sh@42 -- # dd if=/dev/zero of=/raidrandtest bs=512 seek=0 count=128 conv=notrunc
00:13:22.643  128+0 records in
00:13:22.643  128+0 records out
00:13:22.643  65536 bytes (66 kB, 64 KiB) copied, 0.00114243 s, 57.4 MB/s
00:13:22.643   16:57:15	-- bdev/bdev_raid.sh@45 -- # blkdiscard -o 0 -l 65536 /dev/nbd0
00:13:22.643   16:57:15	-- bdev/bdev_raid.sh@46 -- # blockdev --flushbufs /dev/nbd0
00:13:22.643   16:57:15	-- bdev/bdev_raid.sh@49 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0
00:13:22.643   16:57:15	-- bdev/bdev_raid.sh@37 -- # (( i++ ))
00:13:22.643   16:57:15	-- bdev/bdev_raid.sh@37 -- # (( i < 3 ))
00:13:22.643   16:57:15	-- bdev/bdev_raid.sh@38 -- # unmap_off=526336
00:13:22.643   16:57:15	-- bdev/bdev_raid.sh@39 -- # unmap_len=1041920
00:13:22.643   16:57:15	-- bdev/bdev_raid.sh@42 -- # dd if=/dev/zero of=/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc
00:13:22.643  2035+0 records in
00:13:22.643  2035+0 records out
00:13:22.643  1041920 bytes (1.0 MB, 1018 KiB) copied, 0.00938393 s, 111 MB/s
00:13:22.643   16:57:15	-- bdev/bdev_raid.sh@45 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0
00:13:22.643   16:57:15	-- bdev/bdev_raid.sh@46 -- # blockdev --flushbufs /dev/nbd0
00:13:22.643   16:57:15	-- bdev/bdev_raid.sh@49 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0
00:13:22.643   16:57:15	-- bdev/bdev_raid.sh@37 -- # (( i++ ))
00:13:22.643   16:57:15	-- bdev/bdev_raid.sh@37 -- # (( i < 3 ))
00:13:22.643   16:57:15	-- bdev/bdev_raid.sh@38 -- # unmap_off=164352
00:13:22.643   16:57:15	-- bdev/bdev_raid.sh@39 -- # unmap_len=233472
00:13:22.643   16:57:15	-- bdev/bdev_raid.sh@42 -- # dd if=/dev/zero of=/raidrandtest bs=512 seek=321 count=456 conv=notrunc
00:13:22.643  456+0 records in
00:13:22.643  456+0 records out
00:13:22.643  233472 bytes (233 kB, 228 KiB) copied, 0.00290084 s, 80.5 MB/s
00:13:22.643   16:57:15	-- bdev/bdev_raid.sh@45 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0
00:13:22.643   16:57:15	-- bdev/bdev_raid.sh@46 -- # blockdev --flushbufs /dev/nbd0
00:13:22.643   16:57:15	-- bdev/bdev_raid.sh@49 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0
00:13:22.643   16:57:15	-- bdev/bdev_raid.sh@37 -- # (( i++ ))
00:13:22.643   16:57:15	-- bdev/bdev_raid.sh@37 -- # (( i < 3 ))
00:13:22.643   16:57:15	-- bdev/bdev_raid.sh@53 -- # return 0
00:13:22.643   16:57:15	-- bdev/bdev_raid.sh@105 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0
00:13:22.643   16:57:15	-- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock
00:13:22.643   16:57:15	-- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:13:22.643   16:57:15	-- bdev/nbd_common.sh@50 -- # local nbd_list
00:13:22.643   16:57:15	-- bdev/nbd_common.sh@51 -- # local i
00:13:22.643   16:57:15	-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:13:22.643   16:57:15	-- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0
00:13:22.903    16:57:15	-- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:13:22.903  [2024-11-19 16:57:15.652836] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:13:22.903   16:57:15	-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:13:22.903   16:57:15	-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:13:22.903   16:57:15	-- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:13:22.903   16:57:15	-- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:13:22.903   16:57:15	-- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:13:22.903   16:57:15	-- bdev/nbd_common.sh@41 -- # break
00:13:22.903   16:57:15	-- bdev/nbd_common.sh@45 -- # return 0
00:13:22.903    16:57:15	-- bdev/bdev_raid.sh@106 -- # nbd_get_count /var/tmp/spdk-raid.sock
00:13:22.903    16:57:15	-- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock
00:13:22.903     16:57:15	-- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks
00:13:23.162    16:57:15	-- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:13:23.162     16:57:15	-- bdev/nbd_common.sh@64 -- # echo '[]'
00:13:23.162     16:57:15	-- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:13:23.162    16:57:15	-- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:13:23.162     16:57:15	-- bdev/nbd_common.sh@65 -- # echo ''
00:13:23.162     16:57:15	-- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:13:23.162     16:57:15	-- bdev/nbd_common.sh@65 -- # true
00:13:23.162    16:57:15	-- bdev/nbd_common.sh@65 -- # count=0
00:13:23.162    16:57:15	-- bdev/nbd_common.sh@66 -- # echo 0
00:13:23.162   16:57:15	-- bdev/bdev_raid.sh@106 -- # count=0
00:13:23.162   16:57:15	-- bdev/bdev_raid.sh@107 -- # '[' 0 -ne 0 ']'
00:13:23.162   16:57:15	-- bdev/bdev_raid.sh@111 -- # killprocess 122496
00:13:23.162   16:57:15	-- common/autotest_common.sh@936 -- # '[' -z 122496 ']'
00:13:23.162   16:57:15	-- common/autotest_common.sh@940 -- # kill -0 122496
00:13:23.162    16:57:15	-- common/autotest_common.sh@941 -- # uname
00:13:23.162   16:57:15	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:13:23.162    16:57:15	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 122496
00:13:23.162   16:57:15	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:13:23.162   16:57:15	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:13:23.162   16:57:15	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 122496'
00:13:23.162  killing process with pid 122496
00:13:23.162   16:57:15	-- common/autotest_common.sh@955 -- # kill 122496
00:13:23.162  [2024-11-19 16:57:15.959459] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:13:23.162   16:57:15	-- common/autotest_common.sh@960 -- # wait 122496
00:13:23.162  [2024-11-19 16:57:15.959576] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:13:23.163  [2024-11-19 16:57:15.959635] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:13:23.163  [2024-11-19 16:57:15.959645] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006080 name raid, state offline
00:13:23.163  [2024-11-19 16:57:15.983212] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:13:23.420   16:57:16	-- bdev/bdev_raid.sh@113 -- # return 0
00:13:23.420  
00:13:23.420  real	0m3.262s
00:13:23.420  user	0m4.321s
00:13:23.420  sys	0m1.033s
00:13:23.420   16:57:16	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:13:23.420   16:57:16	-- common/autotest_common.sh@10 -- # set +x
00:13:23.420  ************************************
00:13:23.420  END TEST raid_function_test_concat
00:13:23.420  ************************************
00:13:23.679   16:57:16	-- bdev/bdev_raid.sh@723 -- # run_test raid0_resize_test raid0_resize_test
00:13:23.679   16:57:16	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:13:23.679   16:57:16	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:13:23.679   16:57:16	-- common/autotest_common.sh@10 -- # set +x
00:13:23.679  ************************************
00:13:23.679  START TEST raid0_resize_test
00:13:23.679  ************************************
00:13:23.679   16:57:16	-- common/autotest_common.sh@1114 -- # raid0_resize_test
00:13:23.679   16:57:16	-- bdev/bdev_raid.sh@293 -- # local blksize=512
00:13:23.679   16:57:16	-- bdev/bdev_raid.sh@294 -- # local bdev_size_mb=32
00:13:23.679   16:57:16	-- bdev/bdev_raid.sh@295 -- # local new_bdev_size_mb=64
00:13:23.679   16:57:16	-- bdev/bdev_raid.sh@296 -- # local blkcnt
00:13:23.679   16:57:16	-- bdev/bdev_raid.sh@297 -- # local raid_size_mb
00:13:23.679   16:57:16	-- bdev/bdev_raid.sh@298 -- # local new_raid_size_mb
00:13:23.679   16:57:16	-- bdev/bdev_raid.sh@300 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid
00:13:23.679   16:57:16	-- bdev/bdev_raid.sh@301 -- # raid_pid=122639
00:13:23.679  Process raid pid: 122639
00:13:23.679   16:57:16	-- bdev/bdev_raid.sh@302 -- # echo 'Process raid pid: 122639'
00:13:23.679   16:57:16	-- bdev/bdev_raid.sh@303 -- # waitforlisten 122639 /var/tmp/spdk-raid.sock
00:13:23.679   16:57:16	-- common/autotest_common.sh@829 -- # '[' -z 122639 ']'
00:13:23.679   16:57:16	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock
00:13:23.679   16:57:16	-- common/autotest_common.sh@834 -- # local max_retries=100
00:13:23.679  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...
00:13:23.679   16:57:16	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...'
00:13:23.679   16:57:16	-- common/autotest_common.sh@838 -- # xtrace_disable
00:13:23.679   16:57:16	-- common/autotest_common.sh@10 -- # set +x
00:13:23.679  [2024-11-19 16:57:16.366212] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:13:23.679  [2024-11-19 16:57:16.367337] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:13:23.679  [2024-11-19 16:57:16.525020] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:23.937  [2024-11-19 16:57:16.567905] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:13:23.937  [2024-11-19 16:57:16.609728] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:13:24.503   16:57:17	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:13:24.503   16:57:17	-- common/autotest_common.sh@862 -- # return 0
00:13:24.503   16:57:17	-- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_create Base_1 32 512
00:13:24.762  Base_1
00:13:24.762   16:57:17	-- bdev/bdev_raid.sh@306 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_create Base_2 32 512
00:13:25.020  Base_2
00:13:25.020   16:57:17	-- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r 0 -b 'Base_1 Base_2' -n Raid
00:13:25.020  [2024-11-19 16:57:17.811135] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed
00:13:25.020  [2024-11-19 16:57:17.813268] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed
00:13:25.020  [2024-11-19 16:57:17.813323] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006080
00:13:25.020  [2024-11-19 16:57:17.813332] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512
00:13:25.020  [2024-11-19 16:57:17.813486] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000001de0
00:13:25.020  [2024-11-19 16:57:17.813827] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006080
00:13:25.020  [2024-11-19 16:57:17.813845] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x616000006080
00:13:25.020  [2024-11-19 16:57:17.813991] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:13:25.020   16:57:17	-- bdev/bdev_raid.sh@311 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_resize Base_1 64
00:13:25.284  [2024-11-19 16:57:17.979140] bdev_raid.c:2069:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev
00:13:25.284  [2024-11-19 16:57:17.979175] bdev_raid.c:2082:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072
00:13:25.284  true
00:13:25.284    16:57:17	-- bdev/bdev_raid.sh@314 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid
00:13:25.284    16:57:17	-- bdev/bdev_raid.sh@314 -- # jq '.[].num_blocks'
00:13:25.563  [2024-11-19 16:57:18.159319] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:13:25.563   16:57:18	-- bdev/bdev_raid.sh@314 -- # blkcnt=131072
00:13:25.563   16:57:18	-- bdev/bdev_raid.sh@315 -- # raid_size_mb=64
00:13:25.563   16:57:18	-- bdev/bdev_raid.sh@316 -- # '[' 64 '!=' 64 ']'
00:13:25.563   16:57:18	-- bdev/bdev_raid.sh@322 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_resize Base_2 64
00:13:25.827  [2024-11-19 16:57:18.423215] bdev_raid.c:2069:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev
00:13:25.827  [2024-11-19 16:57:18.423251] bdev_raid.c:2082:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072
00:13:25.827  [2024-11-19 16:57:18.423294] raid0.c: 402:raid0_resize: *NOTICE*: raid0 'Raid': min blockcount was changed from 262144 to 262144
00:13:25.827  [2024-11-19 16:57:18.423356] bdev_raid.c:1572:raid_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1
00:13:25.827  true
00:13:25.827    16:57:18	-- bdev/bdev_raid.sh@325 -- # jq '.[].num_blocks'
00:13:25.827    16:57:18	-- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid
00:13:25.827  [2024-11-19 16:57:18.607352] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:13:25.828   16:57:18	-- bdev/bdev_raid.sh@325 -- # blkcnt=262144
00:13:25.828   16:57:18	-- bdev/bdev_raid.sh@326 -- # raid_size_mb=128
00:13:25.828   16:57:18	-- bdev/bdev_raid.sh@327 -- # '[' 128 '!=' 128 ']'
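[editor note] The whole resize exercise just traced condenses to the following; commands, sizes, and results are taken directly from the log. raid0 capacity is bounded by its smallest leg, so the array only grows once both legs have grown:

rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }

rpc bdev_null_create Base_1 32 512        # 32 MiB, 512 B blocks
rpc bdev_null_create Base_2 32 512
rpc bdev_raid_create -z 64 -r 0 -b 'Base_1 Base_2' -n Raid

rpc bdev_null_resize Base_1 64            # grow one leg only
rpc bdev_get_bdevs -b Raid | jq '.[].num_blocks'   # -> 131072 (still 64 MiB)

rpc bdev_null_resize Base_2 64            # grow the second leg
rpc bdev_get_bdevs -b Raid | jq '.[].num_blocks'   # -> 262144 (now 128 MiB)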
00:13:25.828   16:57:18	-- bdev/bdev_raid.sh@332 -- # killprocess 122639
00:13:25.828   16:57:18	-- common/autotest_common.sh@936 -- # '[' -z 122639 ']'
00:13:25.828   16:57:18	-- common/autotest_common.sh@940 -- # kill -0 122639
00:13:25.828    16:57:18	-- common/autotest_common.sh@941 -- # uname
00:13:25.828   16:57:18	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:13:25.828    16:57:18	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 122639
00:13:25.828   16:57:18	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:13:25.828   16:57:18	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:13:25.828   16:57:18	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 122639'
00:13:25.828  killing process with pid 122639
00:13:25.828   16:57:18	-- common/autotest_common.sh@955 -- # kill 122639
00:13:25.828  [2024-11-19 16:57:18.654372] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:13:25.828   16:57:18	-- common/autotest_common.sh@960 -- # wait 122639
00:13:25.828  [2024-11-19 16:57:18.654484] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:13:25.828  [2024-11-19 16:57:18.654547] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:13:25.828  [2024-11-19 16:57:18.654557] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006080 name Raid, state offline
00:13:25.828  [2024-11-19 16:57:18.655093] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:13:26.086   16:57:18	-- bdev/bdev_raid.sh@334 -- # return 0
00:13:26.086  
00:13:26.086  real	0m2.593s
00:13:26.086  user	0m3.868s
00:13:26.086  sys	0m0.492s
00:13:26.086   16:57:18	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:13:26.086   16:57:18	-- common/autotest_common.sh@10 -- # set +x
00:13:26.086  ************************************
00:13:26.086  END TEST raid0_resize_test
00:13:26.086  ************************************
00:13:26.345   16:57:18	-- bdev/bdev_raid.sh@725 -- # for n in {2..4}
00:13:26.345   16:57:18	-- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1
00:13:26.345   16:57:18	-- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false
00:13:26.345   16:57:18	-- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']'
00:13:26.345   16:57:18	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:13:26.345   16:57:18	-- common/autotest_common.sh@10 -- # set +x
00:13:26.345  ************************************
00:13:26.345  START TEST raid_state_function_test
00:13:26.345  ************************************
00:13:26.345   16:57:18	-- common/autotest_common.sh@1114 -- # raid_state_function_test raid0 2 false
00:13:26.345   16:57:18	-- bdev/bdev_raid.sh@202 -- # local raid_level=raid0
00:13:26.345   16:57:18	-- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2
00:13:26.345   16:57:18	-- bdev/bdev_raid.sh@204 -- # local superblock=false
00:13:26.345   16:57:18	-- bdev/bdev_raid.sh@205 -- # local raid_bdev
00:13:26.345    16:57:18	-- bdev/bdev_raid.sh@206 -- # (( i = 1 ))
00:13:26.345    16:57:18	-- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:13:26.345    16:57:18	-- bdev/bdev_raid.sh@206 -- # echo BaseBdev1
00:13:26.345    16:57:18	-- bdev/bdev_raid.sh@206 -- # (( i++ ))
00:13:26.345    16:57:18	-- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:13:26.345    16:57:18	-- bdev/bdev_raid.sh@206 -- # echo BaseBdev2
00:13:26.345    16:57:18	-- bdev/bdev_raid.sh@206 -- # (( i++ ))
00:13:26.345    16:57:18	-- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:13:26.345   16:57:18	-- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2')
00:13:26.345   16:57:18	-- bdev/bdev_raid.sh@206 -- # local base_bdevs
00:13:26.345   16:57:18	-- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid
00:13:26.345   16:57:18	-- bdev/bdev_raid.sh@208 -- # local strip_size
00:13:26.345   16:57:18	-- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg
00:13:26.345   16:57:18	-- bdev/bdev_raid.sh@210 -- # local superblock_create_arg
00:13:26.345   16:57:18	-- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']'
00:13:26.345   16:57:18	-- bdev/bdev_raid.sh@213 -- # strip_size=64
00:13:26.345   16:57:18	-- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64'
00:13:26.345   16:57:18	-- bdev/bdev_raid.sh@219 -- # '[' false = true ']'
00:13:26.345   16:57:18	-- bdev/bdev_raid.sh@222 -- # superblock_create_arg=
00:13:26.345   16:57:18	-- bdev/bdev_raid.sh@226 -- # raid_pid=122721
00:13:26.345  Process raid pid: 122721
00:13:26.345   16:57:18	-- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 122721'
00:13:26.345   16:57:18	-- bdev/bdev_raid.sh@228 -- # waitforlisten 122721 /var/tmp/spdk-raid.sock
00:13:26.345   16:57:18	-- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid
00:13:26.345   16:57:18	-- common/autotest_common.sh@829 -- # '[' -z 122721 ']'
00:13:26.345   16:57:18	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock
00:13:26.345  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...
00:13:26.345   16:57:18	-- common/autotest_common.sh@834 -- # local max_retries=100
00:13:26.345   16:57:18	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...'
00:13:26.345   16:57:18	-- common/autotest_common.sh@838 -- # xtrace_disable
00:13:26.345   16:57:18	-- common/autotest_common.sh@10 -- # set +x
00:13:26.345  [2024-11-19 16:57:19.048863] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:13:26.345  [2024-11-19 16:57:19.049122] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:13:26.604  [2024-11-19 16:57:19.204578] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:26.604  [2024-11-19 16:57:19.247572] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:13:26.604  [2024-11-19 16:57:19.289358] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:13:27.171   16:57:19	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:13:27.171   16:57:20	-- common/autotest_common.sh@862 -- # return 0
00:13:27.171   16:57:20	-- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid
00:13:27.430  [2024-11-19 16:57:20.158768] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:13:27.430  [2024-11-19 16:57:20.158863] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:13:27.430  [2024-11-19 16:57:20.158874] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:13:27.430  [2024-11-19 16:57:20.158892] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:13:27.430   16:57:20	-- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2
00:13:27.430   16:57:20	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:13:27.430   16:57:20	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:13:27.430   16:57:20	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid0
00:13:27.430   16:57:20	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:13:27.430   16:57:20	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2
00:13:27.430   16:57:20	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:13:27.430   16:57:20	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:13:27.430   16:57:20	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:13:27.430   16:57:20	-- bdev/bdev_raid.sh@125 -- # local tmp
00:13:27.430    16:57:20	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:13:27.430    16:57:20	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:13:27.689   16:57:20	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:13:27.689    "name": "Existed_Raid",
00:13:27.689    "uuid": "00000000-0000-0000-0000-000000000000",
00:13:27.689    "strip_size_kb": 64,
00:13:27.689    "state": "configuring",
00:13:27.689    "raid_level": "raid0",
00:13:27.689    "superblock": false,
00:13:27.689    "num_base_bdevs": 2,
00:13:27.689    "num_base_bdevs_discovered": 0,
00:13:27.689    "num_base_bdevs_operational": 2,
00:13:27.689    "base_bdevs_list": [
00:13:27.689      {
00:13:27.689        "name": "BaseBdev1",
00:13:27.689        "uuid": "00000000-0000-0000-0000-000000000000",
00:13:27.689        "is_configured": false,
00:13:27.690        "data_offset": 0,
00:13:27.690        "data_size": 0
00:13:27.690      },
00:13:27.690      {
00:13:27.690        "name": "BaseBdev2",
00:13:27.690        "uuid": "00000000-0000-0000-0000-000000000000",
00:13:27.690        "is_configured": false,
00:13:27.690        "data_offset": 0,
00:13:27.690        "data_size": 0
00:13:27.690      }
00:13:27.690    ]
00:13:27.690  }'
00:13:27.690   16:57:20	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:13:27.690   16:57:20	-- common/autotest_common.sh@10 -- # set +x
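[editor note] Each verify_raid_bdev_state call in this test boils down to fetching the record above and asserting on its fields. A jq-based sketch of the same checks; the helper's exact comparisons are abbreviated here:

rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }

verify_raid_bdev_state() {
    local name=$1 expected_state=$2 raid_level=$3 strip_size=$4 operational=$5
    local info
    info=$(rpc bdev_raid_get_bdevs all | jq -r ".[] | select(.name == \"$name\")")
    [ "$(echo "$info" | jq -r .state)" = "$expected_state" ] &&
    [ "$(echo "$info" | jq -r .raid_level)" = "$raid_level" ] &&
    [ "$(echo "$info" | jq -r .strip_size_kb)" -eq "$strip_size" ] &&
    [ "$(echo "$info" | jq -r .num_base_bdevs_operational)" -eq "$operational" ]
}
verify_raid_bdev_state Existed_Raid configuring raid0 64 2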
00:13:28.258   16:57:21	-- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid
00:13:28.517  [2024-11-19 16:57:21.310839] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:13:28.517  [2024-11-19 16:57:21.310894] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005480 name Existed_Raid, state configuring
00:13:28.517   16:57:21	-- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid
00:13:28.776  [2024-11-19 16:57:21.514906] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:13:28.776  [2024-11-19 16:57:21.514993] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:13:28.776  [2024-11-19 16:57:21.515004] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:13:28.776  [2024-11-19 16:57:21.515028] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:13:28.776   16:57:21	-- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1
00:13:29.035  [2024-11-19 16:57:21.776414] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:13:29.035  BaseBdev1
00:13:29.035   16:57:21	-- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1
00:13:29.035   16:57:21	-- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1
00:13:29.035   16:57:21	-- common/autotest_common.sh@898 -- # local bdev_timeout=
00:13:29.035   16:57:21	-- common/autotest_common.sh@899 -- # local i
00:13:29.035   16:57:21	-- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:13:29.035   16:57:21	-- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:13:29.035   16:57:21	-- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:13:29.294   16:57:22	-- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000
00:13:29.553  [
00:13:29.553    {
00:13:29.553      "name": "BaseBdev1",
00:13:29.553      "aliases": [
00:13:29.553        "12e00e21-af87-47fd-95a4-afd31edec0f2"
00:13:29.553      ],
00:13:29.553      "product_name": "Malloc disk",
00:13:29.553      "block_size": 512,
00:13:29.553      "num_blocks": 65536,
00:13:29.553      "uuid": "12e00e21-af87-47fd-95a4-afd31edec0f2",
00:13:29.553      "assigned_rate_limits": {
00:13:29.553        "rw_ios_per_sec": 0,
00:13:29.553        "rw_mbytes_per_sec": 0,
00:13:29.553        "r_mbytes_per_sec": 0,
00:13:29.553        "w_mbytes_per_sec": 0
00:13:29.553      },
00:13:29.553      "claimed": true,
00:13:29.553      "claim_type": "exclusive_write",
00:13:29.553      "zoned": false,
00:13:29.553      "supported_io_types": {
00:13:29.553        "read": true,
00:13:29.553        "write": true,
00:13:29.553        "unmap": true,
00:13:29.553        "write_zeroes": true,
00:13:29.553        "flush": true,
00:13:29.553        "reset": true,
00:13:29.553        "compare": false,
00:13:29.553        "compare_and_write": false,
00:13:29.553        "abort": true,
00:13:29.553        "nvme_admin": false,
00:13:29.553        "nvme_io": false
00:13:29.553      },
00:13:29.553      "memory_domains": [
00:13:29.553        {
00:13:29.553          "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:13:29.553          "dma_device_type": 2
00:13:29.553        }
00:13:29.553      ],
00:13:29.553      "driver_specific": {}
00:13:29.553    }
00:13:29.553  ]
00:13:29.553   16:57:22	-- common/autotest_common.sh@905 -- # return 0
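[editor note] The waitforbdev pattern above, in sketch: settle examine callbacks first, then fetch the bdev with a timeout so a missing bdev fails the test quickly instead of hanging. The 2000 ms default matches the trace:

rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }

waitforbdev() {
    local bdev_name=$1 bdev_timeout=${2:-2000}   # milliseconds
    rpc bdev_wait_for_examine
    rpc bdev_get_bdevs -b "$bdev_name" -t "$bdev_timeout" > /dev/null
}
waitforbdev BaseBdev1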
00:13:29.553   16:57:22	-- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2
00:13:29.553   16:57:22	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:13:29.553   16:57:22	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:13:29.553   16:57:22	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid0
00:13:29.553   16:57:22	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:13:29.553   16:57:22	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2
00:13:29.553   16:57:22	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:13:29.553   16:57:22	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:13:29.553   16:57:22	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:13:29.553   16:57:22	-- bdev/bdev_raid.sh@125 -- # local tmp
00:13:29.553    16:57:22	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:13:29.553    16:57:22	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:13:29.813   16:57:22	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:13:29.813    "name": "Existed_Raid",
00:13:29.813    "uuid": "00000000-0000-0000-0000-000000000000",
00:13:29.813    "strip_size_kb": 64,
00:13:29.813    "state": "configuring",
00:13:29.813    "raid_level": "raid0",
00:13:29.813    "superblock": false,
00:13:29.813    "num_base_bdevs": 2,
00:13:29.813    "num_base_bdevs_discovered": 1,
00:13:29.813    "num_base_bdevs_operational": 2,
00:13:29.813    "base_bdevs_list": [
00:13:29.813      {
00:13:29.813        "name": "BaseBdev1",
00:13:29.813        "uuid": "12e00e21-af87-47fd-95a4-afd31edec0f2",
00:13:29.813        "is_configured": true,
00:13:29.813        "data_offset": 0,
00:13:29.813        "data_size": 65536
00:13:29.813      },
00:13:29.813      {
00:13:29.813        "name": "BaseBdev2",
00:13:29.813        "uuid": "00000000-0000-0000-0000-000000000000",
00:13:29.813        "is_configured": false,
00:13:29.813        "data_offset": 0,
00:13:29.813        "data_size": 0
00:13:29.813      }
00:13:29.813    ]
00:13:29.813  }'
00:13:29.813   16:57:22	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:13:29.813   16:57:22	-- common/autotest_common.sh@10 -- # set +x
00:13:30.380   16:57:23	-- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid
00:13:30.380  [2024-11-19 16:57:23.216683] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:13:30.380  [2024-11-19 16:57:23.216746] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005780 name Existed_Raid, state configuring
00:13:30.639   16:57:23	-- bdev/bdev_raid.sh@244 -- # '[' false = true ']'
00:13:30.639   16:57:23	-- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid
00:13:30.639  [2024-11-19 16:57:23.404776] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:13:30.639  [2024-11-19 16:57:23.406979] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:13:30.639  [2024-11-19 16:57:23.407040] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:13:30.639   16:57:23	-- bdev/bdev_raid.sh@254 -- # (( i = 1 ))
00:13:30.639   16:57:23	-- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs ))
00:13:30.639   16:57:23	-- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2
00:13:30.639   16:57:23	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:13:30.639   16:57:23	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:13:30.639   16:57:23	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid0
00:13:30.639   16:57:23	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:13:30.639   16:57:23	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2
00:13:30.639   16:57:23	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:13:30.639   16:57:23	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:13:30.639   16:57:23	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:13:30.639   16:57:23	-- bdev/bdev_raid.sh@125 -- # local tmp
00:13:30.639    16:57:23	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:13:30.639    16:57:23	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:13:30.897   16:57:23	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:13:30.897    "name": "Existed_Raid",
00:13:30.897    "uuid": "00000000-0000-0000-0000-000000000000",
00:13:30.897    "strip_size_kb": 64,
00:13:30.897    "state": "configuring",
00:13:30.897    "raid_level": "raid0",
00:13:30.897    "superblock": false,
00:13:30.897    "num_base_bdevs": 2,
00:13:30.897    "num_base_bdevs_discovered": 1,
00:13:30.897    "num_base_bdevs_operational": 2,
00:13:30.897    "base_bdevs_list": [
00:13:30.897      {
00:13:30.897        "name": "BaseBdev1",
00:13:30.897        "uuid": "12e00e21-af87-47fd-95a4-afd31edec0f2",
00:13:30.897        "is_configured": true,
00:13:30.897        "data_offset": 0,
00:13:30.897        "data_size": 65536
00:13:30.897      },
00:13:30.897      {
00:13:30.897        "name": "BaseBdev2",
00:13:30.897        "uuid": "00000000-0000-0000-0000-000000000000",
00:13:30.897        "is_configured": false,
00:13:30.897        "data_offset": 0,
00:13:30.897        "data_size": 0
00:13:30.897      }
00:13:30.897    ]
00:13:30.897  }'
00:13:30.897   16:57:23	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:13:30.897   16:57:23	-- common/autotest_common.sh@10 -- # set +x
00:13:31.834   16:57:24	-- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2
00:13:31.834  [2024-11-19 16:57:24.642308] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:13:31.834  [2024-11-19 16:57:24.642442] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006080
00:13:31.834  [2024-11-19 16:57:24.642490] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512
00:13:31.834  [2024-11-19 16:57:24.642738] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000001f80
00:13:31.834  [2024-11-19 16:57:24.643494] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006080
00:13:31.834  [2024-11-19 16:57:24.643653] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006080
00:13:31.834  [2024-11-19 16:57:24.644095] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:13:31.834  BaseBdev2
00:13:31.834   16:57:24	-- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2
00:13:31.834   16:57:24	-- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2
00:13:31.834   16:57:24	-- common/autotest_common.sh@898 -- # local bdev_timeout=
00:13:31.834   16:57:24	-- common/autotest_common.sh@899 -- # local i
00:13:31.834   16:57:24	-- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:13:31.834   16:57:24	-- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:13:31.834   16:57:24	-- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:13:32.403   16:57:24	-- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000
00:13:32.403  [
00:13:32.403    {
00:13:32.403      "name": "BaseBdev2",
00:13:32.403      "aliases": [
00:13:32.403        "7b20df54-942b-4c4f-97ed-484236dc4c79"
00:13:32.403      ],
00:13:32.403      "product_name": "Malloc disk",
00:13:32.403      "block_size": 512,
00:13:32.403      "num_blocks": 65536,
00:13:32.403      "uuid": "7b20df54-942b-4c4f-97ed-484236dc4c79",
00:13:32.403      "assigned_rate_limits": {
00:13:32.403        "rw_ios_per_sec": 0,
00:13:32.403        "rw_mbytes_per_sec": 0,
00:13:32.403        "r_mbytes_per_sec": 0,
00:13:32.403        "w_mbytes_per_sec": 0
00:13:32.403      },
00:13:32.403      "claimed": true,
00:13:32.403      "claim_type": "exclusive_write",
00:13:32.403      "zoned": false,
00:13:32.403      "supported_io_types": {
00:13:32.403        "read": true,
00:13:32.403        "write": true,
00:13:32.403        "unmap": true,
00:13:32.403        "write_zeroes": true,
00:13:32.403        "flush": true,
00:13:32.403        "reset": true,
00:13:32.403        "compare": false,
00:13:32.403        "compare_and_write": false,
00:13:32.403        "abort": true,
00:13:32.403        "nvme_admin": false,
00:13:32.403        "nvme_io": false
00:13:32.403      },
00:13:32.403      "memory_domains": [
00:13:32.403        {
00:13:32.403          "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:13:32.403          "dma_device_type": 2
00:13:32.403        }
00:13:32.403      ],
00:13:32.403      "driver_specific": {}
00:13:32.403    }
00:13:32.403  ]
00:13:32.662   16:57:25	-- common/autotest_common.sh@905 -- # return 0
00:13:32.662   16:57:25	-- bdev/bdev_raid.sh@254 -- # (( i++ ))
00:13:32.662   16:57:25	-- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs ))
00:13:32.662   16:57:25	-- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2
00:13:32.662   16:57:25	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:13:32.662   16:57:25	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:13:32.662   16:57:25	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid0
00:13:32.662   16:57:25	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:13:32.662   16:57:25	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2
00:13:32.662   16:57:25	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:13:32.662   16:57:25	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:13:32.662   16:57:25	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:13:32.662   16:57:25	-- bdev/bdev_raid.sh@125 -- # local tmp
00:13:32.663    16:57:25	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:13:32.663    16:57:25	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:13:32.663   16:57:25	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:13:32.663    "name": "Existed_Raid",
00:13:32.663    "uuid": "f960257d-4387-4ea6-8026-db6e7157d6d2",
00:13:32.663    "strip_size_kb": 64,
00:13:32.663    "state": "online",
00:13:32.663    "raid_level": "raid0",
00:13:32.663    "superblock": false,
00:13:32.663    "num_base_bdevs": 2,
00:13:32.663    "num_base_bdevs_discovered": 2,
00:13:32.663    "num_base_bdevs_operational": 2,
00:13:32.663    "base_bdevs_list": [
00:13:32.663      {
00:13:32.663        "name": "BaseBdev1",
00:13:32.663        "uuid": "12e00e21-af87-47fd-95a4-afd31edec0f2",
00:13:32.663        "is_configured": true,
00:13:32.663        "data_offset": 0,
00:13:32.663        "data_size": 65536
00:13:32.663      },
00:13:32.663      {
00:13:32.663        "name": "BaseBdev2",
00:13:32.663        "uuid": "7b20df54-942b-4c4f-97ed-484236dc4c79",
00:13:32.663        "is_configured": true,
00:13:32.663        "data_offset": 0,
00:13:32.663        "data_size": 65536
00:13:32.663      }
00:13:32.663    ]
00:13:32.663  }'
00:13:32.663   16:57:25	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:13:32.663   16:57:25	-- common/autotest_common.sh@10 -- # set +x
00:13:33.231   16:57:26	-- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1
00:13:33.491  [2024-11-19 16:57:26.346805] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:13:33.491  [2024-11-19 16:57:26.347036] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:13:33.491  [2024-11-19 16:57:26.347210] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:13:33.749   16:57:26	-- bdev/bdev_raid.sh@263 -- # local expected_state
00:13:33.749   16:57:26	-- bdev/bdev_raid.sh@264 -- # has_redundancy raid0
00:13:33.749   16:57:26	-- bdev/bdev_raid.sh@195 -- # case $1 in
00:13:33.749   16:57:26	-- bdev/bdev_raid.sh@197 -- # return 1
00:13:33.749   16:57:26	-- bdev/bdev_raid.sh@265 -- # expected_state=offline
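[editor note] The offline expectation above comes from has_redundancy: raid0 carries no redundancy, so removing a base bdev must deconfigure the array to offline rather than leaving it degraded. A sketch follows; the full set of levels the real helper treats as redundant is only partially visible in this log:

has_redundancy() {
    case $1 in
        raid1) return 0 ;;   # assumed redundant level; others may exist
        *)     return 1 ;;   # raid0/concat take this branch, as traced
    esac
}
if has_redundancy raid0; then expected_state=online; else expected_state=offline; fi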
00:13:33.749   16:57:26	-- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1
00:13:33.749   16:57:26	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:13:33.749   16:57:26	-- bdev/bdev_raid.sh@118 -- # local expected_state=offline
00:13:33.749   16:57:26	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid0
00:13:33.749   16:57:26	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:13:33.749   16:57:26	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1
00:13:33.749   16:57:26	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:13:33.749   16:57:26	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:13:33.749   16:57:26	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:13:33.749   16:57:26	-- bdev/bdev_raid.sh@125 -- # local tmp
00:13:33.749    16:57:26	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:13:33.749    16:57:26	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:13:33.749   16:57:26	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:13:33.749    "name": "Existed_Raid",
00:13:33.749    "uuid": "f960257d-4387-4ea6-8026-db6e7157d6d2",
00:13:33.749    "strip_size_kb": 64,
00:13:33.749    "state": "offline",
00:13:33.749    "raid_level": "raid0",
00:13:33.749    "superblock": false,
00:13:33.749    "num_base_bdevs": 2,
00:13:33.749    "num_base_bdevs_discovered": 1,
00:13:33.749    "num_base_bdevs_operational": 1,
00:13:33.749    "base_bdevs_list": [
00:13:33.749      {
00:13:33.749        "name": null,
00:13:33.749        "uuid": "00000000-0000-0000-0000-000000000000",
00:13:33.749        "is_configured": false,
00:13:33.749        "data_offset": 0,
00:13:33.749        "data_size": 65536
00:13:33.749      },
00:13:33.749      {
00:13:33.749        "name": "BaseBdev2",
00:13:33.749        "uuid": "7b20df54-942b-4c4f-97ed-484236dc4c79",
00:13:33.749        "is_configured": true,
00:13:33.749        "data_offset": 0,
00:13:33.749        "data_size": 65536
00:13:33.749      }
00:13:33.749    ]
00:13:33.749  }'
00:13:33.749   16:57:26	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:13:33.749   16:57:26	-- common/autotest_common.sh@10 -- # set +x
00:13:34.684   16:57:27	-- bdev/bdev_raid.sh@273 -- # (( i = 1 ))
00:13:34.684   16:57:27	-- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs ))
00:13:34.684    16:57:27	-- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:13:34.684    16:57:27	-- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]'
00:13:34.684   16:57:27	-- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid
00:13:34.684   16:57:27	-- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:13:34.684   16:57:27	-- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2
00:13:34.943  [2024-11-19 16:57:27.706540] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:13:34.943  [2024-11-19 16:57:27.706801] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006080 name Existed_Raid, state offline
00:13:34.943   16:57:27	-- bdev/bdev_raid.sh@273 -- # (( i++ ))
00:13:34.943   16:57:27	-- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs ))
00:13:34.943    16:57:27	-- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:13:34.943    16:57:27	-- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)'
00:13:35.201   16:57:27	-- bdev/bdev_raid.sh@281 -- # raid_bdev=
00:13:35.201   16:57:27	-- bdev/bdev_raid.sh@282 -- # '[' -n '' ']'
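Deleting the second (and last) base bdev triggers raid_bdev_cleanup, after which the raid no longer appears in the RPC listing; the `.[0]["name"] | select(.)` query therefore yields an empty string, which the `'[' -n '' ']'` test above just confirmed. A simplified sketch of this teardown, reusing the rpc/sock variables from the earlier sketch:

  for bdev in BaseBdev1 BaseBdev2; do
      "$rpc" -s "$sock" bdev_malloc_delete "$bdev"   # each removal deconfigures the raid further
  done
  left=$("$rpc" -s "$sock" bdev_raid_get_bdevs all | jq -r '.[0]["name"] | select(.)')
  [[ -z $left ]] && echo 'raid bdev fully cleaned up'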
00:13:35.201   16:57:27	-- bdev/bdev_raid.sh@287 -- # killprocess 122721
00:13:35.201   16:57:27	-- common/autotest_common.sh@936 -- # '[' -z 122721 ']'
00:13:35.201   16:57:27	-- common/autotest_common.sh@940 -- # kill -0 122721
00:13:35.201    16:57:27	-- common/autotest_common.sh@941 -- # uname
00:13:35.201   16:57:27	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:13:35.201    16:57:27	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 122721
00:13:35.201  killing process with pid 122721
00:13:35.201   16:57:28	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:13:35.201   16:57:28	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:13:35.201   16:57:28	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 122721'
00:13:35.201   16:57:28	-- common/autotest_common.sh@955 -- # kill 122721
00:13:35.201   16:57:28	-- common/autotest_common.sh@960 -- # wait 122721
00:13:35.201  [2024-11-19 16:57:28.025155] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:13:35.201  [2024-11-19 16:57:28.025246] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit
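killprocess follows a fixed pattern: probe the pid with kill -0, confirm the command name with ps on Linux, then kill and reap the SPDK app; the fini_start/exit debug lines above are that app shutting down. A condensed sketch (the real helper also special-cases processes named sudo, as the trace shows):

  killprocess() {
      local pid=$1
      kill -0 "$pid" || return                    # nothing to do if the process is gone
      [[ $(uname) == Linux ]] && ps --no-headers -o comm= "$pid"
      echo "killing process with pid $pid"
      kill "$pid" && wait "$pid"                  # terminate and collect the exit status
  }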
00:13:35.459  ************************************
00:13:35.460  END TEST raid_state_function_test
00:13:35.460  ************************************
00:13:35.460   16:57:28	-- bdev/bdev_raid.sh@289 -- # return 0
00:13:35.460  
00:13:35.460  real	0m9.311s
00:13:35.460  user	0m16.574s
00:13:35.460  sys	0m1.492s
00:13:35.460   16:57:28	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:13:35.460   16:57:28	-- common/autotest_common.sh@10 -- # set +x
00:13:35.718   16:57:28	-- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true
00:13:35.718   16:57:28	-- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']'
00:13:35.718   16:57:28	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:13:35.718   16:57:28	-- common/autotest_common.sh@10 -- # set +x
00:13:35.718  ************************************
00:13:35.718  START TEST raid_state_function_test_sb
00:13:35.718  ************************************
00:13:35.718   16:57:28	-- common/autotest_common.sh@1114 -- # raid_state_function_test raid0 2 true
00:13:35.718   16:57:28	-- bdev/bdev_raid.sh@202 -- # local raid_level=raid0
00:13:35.718   16:57:28	-- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2
00:13:35.718   16:57:28	-- bdev/bdev_raid.sh@204 -- # local superblock=true
00:13:35.718   16:57:28	-- bdev/bdev_raid.sh@205 -- # local raid_bdev
00:13:35.718    16:57:28	-- bdev/bdev_raid.sh@206 -- # (( i = 1 ))
00:13:35.718    16:57:28	-- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:13:35.718    16:57:28	-- bdev/bdev_raid.sh@206 -- # echo BaseBdev1
00:13:35.718    16:57:28	-- bdev/bdev_raid.sh@206 -- # (( i++ ))
00:13:35.718    16:57:28	-- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:13:35.718    16:57:28	-- bdev/bdev_raid.sh@206 -- # echo BaseBdev2
00:13:35.718    16:57:28	-- bdev/bdev_raid.sh@206 -- # (( i++ ))
00:13:35.718    16:57:28	-- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:13:35.718   16:57:28	-- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2')
00:13:35.718   16:57:28	-- bdev/bdev_raid.sh@206 -- # local base_bdevs
00:13:35.718   16:57:28	-- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid
00:13:35.718   16:57:28	-- bdev/bdev_raid.sh@208 -- # local strip_size
00:13:35.718   16:57:28	-- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg
00:13:35.718   16:57:28	-- bdev/bdev_raid.sh@210 -- # local superblock_create_arg
00:13:35.718   16:57:28	-- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']'
00:13:35.718   16:57:28	-- bdev/bdev_raid.sh@213 -- # strip_size=64
00:13:35.718   16:57:28	-- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64'
00:13:35.718   16:57:28	-- bdev/bdev_raid.sh@219 -- # '[' true = true ']'
00:13:35.718   16:57:28	-- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s
00:13:35.718   16:57:28	-- bdev/bdev_raid.sh@226 -- # raid_pid=123030
00:13:35.718   16:57:28	-- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 123030'
00:13:35.718  Process raid pid: 123030
00:13:35.718   16:57:28	-- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid
00:13:35.718   16:57:28	-- bdev/bdev_raid.sh@228 -- # waitforlisten 123030 /var/tmp/spdk-raid.sock
00:13:35.718   16:57:28	-- common/autotest_common.sh@829 -- # '[' -z 123030 ']'
00:13:35.718   16:57:28	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock
00:13:35.718   16:57:28	-- common/autotest_common.sh@834 -- # local max_retries=100
00:13:35.718   16:57:28	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...'
00:13:35.719  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...
00:13:35.719   16:57:28	-- common/autotest_common.sh@838 -- # xtrace_disable
00:13:35.719   16:57:28	-- common/autotest_common.sh@10 -- # set +x
00:13:35.719  [2024-11-19 16:57:28.421620] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:13:35.719  [2024-11-19 16:57:28.421965] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:13:35.719  [2024-11-19 16:57:28.558565] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:35.978  [2024-11-19 16:57:28.606089] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:13:35.978  [2024-11-19 16:57:28.648832] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
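The fixture is the bdev_svc stub app pointed at a private RPC socket, and waitforlisten blocks until that socket answers. A sketch using the paths from this log; the polling loop is an assumed simplification of the helper:

  /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &
  raid_pid=$!
  # Retry a cheap RPC until the app is up and listening.
  until "$rpc" -s /var/tmp/spdk-raid.sock rpc_get_methods &>/dev/null; do
      sleep 0.1
  done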
00:13:36.544   16:57:29	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:13:36.544   16:57:29	-- common/autotest_common.sh@862 -- # return 0
00:13:36.544   16:57:29	-- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid
00:13:36.802  [2024-11-19 16:57:29.651722] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:13:36.802  [2024-11-19 16:57:29.651983] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:13:36.802  [2024-11-19 16:57:29.652080] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:13:36.802  [2024-11-19 16:57:29.652131] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
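Creating the raid with -s (superblock) before either member exists is deliberate: bdev_raid_create registers Existed_Raid but parks it in the "configuring" state until both base bdevs appear, exactly what the state dump below reports. Sketch:

  "$rpc" -s "$sock" bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid
  "$rpc" -s "$sock" bdev_raid_get_bdevs all | jq -r '.[0].state'   # -> configuring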
00:13:37.061   16:57:29	-- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2
00:13:37.061   16:57:29	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:13:37.061   16:57:29	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:13:37.061   16:57:29	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid0
00:13:37.061   16:57:29	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:13:37.061   16:57:29	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2
00:13:37.061   16:57:29	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:13:37.061   16:57:29	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:13:37.061   16:57:29	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:13:37.061   16:57:29	-- bdev/bdev_raid.sh@125 -- # local tmp
00:13:37.061    16:57:29	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:13:37.061    16:57:29	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:13:37.061   16:57:29	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:13:37.061    "name": "Existed_Raid",
00:13:37.061    "uuid": "21518812-c8fb-45e9-a23c-90448af3d094",
00:13:37.061    "strip_size_kb": 64,
00:13:37.061    "state": "configuring",
00:13:37.061    "raid_level": "raid0",
00:13:37.061    "superblock": true,
00:13:37.061    "num_base_bdevs": 2,
00:13:37.061    "num_base_bdevs_discovered": 0,
00:13:37.061    "num_base_bdevs_operational": 2,
00:13:37.061    "base_bdevs_list": [
00:13:37.061      {
00:13:37.061        "name": "BaseBdev1",
00:13:37.061        "uuid": "00000000-0000-0000-0000-000000000000",
00:13:37.061        "is_configured": false,
00:13:37.061        "data_offset": 0,
00:13:37.061        "data_size": 0
00:13:37.061      },
00:13:37.061      {
00:13:37.061        "name": "BaseBdev2",
00:13:37.061        "uuid": "00000000-0000-0000-0000-000000000000",
00:13:37.061        "is_configured": false,
00:13:37.061        "data_offset": 0,
00:13:37.061        "data_size": 0
00:13:37.061      }
00:13:37.061    ]
00:13:37.061  }'
00:13:37.061   16:57:29	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:13:37.061   16:57:29	-- common/autotest_common.sh@10 -- # set +x
00:13:37.629   16:57:30	-- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid
00:13:37.888  [2024-11-19 16:57:30.715769] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:13:37.888  [2024-11-19 16:57:30.715981] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005480 name Existed_Raid, state configuring
00:13:37.888   16:57:30	-- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid
00:13:38.455  [2024-11-19 16:57:31.059860] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:13:38.455  [2024-11-19 16:57:31.060106] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:13:38.455  [2024-11-19 16:57:31.060194] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:13:38.455  [2024-11-19 16:57:31.060249] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:13:38.455   16:57:31	-- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1
00:13:38.456  [2024-11-19 16:57:31.273162] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:13:38.456  BaseBdev1
00:13:38.456   16:57:31	-- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1
00:13:38.456   16:57:31	-- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1
00:13:38.456   16:57:31	-- common/autotest_common.sh@898 -- # local bdev_timeout=
00:13:38.456   16:57:31	-- common/autotest_common.sh@899 -- # local i
00:13:38.456   16:57:31	-- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:13:38.456   16:57:31	-- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:13:38.456   16:57:31	-- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:13:39.033   16:57:31	-- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000
00:13:39.033  [
00:13:39.033    {
00:13:39.033      "name": "BaseBdev1",
00:13:39.033      "aliases": [
00:13:39.033        "c33bec9c-a6ff-4d97-99e8-5980e1755fd9"
00:13:39.033      ],
00:13:39.033      "product_name": "Malloc disk",
00:13:39.033      "block_size": 512,
00:13:39.034      "num_blocks": 65536,
00:13:39.034      "uuid": "c33bec9c-a6ff-4d97-99e8-5980e1755fd9",
00:13:39.034      "assigned_rate_limits": {
00:13:39.034        "rw_ios_per_sec": 0,
00:13:39.034        "rw_mbytes_per_sec": 0,
00:13:39.034        "r_mbytes_per_sec": 0,
00:13:39.034        "w_mbytes_per_sec": 0
00:13:39.034      },
00:13:39.034      "claimed": true,
00:13:39.034      "claim_type": "exclusive_write",
00:13:39.034      "zoned": false,
00:13:39.034      "supported_io_types": {
00:13:39.034        "read": true,
00:13:39.034        "write": true,
00:13:39.034        "unmap": true,
00:13:39.034        "write_zeroes": true,
00:13:39.034        "flush": true,
00:13:39.034        "reset": true,
00:13:39.034        "compare": false,
00:13:39.034        "compare_and_write": false,
00:13:39.034        "abort": true,
00:13:39.034        "nvme_admin": false,
00:13:39.034        "nvme_io": false
00:13:39.034      },
00:13:39.034      "memory_domains": [
00:13:39.034        {
00:13:39.034          "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:13:39.034          "dma_device_type": 2
00:13:39.034        }
00:13:39.034      ],
00:13:39.034      "driver_specific": {}
00:13:39.034    }
00:13:39.034  ]
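The listing is consistent with the earlier bdev_malloc_create 32 512 call: a 32 MiB malloc disk carved into 512-byte blocks is exactly the num_blocks shown.

  echo $(( 32 * 1024 * 1024 / 512 ))   # 65536, matching "num_blocks" above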
00:13:39.034   16:57:31	-- common/autotest_common.sh@905 -- # return 0
00:13:39.034   16:57:31	-- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2
00:13:39.034   16:57:31	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:13:39.034   16:57:31	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:13:39.034   16:57:31	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid0
00:13:39.034   16:57:31	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:13:39.034   16:57:31	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2
00:13:39.034   16:57:31	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:13:39.034   16:57:31	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:13:39.034   16:57:31	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:13:39.034   16:57:31	-- bdev/bdev_raid.sh@125 -- # local tmp
00:13:39.034    16:57:31	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:13:39.034    16:57:31	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:13:39.293   16:57:32	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:13:39.293    "name": "Existed_Raid",
00:13:39.293    "uuid": "cff95cf5-3ae5-4bd1-b248-550d46c96d10",
00:13:39.293    "strip_size_kb": 64,
00:13:39.293    "state": "configuring",
00:13:39.293    "raid_level": "raid0",
00:13:39.293    "superblock": true,
00:13:39.293    "num_base_bdevs": 2,
00:13:39.293    "num_base_bdevs_discovered": 1,
00:13:39.293    "num_base_bdevs_operational": 2,
00:13:39.293    "base_bdevs_list": [
00:13:39.293      {
00:13:39.293        "name": "BaseBdev1",
00:13:39.293        "uuid": "c33bec9c-a6ff-4d97-99e8-5980e1755fd9",
00:13:39.293        "is_configured": true,
00:13:39.293        "data_offset": 2048,
00:13:39.293        "data_size": 63488
00:13:39.293      },
00:13:39.293      {
00:13:39.293        "name": "BaseBdev2",
00:13:39.293        "uuid": "00000000-0000-0000-0000-000000000000",
00:13:39.293        "is_configured": false,
00:13:39.293        "data_offset": 0,
00:13:39.293        "data_size": 0
00:13:39.293      }
00:13:39.293    ]
00:13:39.293  }'
00:13:39.293   16:57:32	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:13:39.293   16:57:32	-- common/autotest_common.sh@10 -- # set +x
00:13:39.861   16:57:32	-- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid
00:13:40.120  [2024-11-19 16:57:32.865520] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:13:40.120  [2024-11-19 16:57:32.865754] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005780 name Existed_Raid, state configuring
00:13:40.120   16:57:32	-- bdev/bdev_raid.sh@244 -- # '[' true = true ']'
00:13:40.120   16:57:32	-- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1
00:13:40.380   16:57:33	-- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1
00:13:40.639  BaseBdev1
00:13:40.639   16:57:33	-- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1
00:13:40.639   16:57:33	-- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1
00:13:40.639   16:57:33	-- common/autotest_common.sh@898 -- # local bdev_timeout=
00:13:40.639   16:57:33	-- common/autotest_common.sh@899 -- # local i
00:13:40.639   16:57:33	-- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:13:40.639   16:57:33	-- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:13:40.639   16:57:33	-- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:13:40.898   16:57:33	-- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000
00:13:40.898  [
00:13:40.898    {
00:13:40.898      "name": "BaseBdev1",
00:13:40.898      "aliases": [
00:13:40.898        "6a2cd7e9-fe7d-45b9-a1d8-19d50b6d4640"
00:13:40.898      ],
00:13:40.898      "product_name": "Malloc disk",
00:13:40.898      "block_size": 512,
00:13:40.898      "num_blocks": 65536,
00:13:40.898      "uuid": "6a2cd7e9-fe7d-45b9-a1d8-19d50b6d4640",
00:13:40.898      "assigned_rate_limits": {
00:13:40.898        "rw_ios_per_sec": 0,
00:13:40.898        "rw_mbytes_per_sec": 0,
00:13:40.898        "r_mbytes_per_sec": 0,
00:13:40.898        "w_mbytes_per_sec": 0
00:13:40.898      },
00:13:40.898      "claimed": false,
00:13:40.898      "zoned": false,
00:13:40.898      "supported_io_types": {
00:13:40.898        "read": true,
00:13:40.898        "write": true,
00:13:40.898        "unmap": true,
00:13:40.898        "write_zeroes": true,
00:13:40.898        "flush": true,
00:13:40.898        "reset": true,
00:13:40.898        "compare": false,
00:13:40.898        "compare_and_write": false,
00:13:40.898        "abort": true,
00:13:40.898        "nvme_admin": false,
00:13:40.898        "nvme_io": false
00:13:40.898      },
00:13:40.898      "memory_domains": [
00:13:40.898        {
00:13:40.898          "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:13:40.898          "dma_device_type": 2
00:13:40.898        }
00:13:40.898      ],
00:13:40.898      "driver_specific": {}
00:13:40.898    }
00:13:40.898  ]
00:13:40.898   16:57:33	-- common/autotest_common.sh@905 -- # return 0
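waitforbdev, which just returned 0, settles pending examine callbacks and then queries the bdev with a timeout so the call blocks until registration completes. The two RPCs it issues, as seen in the trace:

  "$rpc" -s "$sock" bdev_wait_for_examine
  "$rpc" -s "$sock" bdev_get_bdevs -b BaseBdev1 -t 2000 >/dev/null && echo 'BaseBdev1 ready'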
00:13:40.898   16:57:33	-- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid
00:13:41.158  [2024-11-19 16:57:33.954467] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:13:41.158  [2024-11-19 16:57:33.956972] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:13:41.158  [2024-11-19 16:57:33.957151] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:13:41.158   16:57:33	-- bdev/bdev_raid.sh@254 -- # (( i = 1 ))
00:13:41.158   16:57:33	-- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs ))
00:13:41.158   16:57:33	-- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2
00:13:41.158   16:57:33	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:13:41.158   16:57:33	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:13:41.158   16:57:33	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid0
00:13:41.158   16:57:33	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:13:41.158   16:57:33	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2
00:13:41.158   16:57:33	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:13:41.158   16:57:33	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:13:41.158   16:57:33	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:13:41.158   16:57:33	-- bdev/bdev_raid.sh@125 -- # local tmp
00:13:41.158    16:57:33	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:13:41.158    16:57:33	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:13:41.416   16:57:34	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:13:41.416    "name": "Existed_Raid",
00:13:41.416    "uuid": "e450105c-558d-4908-beb1-12ce5cd3e9be",
00:13:41.416    "strip_size_kb": 64,
00:13:41.416    "state": "configuring",
00:13:41.416    "raid_level": "raid0",
00:13:41.416    "superblock": true,
00:13:41.416    "num_base_bdevs": 2,
00:13:41.416    "num_base_bdevs_discovered": 1,
00:13:41.416    "num_base_bdevs_operational": 2,
00:13:41.417    "base_bdevs_list": [
00:13:41.417      {
00:13:41.417        "name": "BaseBdev1",
00:13:41.417        "uuid": "6a2cd7e9-fe7d-45b9-a1d8-19d50b6d4640",
00:13:41.417        "is_configured": true,
00:13:41.417        "data_offset": 2048,
00:13:41.417        "data_size": 63488
00:13:41.417      },
00:13:41.417      {
00:13:41.417        "name": "BaseBdev2",
00:13:41.417        "uuid": "00000000-0000-0000-0000-000000000000",
00:13:41.417        "is_configured": false,
00:13:41.417        "data_offset": 0,
00:13:41.417        "data_size": 0
00:13:41.417      }
00:13:41.417    ]
00:13:41.417  }'
00:13:41.417   16:57:34	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:13:41.417   16:57:34	-- common/autotest_common.sh@10 -- # set +x
00:13:41.984   16:57:34	-- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2
00:13:42.242  [2024-11-19 16:57:34.986226] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:13:42.242  [2024-11-19 16:57:34.986730] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006680
00:13:42.242  [2024-11-19 16:57:34.986915] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512
00:13:42.242  [2024-11-19 16:57:34.987162] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002050
00:13:42.242  BaseBdev2
00:13:42.242  [2024-11-19 16:57:34.987754] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006680
00:13:42.242  [2024-11-19 16:57:34.987954] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006680
00:13:42.242  [2024-11-19 16:57:34.988304] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
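With -s each member donates its head to an on-disk superblock, which is why the dumps that follow report data_offset 2048 and data_size 63488 instead of the full 65536 blocks, and why the assembled two-disk raid0 logs blockcnt 126976. The accounting, in 512-byte blocks:

  echo $(( 65536 - 2048 ))   # 63488 data blocks per base bdev
  echo $(( 2 * 63488 ))      # 126976, the raid0 blockcnt logged above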
00:13:42.242   16:57:34	-- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2
00:13:42.242   16:57:34	-- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2
00:13:42.242   16:57:34	-- common/autotest_common.sh@898 -- # local bdev_timeout=
00:13:42.242   16:57:34	-- common/autotest_common.sh@899 -- # local i
00:13:42.242   16:57:34	-- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:13:42.242   16:57:34	-- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:13:42.242   16:57:34	-- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:13:42.501   16:57:35	-- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000
00:13:42.501  [
00:13:42.501    {
00:13:42.501      "name": "BaseBdev2",
00:13:42.501      "aliases": [
00:13:42.501        "cd0e125c-31b7-406d-84df-bfb6139ad3dd"
00:13:42.501      ],
00:13:42.501      "product_name": "Malloc disk",
00:13:42.501      "block_size": 512,
00:13:42.501      "num_blocks": 65536,
00:13:42.501      "uuid": "cd0e125c-31b7-406d-84df-bfb6139ad3dd",
00:13:42.501      "assigned_rate_limits": {
00:13:42.501        "rw_ios_per_sec": 0,
00:13:42.501        "rw_mbytes_per_sec": 0,
00:13:42.501        "r_mbytes_per_sec": 0,
00:13:42.501        "w_mbytes_per_sec": 0
00:13:42.501      },
00:13:42.501      "claimed": true,
00:13:42.501      "claim_type": "exclusive_write",
00:13:42.501      "zoned": false,
00:13:42.501      "supported_io_types": {
00:13:42.501        "read": true,
00:13:42.501        "write": true,
00:13:42.501        "unmap": true,
00:13:42.501        "write_zeroes": true,
00:13:42.501        "flush": true,
00:13:42.501        "reset": true,
00:13:42.501        "compare": false,
00:13:42.501        "compare_and_write": false,
00:13:42.501        "abort": true,
00:13:42.501        "nvme_admin": false,
00:13:42.501        "nvme_io": false
00:13:42.501      },
00:13:42.501      "memory_domains": [
00:13:42.501        {
00:13:42.501          "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:13:42.501          "dma_device_type": 2
00:13:42.501        }
00:13:42.501      ],
00:13:42.501      "driver_specific": {}
00:13:42.501    }
00:13:42.501  ]
00:13:42.759   16:57:35	-- common/autotest_common.sh@905 -- # return 0
00:13:42.759   16:57:35	-- bdev/bdev_raid.sh@254 -- # (( i++ ))
00:13:42.759   16:57:35	-- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs ))
00:13:42.759   16:57:35	-- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2
00:13:42.759   16:57:35	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:13:42.759   16:57:35	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:13:42.759   16:57:35	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid0
00:13:42.759   16:57:35	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:13:42.759   16:57:35	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2
00:13:42.759   16:57:35	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:13:42.759   16:57:35	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:13:42.760   16:57:35	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:13:42.760   16:57:35	-- bdev/bdev_raid.sh@125 -- # local tmp
00:13:42.760    16:57:35	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:13:42.760    16:57:35	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:13:42.760   16:57:35	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:13:42.760    "name": "Existed_Raid",
00:13:42.760    "uuid": "e450105c-558d-4908-beb1-12ce5cd3e9be",
00:13:42.760    "strip_size_kb": 64,
00:13:42.760    "state": "online",
00:13:42.760    "raid_level": "raid0",
00:13:42.760    "superblock": true,
00:13:42.760    "num_base_bdevs": 2,
00:13:42.760    "num_base_bdevs_discovered": 2,
00:13:42.760    "num_base_bdevs_operational": 2,
00:13:42.760    "base_bdevs_list": [
00:13:42.760      {
00:13:42.760        "name": "BaseBdev1",
00:13:42.760        "uuid": "6a2cd7e9-fe7d-45b9-a1d8-19d50b6d4640",
00:13:42.760        "is_configured": true,
00:13:42.760        "data_offset": 2048,
00:13:42.760        "data_size": 63488
00:13:42.760      },
00:13:42.760      {
00:13:42.760        "name": "BaseBdev2",
00:13:42.760        "uuid": "cd0e125c-31b7-406d-84df-bfb6139ad3dd",
00:13:42.760        "is_configured": true,
00:13:42.760        "data_offset": 2048,
00:13:42.760        "data_size": 63488
00:13:42.760      }
00:13:42.760    ]
00:13:42.760  }'
00:13:42.760   16:57:35	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:13:42.760   16:57:35	-- common/autotest_common.sh@10 -- # set +x
00:13:43.326   16:57:36	-- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1
00:13:43.584  [2024-11-19 16:57:36.262599] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:13:43.584  [2024-11-19 16:57:36.262807] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:13:43.584  [2024-11-19 16:57:36.263032] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:13:43.584   16:57:36	-- bdev/bdev_raid.sh@263 -- # local expected_state
00:13:43.584   16:57:36	-- bdev/bdev_raid.sh@264 -- # has_redundancy raid0
00:13:43.584   16:57:36	-- bdev/bdev_raid.sh@195 -- # case $1 in
00:13:43.584   16:57:36	-- bdev/bdev_raid.sh@197 -- # return 1
00:13:43.584   16:57:36	-- bdev/bdev_raid.sh@265 -- # expected_state=offline
00:13:43.584   16:57:36	-- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1
00:13:43.584   16:57:36	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:13:43.584   16:57:36	-- bdev/bdev_raid.sh@118 -- # local expected_state=offline
00:13:43.584   16:57:36	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid0
00:13:43.584   16:57:36	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:13:43.584   16:57:36	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1
00:13:43.584   16:57:36	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:13:43.584   16:57:36	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:13:43.584   16:57:36	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:13:43.584   16:57:36	-- bdev/bdev_raid.sh@125 -- # local tmp
00:13:43.584    16:57:36	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:13:43.584    16:57:36	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:13:43.843   16:57:36	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:13:43.843    "name": "Existed_Raid",
00:13:43.843    "uuid": "e450105c-558d-4908-beb1-12ce5cd3e9be",
00:13:43.843    "strip_size_kb": 64,
00:13:43.843    "state": "offline",
00:13:43.843    "raid_level": "raid0",
00:13:43.843    "superblock": true,
00:13:43.843    "num_base_bdevs": 2,
00:13:43.843    "num_base_bdevs_discovered": 1,
00:13:43.843    "num_base_bdevs_operational": 1,
00:13:43.843    "base_bdevs_list": [
00:13:43.843      {
00:13:43.843        "name": null,
00:13:43.843        "uuid": "00000000-0000-0000-0000-000000000000",
00:13:43.843        "is_configured": false,
00:13:43.843        "data_offset": 2048,
00:13:43.843        "data_size": 63488
00:13:43.843      },
00:13:43.843      {
00:13:43.843        "name": "BaseBdev2",
00:13:43.843        "uuid": "cd0e125c-31b7-406d-84df-bfb6139ad3dd",
00:13:43.843        "is_configured": true,
00:13:43.843        "data_offset": 2048,
00:13:43.843        "data_size": 63488
00:13:43.843      }
00:13:43.843    ]
00:13:43.843  }'
00:13:43.843   16:57:36	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:13:43.843   16:57:36	-- common/autotest_common.sh@10 -- # set +x
00:13:44.409   16:57:36	-- bdev/bdev_raid.sh@273 -- # (( i = 1 ))
00:13:44.409   16:57:36	-- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs ))
00:13:44.409    16:57:36	-- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:13:44.409    16:57:36	-- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]'
00:13:44.409   16:57:37	-- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid
00:13:44.409   16:57:37	-- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:13:44.409   16:57:37	-- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2
00:13:44.667  [2024-11-19 16:57:37.409816] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:13:44.667  [2024-11-19 16:57:37.410063] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state offline
00:13:44.667   16:57:37	-- bdev/bdev_raid.sh@273 -- # (( i++ ))
00:13:44.667   16:57:37	-- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs ))
00:13:44.667    16:57:37	-- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:13:44.667    16:57:37	-- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)'
00:13:44.926   16:57:37	-- bdev/bdev_raid.sh@281 -- # raid_bdev=
00:13:44.926   16:57:37	-- bdev/bdev_raid.sh@282 -- # '[' -n '' ']'
00:13:44.926   16:57:37	-- bdev/bdev_raid.sh@287 -- # killprocess 123030
00:13:44.926   16:57:37	-- common/autotest_common.sh@936 -- # '[' -z 123030 ']'
00:13:44.926   16:57:37	-- common/autotest_common.sh@940 -- # kill -0 123030
00:13:44.926    16:57:37	-- common/autotest_common.sh@941 -- # uname
00:13:44.926   16:57:37	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:13:44.926    16:57:37	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 123030
00:13:44.926   16:57:37	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:13:44.926   16:57:37	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:13:44.926   16:57:37	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 123030'
00:13:44.926  killing process with pid 123030
00:13:44.926   16:57:37	-- common/autotest_common.sh@955 -- # kill 123030
00:13:44.926   16:57:37	-- common/autotest_common.sh@960 -- # wait 123030
00:13:44.926  [2024-11-19 16:57:37.648381] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:13:44.926  [2024-11-19 16:57:37.648477] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:13:45.186   16:57:37	-- bdev/bdev_raid.sh@289 -- # return 0
00:13:45.186  
00:13:45.186  real	0m9.538s
00:13:45.186  user	0m16.934s
00:13:45.186  sys	0m1.532s
00:13:45.186   16:57:37	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:13:45.186   16:57:37	-- common/autotest_common.sh@10 -- # set +x
00:13:45.186  ************************************
00:13:45.186  END TEST raid_state_function_test_sb
00:13:45.186  ************************************
00:13:45.186   16:57:37	-- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid0 2
00:13:45.186   16:57:37	-- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']'
00:13:45.186   16:57:37	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:13:45.186   16:57:37	-- common/autotest_common.sh@10 -- # set +x
00:13:45.186  ************************************
00:13:45.186  START TEST raid_superblock_test
00:13:45.186  ************************************
00:13:45.186   16:57:37	-- common/autotest_common.sh@1114 -- # raid_superblock_test raid0 2
00:13:45.186   16:57:37	-- bdev/bdev_raid.sh@338 -- # local raid_level=raid0
00:13:45.186   16:57:37	-- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=2
00:13:45.186   16:57:37	-- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=()
00:13:45.186   16:57:37	-- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc
00:13:45.186   16:57:37	-- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=()
00:13:45.186   16:57:37	-- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt
00:13:45.186   16:57:37	-- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=()
00:13:45.186   16:57:37	-- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid
00:13:45.186   16:57:37	-- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1
00:13:45.186   16:57:37	-- bdev/bdev_raid.sh@344 -- # local strip_size
00:13:45.186   16:57:37	-- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg
00:13:45.186   16:57:37	-- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid
00:13:45.186   16:57:37	-- bdev/bdev_raid.sh@347 -- # local raid_bdev
00:13:45.186   16:57:37	-- bdev/bdev_raid.sh@349 -- # '[' raid0 '!=' raid1 ']'
00:13:45.186   16:57:37	-- bdev/bdev_raid.sh@350 -- # strip_size=64
00:13:45.186   16:57:37	-- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64'
00:13:45.186   16:57:37	-- bdev/bdev_raid.sh@357 -- # raid_pid=123342
00:13:45.186   16:57:37	-- bdev/bdev_raid.sh@358 -- # waitforlisten 123342 /var/tmp/spdk-raid.sock
00:13:45.186   16:57:37	-- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid
00:13:45.186   16:57:37	-- common/autotest_common.sh@829 -- # '[' -z 123342 ']'
00:13:45.186   16:57:37	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock
00:13:45.186   16:57:37	-- common/autotest_common.sh@834 -- # local max_retries=100
00:13:45.186   16:57:37	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...'
00:13:45.186  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...
00:13:45.186   16:57:37	-- common/autotest_common.sh@838 -- # xtrace_disable
00:13:45.186   16:57:37	-- common/autotest_common.sh@10 -- # set +x
00:13:45.186  [2024-11-19 16:57:38.041261] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:13:45.186  [2024-11-19 16:57:38.041729] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid123342 ]
00:13:45.445  [2024-11-19 16:57:38.192409] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:45.445  [2024-11-19 16:57:38.233602] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:13:45.445  [2024-11-19 16:57:38.275122] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:13:46.011   16:57:38	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:13:46.012   16:57:38	-- common/autotest_common.sh@862 -- # return 0
00:13:46.012   16:57:38	-- bdev/bdev_raid.sh@361 -- # (( i = 1 ))
00:13:46.012   16:57:38	-- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs ))
00:13:46.012   16:57:38	-- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1
00:13:46.012   16:57:38	-- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1
00:13:46.012   16:57:38	-- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001
00:13:46.012   16:57:38	-- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc)
00:13:46.012   16:57:38	-- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt)
00:13:46.012   16:57:38	-- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:13:46.012   16:57:38	-- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1
00:13:46.269  malloc1
00:13:46.269   16:57:39	-- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:13:46.528  [2024-11-19 16:57:39.259279] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:13:46.528  [2024-11-19 16:57:39.259524] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:13:46.528  [2024-11-19 16:57:39.259598] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000005a80
00:13:46.528  [2024-11-19 16:57:39.259727] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:13:46.528  [2024-11-19 16:57:39.262237] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:13:46.528  [2024-11-19 16:57:39.262419] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:13:46.528  pt1
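raid_superblock_test builds each member as a passthru bdev stacked on a malloc disk, which lets it pin a fixed UUID per member. The pt1 leg just traced, condensed:

  "$rpc" -s "$sock" bdev_malloc_create 32 512 -b malloc1
  "$rpc" -s "$sock" bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001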
00:13:46.528   16:57:39	-- bdev/bdev_raid.sh@361 -- # (( i++ ))
00:13:46.528   16:57:39	-- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs ))
00:13:46.528   16:57:39	-- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2
00:13:46.528   16:57:39	-- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2
00:13:46.528   16:57:39	-- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002
00:13:46.528   16:57:39	-- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc)
00:13:46.528   16:57:39	-- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt)
00:13:46.528   16:57:39	-- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:13:46.528   16:57:39	-- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2
00:13:46.786  malloc2
00:13:46.786   16:57:39	-- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:13:47.044  [2024-11-19 16:57:39.696153] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:13:47.044  [2024-11-19 16:57:39.696369] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:13:47.044  [2024-11-19 16:57:39.696452] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680
00:13:47.044  [2024-11-19 16:57:39.696601] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:13:47.044  [2024-11-19 16:57:39.698907] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:13:47.044  [2024-11-19 16:57:39.699074] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:13:47.044  pt2
00:13:47.044   16:57:39	-- bdev/bdev_raid.sh@361 -- # (( i++ ))
00:13:47.044   16:57:39	-- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs ))
00:13:47.044   16:57:39	-- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2' -n raid_bdev1 -s
00:13:47.304  [2024-11-19 16:57:39.940665] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:13:47.304  [2024-11-19 16:57:39.942993] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:13:47.304  [2024-11-19 16:57:39.943308] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006c80
00:13:47.304  [2024-11-19 16:57:39.943421] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512
00:13:47.304  [2024-11-19 16:57:39.943614] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002120
00:13:47.304  [2024-11-19 16:57:39.944171] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006c80
00:13:47.304  [2024-11-19 16:57:39.944209] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000006c80
00:13:47.304  [2024-11-19 16:57:39.944602] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:13:47.304   16:57:39	-- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2
00:13:47.304   16:57:39	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:13:47.304   16:57:39	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:13:47.304   16:57:39	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid0
00:13:47.304   16:57:39	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:13:47.304   16:57:39	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2
00:13:47.304   16:57:39	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:13:47.304   16:57:39	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:13:47.304   16:57:39	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:13:47.304   16:57:39	-- bdev/bdev_raid.sh@125 -- # local tmp
00:13:47.304    16:57:39	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:13:47.304    16:57:39	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:47.304   16:57:40	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:13:47.304    "name": "raid_bdev1",
00:13:47.304    "uuid": "822b58bb-60ef-4dc3-a640-d48efcca2827",
00:13:47.304    "strip_size_kb": 64,
00:13:47.304    "state": "online",
00:13:47.304    "raid_level": "raid0",
00:13:47.304    "superblock": true,
00:13:47.304    "num_base_bdevs": 2,
00:13:47.304    "num_base_bdevs_discovered": 2,
00:13:47.304    "num_base_bdevs_operational": 2,
00:13:47.304    "base_bdevs_list": [
00:13:47.304      {
00:13:47.304        "name": "pt1",
00:13:47.304        "uuid": "ef799db4-8436-5854-bfa4-3899b778e9ba",
00:13:47.304        "is_configured": true,
00:13:47.304        "data_offset": 2048,
00:13:47.304        "data_size": 63488
00:13:47.304      },
00:13:47.304      {
00:13:47.304        "name": "pt2",
00:13:47.304        "uuid": "7628cda9-62c5-57ea-a3f6-617ca160f311",
00:13:47.304        "is_configured": true,
00:13:47.304        "data_offset": 2048,
00:13:47.304        "data_size": 63488
00:13:47.304      }
00:13:47.304    ]
00:13:47.304  }'
00:13:47.304   16:57:40	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:13:47.304   16:57:40	-- common/autotest_common.sh@10 -- # set +x
00:13:47.873    16:57:40	-- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid'
00:13:47.873    16:57:40	-- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1
00:13:48.131  [2024-11-19 16:57:40.905174] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:13:48.131   16:57:40	-- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=822b58bb-60ef-4dc3-a640-d48efcca2827
00:13:48.131   16:57:40	-- bdev/bdev_raid.sh@380 -- # '[' -z 822b58bb-60ef-4dc3-a640-d48efcca2827 ']'
00:13:48.131   16:57:40	-- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1
00:13:48.390  [2024-11-19 16:57:41.152996] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:13:48.390  [2024-11-19 16:57:41.153130] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:13:48.390  [2024-11-19 16:57:41.153362] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:13:48.390  [2024-11-19 16:57:41.153512] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:13:48.390  [2024-11-19 16:57:41.153589] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006c80 name raid_bdev1, state offline
00:13:48.390    16:57:41	-- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:13:48.390    16:57:41	-- bdev/bdev_raid.sh@386 -- # jq -r '.[]'
00:13:48.650   16:57:41	-- bdev/bdev_raid.sh@386 -- # raid_bdev=
00:13:48.650   16:57:41	-- bdev/bdev_raid.sh@387 -- # '[' -n '' ']'
00:13:48.650   16:57:41	-- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}"
00:13:48.650   16:57:41	-- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1
00:13:48.909   16:57:41	-- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}"
00:13:48.909   16:57:41	-- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2
00:13:49.168    16:57:41	-- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs
00:13:49.168    16:57:41	-- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any'
00:13:49.426   16:57:42	-- bdev/bdev_raid.sh@395 -- # '[' false == true ']'
00:13:49.426   16:57:42	-- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1
00:13:49.426   16:57:42	-- common/autotest_common.sh@650 -- # local es=0
00:13:49.426   16:57:42	-- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1
00:13:49.426   16:57:42	-- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:13:49.426   16:57:42	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:13:49.426    16:57:42	-- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:13:49.426   16:57:42	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:13:49.426    16:57:42	-- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:13:49.426   16:57:42	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:13:49.426   16:57:42	-- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:13:49.426   16:57:42	-- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]]
00:13:49.426   16:57:42	-- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1
00:13:49.426  [2024-11-19 16:57:42.225489] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed
00:13:49.426  [2024-11-19 16:57:42.228169] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
00:13:49.426  [2024-11-19 16:57:42.228381] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1
00:13:49.426  [2024-11-19 16:57:42.228558] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2
00:13:49.426  [2024-11-19 16:57:42.228693] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:13:49.426  [2024-11-19 16:57:42.228732] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name raid_bdev1, state configuring
00:13:49.426  request:
00:13:49.426  {
00:13:49.426    "name": "raid_bdev1",
00:13:49.426    "raid_level": "raid0",
00:13:49.426    "base_bdevs": [
00:13:49.426      "malloc1",
00:13:49.426      "malloc2"
00:13:49.426    ],
00:13:49.426    "superblock": false,
00:13:49.426    "strip_size_kb": 64,
00:13:49.426    "method": "bdev_raid_create",
00:13:49.426    "req_id": 1
00:13:49.426  }
00:13:49.426  Got JSON-RPC error response
00:13:49.426  response:
00:13:49.426  {
00:13:49.426    "code": -17,
00:13:49.426    "message": "Failed to create RAID bdev raid_bdev1: File exists"
00:13:49.426  }
00:13:49.426   16:57:42	-- common/autotest_common.sh@653 -- # es=1
00:13:49.426   16:57:42	-- common/autotest_common.sh@661 -- # (( es > 128 ))
00:13:49.426   16:57:42	-- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:13:49.426   16:57:42	-- common/autotest_common.sh@677 -- # (( !es == 0 ))
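This create is supposed to fail: malloc1 and malloc2 still carry the superblocks written for raid_bdev1, so the RPC answers -17 (File exists). The NOT wrapper inverts the exit status so the test passes only when the call errors out; a simplified sketch of that inversion (the real helper also records es, as the trace above shows):

  NOT() {
      if "$@"; then return 1; else return 0; fi   # succeed only when the wrapped command fails
  }
  NOT "$rpc" -s "$sock" bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1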
00:13:49.426    16:57:42	-- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:13:49.426    16:57:42	-- bdev/bdev_raid.sh@403 -- # jq -r '.[]'
00:13:49.685   16:57:42	-- bdev/bdev_raid.sh@403 -- # raid_bdev=
00:13:49.685   16:57:42	-- bdev/bdev_raid.sh@404 -- # '[' -n '' ']'
00:13:49.685   16:57:42	-- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:13:49.944  [2024-11-19 16:57:42.649450] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:13:49.944  [2024-11-19 16:57:42.649701] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:13:49.944  [2024-11-19 16:57:42.649857] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880
00:13:49.944  [2024-11-19 16:57:42.649951] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:13:49.944  [2024-11-19 16:57:42.652742] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:13:49.944  [2024-11-19 16:57:42.652905] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:13:49.944  [2024-11-19 16:57:42.653080] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1
00:13:49.944  [2024-11-19 16:57:42.653222] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:13:49.944  pt1
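Re-registering pt1 lets the examine path read its surviving raid superblock, so raid_bdev1 reappears in "configuring" with pt1 already claimed and one of two members discovered, as the state dump below shows. A quick check of that partial reassembly:

  "$rpc" -s "$sock" bdev_raid_get_bdevs all |
      jq -r '.[0] | "\(.state) \(.num_base_bdevs_discovered)/\(.num_base_bdevs)"'   # -> configuring 1/2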
00:13:49.944   16:57:42	-- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2
00:13:49.944   16:57:42	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:13:49.944   16:57:42	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:13:49.944   16:57:42	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid0
00:13:49.944   16:57:42	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:13:49.944   16:57:42	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2
00:13:49.944   16:57:42	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:13:49.944   16:57:42	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:13:49.945   16:57:42	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:13:49.945   16:57:42	-- bdev/bdev_raid.sh@125 -- # local tmp
00:13:49.945    16:57:42	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:49.945    16:57:42	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:13:50.218   16:57:42	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:13:50.218    "name": "raid_bdev1",
00:13:50.218    "uuid": "822b58bb-60ef-4dc3-a640-d48efcca2827",
00:13:50.218    "strip_size_kb": 64,
00:13:50.218    "state": "configuring",
00:13:50.218    "raid_level": "raid0",
00:13:50.218    "superblock": true,
00:13:50.218    "num_base_bdevs": 2,
00:13:50.218    "num_base_bdevs_discovered": 1,
00:13:50.218    "num_base_bdevs_operational": 2,
00:13:50.218    "base_bdevs_list": [
00:13:50.218      {
00:13:50.218        "name": "pt1",
00:13:50.218        "uuid": "ef799db4-8436-5854-bfa4-3899b778e9ba",
00:13:50.218        "is_configured": true,
00:13:50.218        "data_offset": 2048,
00:13:50.218        "data_size": 63488
00:13:50.218      },
00:13:50.218      {
00:13:50.218        "name": null,
00:13:50.218        "uuid": "7628cda9-62c5-57ea-a3f6-617ca160f311",
00:13:50.218        "is_configured": false,
00:13:50.218        "data_offset": 2048,
00:13:50.218        "data_size": 63488
00:13:50.218      }
00:13:50.218    ]
00:13:50.218  }'
00:13:50.218   16:57:42	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:13:50.218   16:57:42	-- common/autotest_common.sh@10 -- # set +x
00:13:50.792   16:57:43	-- bdev/bdev_raid.sh@414 -- # '[' 2 -gt 2 ']'
00:13:50.792   16:57:43	-- bdev/bdev_raid.sh@422 -- # (( i = 1 ))
00:13:50.792   16:57:43	-- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs ))
00:13:50.792   16:57:43	-- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:13:50.792  [2024-11-19 16:57:43.621664] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:13:50.792  [2024-11-19 16:57:43.621957] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:13:50.792  [2024-11-19 16:57:43.622110] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x616000008180
00:13:50.792  [2024-11-19 16:57:43.622263] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:13:50.792  [2024-11-19 16:57:43.622826] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:13:50.792  [2024-11-19 16:57:43.622990] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:13:50.792  [2024-11-19 16:57:43.623160] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2
00:13:50.792  [2024-11-19 16:57:43.623260] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:13:50.792  [2024-11-19 16:57:43.623415] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007e80
00:13:50.792  [2024-11-19 16:57:43.623503] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512
00:13:50.792  [2024-11-19 16:57:43.623624] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390
00:13:50.792  [2024-11-19 16:57:43.624011] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007e80
00:13:50.792  [2024-11-19 16:57:43.624111] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000007e80
00:13:50.792  [2024-11-19 16:57:43.624298] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:13:50.792  pt2
00:13:50.792   16:57:43	-- bdev/bdev_raid.sh@422 -- # (( i++ ))
00:13:50.792   16:57:43	-- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs ))
00:13:50.792   16:57:43	-- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2
00:13:50.792   16:57:43	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:13:50.792   16:57:43	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:13:50.792   16:57:43	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid0
00:13:50.792   16:57:43	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:13:50.792   16:57:43	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2
00:13:50.792   16:57:43	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:13:50.792   16:57:43	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:13:50.792   16:57:43	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:13:50.792   16:57:43	-- bdev/bdev_raid.sh@125 -- # local tmp
00:13:50.792    16:57:43	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:13:50.792    16:57:43	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:51.361   16:57:43	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:13:51.361    "name": "raid_bdev1",
00:13:51.361    "uuid": "822b58bb-60ef-4dc3-a640-d48efcca2827",
00:13:51.361    "strip_size_kb": 64,
00:13:51.361    "state": "online",
00:13:51.361    "raid_level": "raid0",
00:13:51.361    "superblock": true,
00:13:51.361    "num_base_bdevs": 2,
00:13:51.361    "num_base_bdevs_discovered": 2,
00:13:51.361    "num_base_bdevs_operational": 2,
00:13:51.361    "base_bdevs_list": [
00:13:51.361      {
00:13:51.361        "name": "pt1",
00:13:51.361        "uuid": "ef799db4-8436-5854-bfa4-3899b778e9ba",
00:13:51.361        "is_configured": true,
00:13:51.361        "data_offset": 2048,
00:13:51.361        "data_size": 63488
00:13:51.361      },
00:13:51.361      {
00:13:51.361        "name": "pt2",
00:13:51.361        "uuid": "7628cda9-62c5-57ea-a3f6-617ca160f311",
00:13:51.361        "is_configured": true,
00:13:51.361        "data_offset": 2048,
00:13:51.361        "data_size": 63488
00:13:51.361      }
00:13:51.361    ]
00:13:51.361  }'
00:13:51.361   16:57:43	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:13:51.361   16:57:43	-- common/autotest_common.sh@10 -- # set +x
00:13:51.929    16:57:44	-- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1
00:13:51.929    16:57:44	-- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid'
00:13:51.929  [2024-11-19 16:57:44.686042] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:13:51.929   16:57:44	-- bdev/bdev_raid.sh@430 -- # '[' 822b58bb-60ef-4dc3-a640-d48efcca2827 '!=' 822b58bb-60ef-4dc3-a640-d48efcca2827 ']'
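The comparison above checks that the UUID reported by the generic bdev layer matches the one recorded by the raid module; a hedged sketch of that round trip, reusing the raid_bdev_info JSON captured earlier in this test:

  # UUID as recorded by the raid module (from the earlier bdev_raid_get_bdevs output).
  expected_uuid=$(jq -r '.uuid' <<< "$raid_bdev_info")
  # UUID as reported by the generic bdev layer for the assembled array.
  actual_uuid=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
                  bdev_get_bdevs -b raid_bdev1 | jq -r '.[] | .uuid')
  [ "$actual_uuid" = "$expected_uuid" ]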
00:13:51.929   16:57:44	-- bdev/bdev_raid.sh@434 -- # has_redundancy raid0
00:13:51.929   16:57:44	-- bdev/bdev_raid.sh@195 -- # case $1 in
00:13:51.929   16:57:44	-- bdev/bdev_raid.sh@197 -- # return 1
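has_redundancy is the level classifier used here; judging from the case/return trace above (raid0 falls through to return 1), its shape is roughly the following, a reconstruction rather than the verbatim bdev_raid.sh source:

  has_redundancy() {
      case $1 in
          # Mirrored levels can survive losing a base bdev.
          raid1) return 0 ;;
          # Striped/concatenated levels (raid0, concat) cannot.
          *) return 1 ;;
      esac
  }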
00:13:51.929   16:57:44	-- bdev/bdev_raid.sh@511 -- # killprocess 123342
00:13:51.929   16:57:44	-- common/autotest_common.sh@936 -- # '[' -z 123342 ']'
00:13:51.929   16:57:44	-- common/autotest_common.sh@940 -- # kill -0 123342
00:13:51.929    16:57:44	-- common/autotest_common.sh@941 -- # uname
00:13:51.929   16:57:44	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:13:51.929    16:57:44	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 123342
00:13:51.929   16:57:44	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:13:51.929   16:57:44	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:13:51.929   16:57:44	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 123342'
00:13:51.929  killing process with pid 123342
00:13:51.929   16:57:44	-- common/autotest_common.sh@955 -- # kill 123342
00:13:51.929   16:57:44	-- common/autotest_common.sh@960 -- # wait 123342
00:13:51.929  [2024-11-19 16:57:44.740348] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:13:51.929  [2024-11-19 16:57:44.740451] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:13:51.929  [2024-11-19 16:57:44.740513] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:13:51.929  [2024-11-19 16:57:44.740621] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007e80 name raid_bdev1, state offline
00:13:51.929  [2024-11-19 16:57:44.780329] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit
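killprocess then tears the daemon down; the trace above (kill -0 probe, ps name check, kill, wait) suggests a helper of roughly this shape. This is a hedged reconstruction of the autotest_common.sh helper; the real one also special-cases sudo-wrapped processes, as the reactor_0 = sudo test above shows:

  killprocess() {
      local pid=$1
      kill -0 "$pid"        # fail fast if the pid is already gone
      kill "$pid"           # request shutdown
      wait "$pid"           # reap it and propagate the exit status
  }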
00:13:52.497   16:57:45	-- bdev/bdev_raid.sh@513 -- # return 0
00:13:52.497  
00:13:52.497  real	0m7.192s
00:13:52.497  user	0m12.398s
00:13:52.497  sys	0m1.220s
00:13:52.497   16:57:45	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:13:52.497   16:57:45	-- common/autotest_common.sh@10 -- # set +x
00:13:52.497  ************************************
00:13:52.497  END TEST raid_superblock_test
00:13:52.497  ************************************
00:13:52.497   16:57:45	-- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1
00:13:52.497   16:57:45	-- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test concat 2 false
00:13:52.497   16:57:45	-- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']'
00:13:52.497   16:57:45	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:13:52.497   16:57:45	-- common/autotest_common.sh@10 -- # set +x
00:13:52.497  ************************************
00:13:52.497  START TEST raid_state_function_test
00:13:52.497  ************************************
00:13:52.497   16:57:45	-- common/autotest_common.sh@1114 -- # raid_state_function_test concat 2 false
00:13:52.497   16:57:45	-- bdev/bdev_raid.sh@202 -- # local raid_level=concat
00:13:52.497   16:57:45	-- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2
00:13:52.497   16:57:45	-- bdev/bdev_raid.sh@204 -- # local superblock=false
00:13:52.497   16:57:45	-- bdev/bdev_raid.sh@205 -- # local raid_bdev
00:13:52.497    16:57:45	-- bdev/bdev_raid.sh@206 -- # (( i = 1 ))
00:13:52.497    16:57:45	-- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:13:52.497    16:57:45	-- bdev/bdev_raid.sh@206 -- # echo BaseBdev1
00:13:52.497    16:57:45	-- bdev/bdev_raid.sh@206 -- # (( i++ ))
00:13:52.497    16:57:45	-- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:13:52.497    16:57:45	-- bdev/bdev_raid.sh@206 -- # echo BaseBdev2
00:13:52.497    16:57:45	-- bdev/bdev_raid.sh@206 -- # (( i++ ))
00:13:52.497    16:57:45	-- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:13:52.498   16:57:45	-- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2')
00:13:52.498   16:57:45	-- bdev/bdev_raid.sh@206 -- # local base_bdevs
00:13:52.498   16:57:45	-- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid
00:13:52.498   16:57:45	-- bdev/bdev_raid.sh@208 -- # local strip_size
00:13:52.498   16:57:45	-- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg
00:13:52.498   16:57:45	-- bdev/bdev_raid.sh@210 -- # local superblock_create_arg
00:13:52.498   16:57:45	-- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']'
00:13:52.498   16:57:45	-- bdev/bdev_raid.sh@213 -- # strip_size=64
00:13:52.498   16:57:45	-- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64'
00:13:52.498   16:57:45	-- bdev/bdev_raid.sh@219 -- # '[' false = true ']'
00:13:52.498   16:57:45	-- bdev/bdev_raid.sh@222 -- # superblock_create_arg=
00:13:52.498   16:57:45	-- bdev/bdev_raid.sh@226 -- # raid_pid=123580
00:13:52.498   16:57:45	-- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid
00:13:52.498  Process raid pid: 123580
00:13:52.498   16:57:45	-- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 123580'
00:13:52.498   16:57:45	-- bdev/bdev_raid.sh@228 -- # waitforlisten 123580 /var/tmp/spdk-raid.sock
00:13:52.498   16:57:45	-- common/autotest_common.sh@829 -- # '[' -z 123580 ']'
00:13:52.498   16:57:45	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock
00:13:52.498   16:57:45	-- common/autotest_common.sh@834 -- # local max_retries=100
00:13:52.498   16:57:45	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...'
00:13:52.498  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...
00:13:52.498   16:57:45	-- common/autotest_common.sh@838 -- # xtrace_disable
00:13:52.498   16:57:45	-- common/autotest_common.sh@10 -- # set +x
00:13:52.498  [2024-11-19 16:57:45.315902] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:13:52.498  [2024-11-19 16:57:45.316386] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:13:52.757  [2024-11-19 16:57:45.473497] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:52.757  [2024-11-19 16:57:45.521785] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:13:52.757  [2024-11-19 16:57:45.569262] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
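The SPDK/DPDK banner above comes from the freshly spawned bdev_svc stub application; the launch-and-wait sequence it corresponds to looks roughly like this, with paths as used in this run and waitforlisten assumed to be the autotest_common.sh helper already sourced by the test:

  # Start the stub app with the raid log flag and a dedicated RPC socket.
  /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc \
      -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &
  raid_pid=$!
  # waitforlisten polls until the UNIX socket accepts RPCs.
  waitforlisten "$raid_pid" /var/tmp/spdk-raid.sock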
00:13:53.695   16:57:46	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:13:53.695   16:57:46	-- common/autotest_common.sh@862 -- # return 0
00:13:53.695   16:57:46	-- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid
00:13:53.695  [2024-11-19 16:57:46.405191] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:13:53.695  [2024-11-19 16:57:46.405422] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:13:53.695  [2024-11-19 16:57:46.405504] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:13:53.695  [2024-11-19 16:57:46.405552] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:13:53.695   16:57:46	-- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2
00:13:53.695   16:57:46	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:13:53.695   16:57:46	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:13:53.695   16:57:46	-- bdev/bdev_raid.sh@119 -- # local raid_level=concat
00:13:53.695   16:57:46	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:13:53.695   16:57:46	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2
00:13:53.695   16:57:46	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:13:53.695   16:57:46	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:13:53.695   16:57:46	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:13:53.695   16:57:46	-- bdev/bdev_raid.sh@125 -- # local tmp
00:13:53.695    16:57:46	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:13:53.695    16:57:46	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:13:53.954   16:57:46	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:13:53.954    "name": "Existed_Raid",
00:13:53.954    "uuid": "00000000-0000-0000-0000-000000000000",
00:13:53.954    "strip_size_kb": 64,
00:13:53.954    "state": "configuring",
00:13:53.954    "raid_level": "concat",
00:13:53.954    "superblock": false,
00:13:53.954    "num_base_bdevs": 2,
00:13:53.954    "num_base_bdevs_discovered": 0,
00:13:53.954    "num_base_bdevs_operational": 2,
00:13:53.954    "base_bdevs_list": [
00:13:53.954      {
00:13:53.954        "name": "BaseBdev1",
00:13:53.954        "uuid": "00000000-0000-0000-0000-000000000000",
00:13:53.954        "is_configured": false,
00:13:53.954        "data_offset": 0,
00:13:53.954        "data_size": 0
00:13:53.954      },
00:13:53.954      {
00:13:53.954        "name": "BaseBdev2",
00:13:53.954        "uuid": "00000000-0000-0000-0000-000000000000",
00:13:53.954        "is_configured": false,
00:13:53.954        "data_offset": 0,
00:13:53.954        "data_size": 0
00:13:53.954      }
00:13:53.954    ]
00:13:53.954  }'
00:13:53.954   16:57:46	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:13:53.954   16:57:46	-- common/autotest_common.sh@10 -- # set +x
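Note the sequence being exercised: bdev_raid_create is issued before either base bdev exists, so the array is registered but parked in "configuring" with zero discovered bases. The call itself, exactly as used above:

  # Neither BaseBdev1 nor BaseBdev2 exists yet; the raid stays in "configuring"
  # until both are created and claimed.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
      bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid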
00:13:54.522   16:57:47	-- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid
00:13:54.522  [2024-11-19 16:57:47.293237] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:13:54.522  [2024-11-19 16:57:47.293431] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005480 name Existed_Raid, state configuring
00:13:54.522   16:57:47	-- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid
00:13:54.781  [2024-11-19 16:57:47.473290] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:13:54.781  [2024-11-19 16:57:47.473473] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:13:54.781  [2024-11-19 16:57:47.473548] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:13:54.781  [2024-11-19 16:57:47.473601] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:13:54.781   16:57:47	-- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1
00:13:55.039  [2024-11-19 16:57:47.654306] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:13:55.039  BaseBdev1
00:13:55.039   16:57:47	-- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1
00:13:55.039   16:57:47	-- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1
00:13:55.039   16:57:47	-- common/autotest_common.sh@898 -- # local bdev_timeout=
00:13:55.039   16:57:47	-- common/autotest_common.sh@899 -- # local i
00:13:55.039   16:57:47	-- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:13:55.039   16:57:47	-- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:13:55.039   16:57:47	-- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:13:55.299   16:57:47	-- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000
00:13:55.299  [
00:13:55.299    {
00:13:55.299      "name": "BaseBdev1",
00:13:55.299      "aliases": [
00:13:55.299        "9102a062-87b4-4408-a9a1-96a6aa060a13"
00:13:55.299      ],
00:13:55.299      "product_name": "Malloc disk",
00:13:55.299      "block_size": 512,
00:13:55.299      "num_blocks": 65536,
00:13:55.299      "uuid": "9102a062-87b4-4408-a9a1-96a6aa060a13",
00:13:55.299      "assigned_rate_limits": {
00:13:55.299        "rw_ios_per_sec": 0,
00:13:55.299        "rw_mbytes_per_sec": 0,
00:13:55.299        "r_mbytes_per_sec": 0,
00:13:55.299        "w_mbytes_per_sec": 0
00:13:55.299      },
00:13:55.299      "claimed": true,
00:13:55.299      "claim_type": "exclusive_write",
00:13:55.299      "zoned": false,
00:13:55.299      "supported_io_types": {
00:13:55.299        "read": true,
00:13:55.299        "write": true,
00:13:55.299        "unmap": true,
00:13:55.299        "write_zeroes": true,
00:13:55.299        "flush": true,
00:13:55.299        "reset": true,
00:13:55.299        "compare": false,
00:13:55.299        "compare_and_write": false,
00:13:55.299        "abort": true,
00:13:55.299        "nvme_admin": false,
00:13:55.299        "nvme_io": false
00:13:55.299      },
00:13:55.299      "memory_domains": [
00:13:55.299        {
00:13:55.299          "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:13:55.299          "dma_device_type": 2
00:13:55.299        }
00:13:55.299      ],
00:13:55.299      "driver_specific": {}
00:13:55.299    }
00:13:55.299  ]
00:13:55.299   16:57:48	-- common/autotest_common.sh@905 -- # return 0
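waitforbdev gates the test on the new malloc bdev becoming visible; the trace above (bdev_wait_for_examine followed by bdev_get_bdevs -b ... -t 2000) suggests roughly this helper, again a hedged reconstruction rather than the verbatim source:

  waitforbdev() {
      local bdev_name=$1 bdev_timeout=${2:-2000}   # timeout in milliseconds
      # Let registered examine callbacks (e.g. the raid superblock scan) finish first.
      /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
      # Then block until the named bdev shows up, or the timeout expires.
      /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
          bdev_get_bdevs -b "$bdev_name" -t "$bdev_timeout"
  }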
00:13:55.299   16:57:48	-- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2
00:13:55.299   16:57:48	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:13:55.299   16:57:48	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:13:55.299   16:57:48	-- bdev/bdev_raid.sh@119 -- # local raid_level=concat
00:13:55.299   16:57:48	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:13:55.299   16:57:48	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2
00:13:55.299   16:57:48	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:13:55.299   16:57:48	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:13:55.299   16:57:48	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:13:55.299   16:57:48	-- bdev/bdev_raid.sh@125 -- # local tmp
00:13:55.299    16:57:48	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:13:55.299    16:57:48	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:13:55.558   16:57:48	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:13:55.558    "name": "Existed_Raid",
00:13:55.558    "uuid": "00000000-0000-0000-0000-000000000000",
00:13:55.558    "strip_size_kb": 64,
00:13:55.558    "state": "configuring",
00:13:55.558    "raid_level": "concat",
00:13:55.558    "superblock": false,
00:13:55.558    "num_base_bdevs": 2,
00:13:55.558    "num_base_bdevs_discovered": 1,
00:13:55.558    "num_base_bdevs_operational": 2,
00:13:55.558    "base_bdevs_list": [
00:13:55.558      {
00:13:55.558        "name": "BaseBdev1",
00:13:55.558        "uuid": "9102a062-87b4-4408-a9a1-96a6aa060a13",
00:13:55.558        "is_configured": true,
00:13:55.558        "data_offset": 0,
00:13:55.558        "data_size": 65536
00:13:55.558      },
00:13:55.558      {
00:13:55.558        "name": "BaseBdev2",
00:13:55.558        "uuid": "00000000-0000-0000-0000-000000000000",
00:13:55.558        "is_configured": false,
00:13:55.558        "data_offset": 0,
00:13:55.558        "data_size": 0
00:13:55.558      }
00:13:55.558    ]
00:13:55.558  }'
00:13:55.558   16:57:48	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:13:55.558   16:57:48	-- common/autotest_common.sh@10 -- # set +x
00:13:56.125   16:57:48	-- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid
00:13:56.384  [2024-11-19 16:57:49.070551] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:13:56.384  [2024-11-19 16:57:49.070796] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005780 name Existed_Raid, state configuring
00:13:56.384   16:57:49	-- bdev/bdev_raid.sh@244 -- # '[' false = true ']'
00:13:56.384   16:57:49	-- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid
00:13:56.642  [2024-11-19 16:57:49.322675] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:13:56.642  [2024-11-19 16:57:49.324953] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:13:56.642  [2024-11-19 16:57:49.325127] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:13:56.642   16:57:49	-- bdev/bdev_raid.sh@254 -- # (( i = 1 ))
00:13:56.642   16:57:49	-- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs ))
00:13:56.642   16:57:49	-- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2
00:13:56.642   16:57:49	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:13:56.642   16:57:49	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:13:56.642   16:57:49	-- bdev/bdev_raid.sh@119 -- # local raid_level=concat
00:13:56.642   16:57:49	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:13:56.642   16:57:49	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2
00:13:56.642   16:57:49	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:13:56.642   16:57:49	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:13:56.642   16:57:49	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:13:56.642   16:57:49	-- bdev/bdev_raid.sh@125 -- # local tmp
00:13:56.642    16:57:49	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:13:56.642    16:57:49	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:13:56.900   16:57:49	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:13:56.900    "name": "Existed_Raid",
00:13:56.900    "uuid": "00000000-0000-0000-0000-000000000000",
00:13:56.900    "strip_size_kb": 64,
00:13:56.900    "state": "configuring",
00:13:56.900    "raid_level": "concat",
00:13:56.900    "superblock": false,
00:13:56.900    "num_base_bdevs": 2,
00:13:56.900    "num_base_bdevs_discovered": 1,
00:13:56.900    "num_base_bdevs_operational": 2,
00:13:56.900    "base_bdevs_list": [
00:13:56.900      {
00:13:56.900        "name": "BaseBdev1",
00:13:56.900        "uuid": "9102a062-87b4-4408-a9a1-96a6aa060a13",
00:13:56.900        "is_configured": true,
00:13:56.900        "data_offset": 0,
00:13:56.900        "data_size": 65536
00:13:56.900      },
00:13:56.900      {
00:13:56.900        "name": "BaseBdev2",
00:13:56.900        "uuid": "00000000-0000-0000-0000-000000000000",
00:13:56.900        "is_configured": false,
00:13:56.900        "data_offset": 0,
00:13:56.900        "data_size": 0
00:13:56.900      }
00:13:56.900    ]
00:13:56.900  }'
00:13:56.900   16:57:49	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:13:56.900   16:57:49	-- common/autotest_common.sh@10 -- # set +x
00:13:57.158   16:57:50	-- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2
00:13:57.417  [2024-11-19 16:57:50.194470] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:13:57.417  [2024-11-19 16:57:50.194812] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006080
00:13:57.417  [2024-11-19 16:57:50.194922] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512
00:13:57.417  [2024-11-19 16:57:50.195340] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000001f80
00:13:57.417  [2024-11-19 16:57:50.196147] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006080
00:13:57.417  [2024-11-19 16:57:50.196322] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006080
00:13:57.417  [2024-11-19 16:57:50.196924] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:13:57.417  BaseBdev2
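Creating the second base bdev is what flips the array online. The malloc sizing arguments decode as size-in-MiB and block size, which matches the JSON that follows: 32 MiB / 512 B = 65536 blocks per base, and the two-member concat reports blockcnt 131072 above.

  # 32 = total size in MiB, 512 = block size in bytes -> num_blocks 65536.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
      bdev_malloc_create 32 512 -b BaseBdev2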
00:13:57.417   16:57:50	-- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2
00:13:57.417   16:57:50	-- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2
00:13:57.417   16:57:50	-- common/autotest_common.sh@898 -- # local bdev_timeout=
00:13:57.417   16:57:50	-- common/autotest_common.sh@899 -- # local i
00:13:57.417   16:57:50	-- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:13:57.417   16:57:50	-- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:13:57.417   16:57:50	-- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:13:57.676   16:57:50	-- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000
00:13:57.935  [
00:13:57.935    {
00:13:57.935      "name": "BaseBdev2",
00:13:57.935      "aliases": [
00:13:57.935        "300a0932-ea0b-41ca-862b-757d26e63d7b"
00:13:57.935      ],
00:13:57.935      "product_name": "Malloc disk",
00:13:57.935      "block_size": 512,
00:13:57.935      "num_blocks": 65536,
00:13:57.935      "uuid": "300a0932-ea0b-41ca-862b-757d26e63d7b",
00:13:57.935      "assigned_rate_limits": {
00:13:57.935        "rw_ios_per_sec": 0,
00:13:57.935        "rw_mbytes_per_sec": 0,
00:13:57.935        "r_mbytes_per_sec": 0,
00:13:57.935        "w_mbytes_per_sec": 0
00:13:57.935      },
00:13:57.935      "claimed": true,
00:13:57.935      "claim_type": "exclusive_write",
00:13:57.935      "zoned": false,
00:13:57.935      "supported_io_types": {
00:13:57.935        "read": true,
00:13:57.935        "write": true,
00:13:57.935        "unmap": true,
00:13:57.935        "write_zeroes": true,
00:13:57.935        "flush": true,
00:13:57.935        "reset": true,
00:13:57.935        "compare": false,
00:13:57.935        "compare_and_write": false,
00:13:57.935        "abort": true,
00:13:57.935        "nvme_admin": false,
00:13:57.935        "nvme_io": false
00:13:57.935      },
00:13:57.935      "memory_domains": [
00:13:57.935        {
00:13:57.935          "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:13:57.935          "dma_device_type": 2
00:13:57.935        }
00:13:57.935      ],
00:13:57.935      "driver_specific": {}
00:13:57.935    }
00:13:57.935  ]
00:13:57.935   16:57:50	-- common/autotest_common.sh@905 -- # return 0
00:13:57.935   16:57:50	-- bdev/bdev_raid.sh@254 -- # (( i++ ))
00:13:57.935   16:57:50	-- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs ))
00:13:57.935   16:57:50	-- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 2
00:13:57.935   16:57:50	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:13:57.935   16:57:50	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:13:57.935   16:57:50	-- bdev/bdev_raid.sh@119 -- # local raid_level=concat
00:13:57.935   16:57:50	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:13:57.935   16:57:50	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2
00:13:57.935   16:57:50	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:13:57.935   16:57:50	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:13:57.935   16:57:50	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:13:57.935   16:57:50	-- bdev/bdev_raid.sh@125 -- # local tmp
00:13:57.935    16:57:50	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:13:57.935    16:57:50	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:13:58.195   16:57:50	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:13:58.195    "name": "Existed_Raid",
00:13:58.195    "uuid": "00448c69-a87d-4353-b1ec-03cf229a9f16",
00:13:58.195    "strip_size_kb": 64,
00:13:58.195    "state": "online",
00:13:58.195    "raid_level": "concat",
00:13:58.195    "superblock": false,
00:13:58.195    "num_base_bdevs": 2,
00:13:58.195    "num_base_bdevs_discovered": 2,
00:13:58.195    "num_base_bdevs_operational": 2,
00:13:58.195    "base_bdevs_list": [
00:13:58.195      {
00:13:58.195        "name": "BaseBdev1",
00:13:58.195        "uuid": "9102a062-87b4-4408-a9a1-96a6aa060a13",
00:13:58.195        "is_configured": true,
00:13:58.195        "data_offset": 0,
00:13:58.195        "data_size": 65536
00:13:58.195      },
00:13:58.195      {
00:13:58.195        "name": "BaseBdev2",
00:13:58.195        "uuid": "300a0932-ea0b-41ca-862b-757d26e63d7b",
00:13:58.195        "is_configured": true,
00:13:58.195        "data_offset": 0,
00:13:58.195        "data_size": 65536
00:13:58.195      }
00:13:58.195    ]
00:13:58.195  }'
00:13:58.195   16:57:50	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:13:58.195   16:57:50	-- common/autotest_common.sh@10 -- # set +x
00:13:58.763   16:57:51	-- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1
00:13:59.023  [2024-11-19 16:57:51.646845] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:13:59.023  [2024-11-19 16:57:51.647064] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:13:59.023  [2024-11-19 16:57:51.647279] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:13:59.023   16:57:51	-- bdev/bdev_raid.sh@263 -- # local expected_state
00:13:59.023   16:57:51	-- bdev/bdev_raid.sh@264 -- # has_redundancy concat
00:13:59.023   16:57:51	-- bdev/bdev_raid.sh@195 -- # case $1 in
00:13:59.023   16:57:51	-- bdev/bdev_raid.sh@197 -- # return 1
00:13:59.023   16:57:51	-- bdev/bdev_raid.sh@265 -- # expected_state=offline
00:13:59.023   16:57:51	-- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1
00:13:59.023   16:57:51	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:13:59.023   16:57:51	-- bdev/bdev_raid.sh@118 -- # local expected_state=offline
00:13:59.023   16:57:51	-- bdev/bdev_raid.sh@119 -- # local raid_level=concat
00:13:59.023   16:57:51	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:13:59.023   16:57:51	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1
00:13:59.023   16:57:51	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:13:59.023   16:57:51	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:13:59.023   16:57:51	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:13:59.023   16:57:51	-- bdev/bdev_raid.sh@125 -- # local tmp
00:13:59.023    16:57:51	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:13:59.023    16:57:51	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:13:59.282   16:57:51	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:13:59.282    "name": "Existed_Raid",
00:13:59.282    "uuid": "00448c69-a87d-4353-b1ec-03cf229a9f16",
00:13:59.282    "strip_size_kb": 64,
00:13:59.282    "state": "offline",
00:13:59.282    "raid_level": "concat",
00:13:59.282    "superblock": false,
00:13:59.282    "num_base_bdevs": 2,
00:13:59.282    "num_base_bdevs_discovered": 1,
00:13:59.282    "num_base_bdevs_operational": 1,
00:13:59.282    "base_bdevs_list": [
00:13:59.282      {
00:13:59.282        "name": null,
00:13:59.282        "uuid": "00000000-0000-0000-0000-000000000000",
00:13:59.282        "is_configured": false,
00:13:59.282        "data_offset": 0,
00:13:59.282        "data_size": 65536
00:13:59.282      },
00:13:59.282      {
00:13:59.282        "name": "BaseBdev2",
00:13:59.282        "uuid": "300a0932-ea0b-41ca-862b-757d26e63d7b",
00:13:59.282        "is_configured": true,
00:13:59.282        "data_offset": 0,
00:13:59.282        "data_size": 65536
00:13:59.282      }
00:13:59.282    ]
00:13:59.282  }'
00:13:59.282   16:57:51	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:13:59.282   16:57:51	-- common/autotest_common.sh@10 -- # set +x
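The deletion of BaseBdev1 above is the core of this test: concat carries no redundancy, so losing one base takes the whole array from online to offline rather than leaving it degraded. The expected-state selection traced at bdev_raid.sh@263-@265 amounts to roughly the following sketch; the online branch for redundant levels is an assumption, only the offline branch is shown in this run:

  if has_redundancy "$raid_level"; then
      expected_state=online     # assumption: redundant levels keep the array up
  else
      expected_state=offline    # seen above for concat
  fi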
00:13:59.851   16:57:52	-- bdev/bdev_raid.sh@273 -- # (( i = 1 ))
00:13:59.851   16:57:52	-- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs ))
00:13:59.851    16:57:52	-- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:13:59.851    16:57:52	-- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]'
00:13:59.851   16:57:52	-- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid
00:13:59.851   16:57:52	-- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:13:59.851   16:57:52	-- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2
00:14:00.110  [2024-11-19 16:57:52.887508] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:14:00.110  [2024-11-19 16:57:52.887749] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006080 name Existed_Raid, state offline
00:14:00.110   16:57:52	-- bdev/bdev_raid.sh@273 -- # (( i++ ))
00:14:00.110   16:57:52	-- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs ))
00:14:00.110    16:57:52	-- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:14:00.110    16:57:52	-- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)'
00:14:00.370   16:57:53	-- bdev/bdev_raid.sh@281 -- # raid_bdev=
00:14:00.370   16:57:53	-- bdev/bdev_raid.sh@282 -- # '[' -n '' ']'
00:14:00.370   16:57:53	-- bdev/bdev_raid.sh@287 -- # killprocess 123580
00:14:00.370   16:57:53	-- common/autotest_common.sh@936 -- # '[' -z 123580 ']'
00:14:00.370   16:57:53	-- common/autotest_common.sh@940 -- # kill -0 123580
00:14:00.370    16:57:53	-- common/autotest_common.sh@941 -- # uname
00:14:00.370   16:57:53	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:14:00.370    16:57:53	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 123580
00:14:00.370   16:57:53	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:14:00.370   16:57:53	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:14:00.370   16:57:53	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 123580'
00:14:00.370  killing process with pid 123580
00:14:00.370   16:57:53	-- common/autotest_common.sh@955 -- # kill 123580
00:14:00.370  [2024-11-19 16:57:53.114823] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:14:00.370   16:57:53	-- common/autotest_common.sh@960 -- # wait 123580
00:14:00.370  [2024-11-19 16:57:53.115065] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:14:00.629   16:57:53	-- bdev/bdev_raid.sh@289 -- # return 0
00:14:00.629  
00:14:00.629  real	0m8.113s
00:14:00.629  user	0m14.241s
00:14:00.629  sys	0m1.458s
00:14:00.629   16:57:53	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:14:00.629   16:57:53	-- common/autotest_common.sh@10 -- # set +x
00:14:00.629  ************************************
00:14:00.629  END TEST raid_state_function_test
00:14:00.629  ************************************
00:14:00.629   16:57:53	-- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true
00:14:00.629   16:57:53	-- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']'
00:14:00.629   16:57:53	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:14:00.629   16:57:53	-- common/autotest_common.sh@10 -- # set +x
00:14:00.629  ************************************
00:14:00.629  START TEST raid_state_function_test_sb
00:14:00.629  ************************************
00:14:00.629   16:57:53	-- common/autotest_common.sh@1114 -- # raid_state_function_test concat 2 true
00:14:00.629   16:57:53	-- bdev/bdev_raid.sh@202 -- # local raid_level=concat
00:14:00.629   16:57:53	-- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2
00:14:00.629   16:57:53	-- bdev/bdev_raid.sh@204 -- # local superblock=true
00:14:00.629   16:57:53	-- bdev/bdev_raid.sh@205 -- # local raid_bdev
00:14:00.629    16:57:53	-- bdev/bdev_raid.sh@206 -- # (( i = 1 ))
00:14:00.629    16:57:53	-- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:14:00.629    16:57:53	-- bdev/bdev_raid.sh@206 -- # echo BaseBdev1
00:14:00.629    16:57:53	-- bdev/bdev_raid.sh@206 -- # (( i++ ))
00:14:00.629    16:57:53	-- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:14:00.629    16:57:53	-- bdev/bdev_raid.sh@206 -- # echo BaseBdev2
00:14:00.629    16:57:53	-- bdev/bdev_raid.sh@206 -- # (( i++ ))
00:14:00.629    16:57:53	-- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:14:00.629   16:57:53	-- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2')
00:14:00.629   16:57:53	-- bdev/bdev_raid.sh@206 -- # local base_bdevs
00:14:00.629   16:57:53	-- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid
00:14:00.629   16:57:53	-- bdev/bdev_raid.sh@208 -- # local strip_size
00:14:00.629   16:57:53	-- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg
00:14:00.629   16:57:53	-- bdev/bdev_raid.sh@210 -- # local superblock_create_arg
00:14:00.629   16:57:53	-- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']'
00:14:00.629   16:57:53	-- bdev/bdev_raid.sh@213 -- # strip_size=64
00:14:00.629   16:57:53	-- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64'
00:14:00.629   16:57:53	-- bdev/bdev_raid.sh@219 -- # '[' true = true ']'
00:14:00.629   16:57:53	-- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s
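The only knob that differs from the previous test is the superblock flag: with superblock=true the create calls below gain -s, which has the raid module persist metadata on every base bdev so the array can later be re-assembled from them. In this run that turns into:

  # -s requests an on-disk raid superblock on each base bdev.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
      bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid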
00:14:00.629   16:57:53	-- bdev/bdev_raid.sh@226 -- # raid_pid=123877
00:14:00.629   16:57:53	-- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 123877'
00:14:00.629  Process raid pid: 123877
00:14:00.629   16:57:53	-- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid
00:14:00.629   16:57:53	-- bdev/bdev_raid.sh@228 -- # waitforlisten 123877 /var/tmp/spdk-raid.sock
00:14:00.629   16:57:53	-- common/autotest_common.sh@829 -- # '[' -z 123877 ']'
00:14:00.629   16:57:53	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock
00:14:00.629   16:57:53	-- common/autotest_common.sh@834 -- # local max_retries=100
00:14:00.629   16:57:53	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...'
00:14:00.629  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...
00:14:00.629   16:57:53	-- common/autotest_common.sh@838 -- # xtrace_disable
00:14:00.629   16:57:53	-- common/autotest_common.sh@10 -- # set +x
00:14:00.888  [2024-11-19 16:57:53.512717] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:14:00.888  [2024-11-19 16:57:53.513178] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:14:00.888  [2024-11-19 16:57:53.666421] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:14:00.888  [2024-11-19 16:57:53.708543] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:14:01.146  [2024-11-19 16:57:53.750125] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:14:01.714   16:57:54	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:14:01.714   16:57:54	-- common/autotest_common.sh@862 -- # return 0
00:14:01.714   16:57:54	-- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid
00:14:01.714  [2024-11-19 16:57:54.539597] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:14:01.715  [2024-11-19 16:57:54.539864] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:14:01.715  [2024-11-19 16:57:54.539945] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:14:01.715  [2024-11-19 16:57:54.539996] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:14:01.715   16:57:54	-- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2
00:14:01.715   16:57:54	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:14:01.715   16:57:54	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:14:01.715   16:57:54	-- bdev/bdev_raid.sh@119 -- # local raid_level=concat
00:14:01.715   16:57:54	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:14:01.715   16:57:54	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2
00:14:01.715   16:57:54	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:14:01.715   16:57:54	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:14:01.715   16:57:54	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:14:01.715   16:57:54	-- bdev/bdev_raid.sh@125 -- # local tmp
00:14:01.715    16:57:54	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:14:01.715    16:57:54	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:14:01.974   16:57:54	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:14:01.974    "name": "Existed_Raid",
00:14:01.974    "uuid": "131e9630-ef77-4eee-996a-f3312653f7e3",
00:14:01.974    "strip_size_kb": 64,
00:14:01.974    "state": "configuring",
00:14:01.974    "raid_level": "concat",
00:14:01.974    "superblock": true,
00:14:01.974    "num_base_bdevs": 2,
00:14:01.974    "num_base_bdevs_discovered": 0,
00:14:01.974    "num_base_bdevs_operational": 2,
00:14:01.974    "base_bdevs_list": [
00:14:01.974      {
00:14:01.974        "name": "BaseBdev1",
00:14:01.974        "uuid": "00000000-0000-0000-0000-000000000000",
00:14:01.974        "is_configured": false,
00:14:01.974        "data_offset": 0,
00:14:01.974        "data_size": 0
00:14:01.974      },
00:14:01.974      {
00:14:01.974        "name": "BaseBdev2",
00:14:01.974        "uuid": "00000000-0000-0000-0000-000000000000",
00:14:01.974        "is_configured": false,
00:14:01.974        "data_offset": 0,
00:14:01.974        "data_size": 0
00:14:01.974      }
00:14:01.974    ]
00:14:01.974  }'
00:14:01.974   16:57:54	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:14:01.974   16:57:54	-- common/autotest_common.sh@10 -- # set +x
00:14:02.541   16:57:55	-- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid
00:14:02.800  [2024-11-19 16:57:55.511598] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:14:02.800  [2024-11-19 16:57:55.511774] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005480 name Existed_Raid, state configuring
00:14:02.800   16:57:55	-- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid
00:14:03.059  [2024-11-19 16:57:55.691675] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:14:03.059  [2024-11-19 16:57:55.691882] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:14:03.059  [2024-11-19 16:57:55.691964] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:14:03.059  [2024-11-19 16:57:55.692018] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:14:03.059   16:57:55	-- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1
00:14:03.059  [2024-11-19 16:57:55.876711] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:14:03.059  BaseBdev1
00:14:03.059   16:57:55	-- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1
00:14:03.059   16:57:55	-- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1
00:14:03.059   16:57:55	-- common/autotest_common.sh@898 -- # local bdev_timeout=
00:14:03.059   16:57:55	-- common/autotest_common.sh@899 -- # local i
00:14:03.059   16:57:55	-- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:14:03.059   16:57:55	-- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:14:03.059   16:57:55	-- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:14:03.318   16:57:56	-- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000
00:14:03.578  [
00:14:03.578    {
00:14:03.578      "name": "BaseBdev1",
00:14:03.578      "aliases": [
00:14:03.578        "106602f6-178d-45f1-a3f8-a83cc2d68488"
00:14:03.578      ],
00:14:03.578      "product_name": "Malloc disk",
00:14:03.578      "block_size": 512,
00:14:03.578      "num_blocks": 65536,
00:14:03.578      "uuid": "106602f6-178d-45f1-a3f8-a83cc2d68488",
00:14:03.578      "assigned_rate_limits": {
00:14:03.578        "rw_ios_per_sec": 0,
00:14:03.578        "rw_mbytes_per_sec": 0,
00:14:03.578        "r_mbytes_per_sec": 0,
00:14:03.578        "w_mbytes_per_sec": 0
00:14:03.578      },
00:14:03.578      "claimed": true,
00:14:03.578      "claim_type": "exclusive_write",
00:14:03.578      "zoned": false,
00:14:03.578      "supported_io_types": {
00:14:03.578        "read": true,
00:14:03.578        "write": true,
00:14:03.578        "unmap": true,
00:14:03.578        "write_zeroes": true,
00:14:03.578        "flush": true,
00:14:03.578        "reset": true,
00:14:03.578        "compare": false,
00:14:03.578        "compare_and_write": false,
00:14:03.578        "abort": true,
00:14:03.578        "nvme_admin": false,
00:14:03.578        "nvme_io": false
00:14:03.578      },
00:14:03.578      "memory_domains": [
00:14:03.578        {
00:14:03.578          "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:14:03.578          "dma_device_type": 2
00:14:03.578        }
00:14:03.578      ],
00:14:03.578      "driver_specific": {}
00:14:03.578    }
00:14:03.578  ]
00:14:03.578   16:57:56	-- common/autotest_common.sh@905 -- # return 0
00:14:03.578   16:57:56	-- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2
00:14:03.578   16:57:56	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:14:03.578   16:57:56	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:14:03.578   16:57:56	-- bdev/bdev_raid.sh@119 -- # local raid_level=concat
00:14:03.578   16:57:56	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:14:03.578   16:57:56	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2
00:14:03.578   16:57:56	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:14:03.578   16:57:56	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:14:03.578   16:57:56	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:14:03.578   16:57:56	-- bdev/bdev_raid.sh@125 -- # local tmp
00:14:03.578    16:57:56	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:14:03.578    16:57:56	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:14:03.837   16:57:56	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:14:03.837    "name": "Existed_Raid",
00:14:03.837    "uuid": "c4358c4a-014a-48d0-8b23-02290a233ca3",
00:14:03.837    "strip_size_kb": 64,
00:14:03.837    "state": "configuring",
00:14:03.837    "raid_level": "concat",
00:14:03.837    "superblock": true,
00:14:03.837    "num_base_bdevs": 2,
00:14:03.837    "num_base_bdevs_discovered": 1,
00:14:03.837    "num_base_bdevs_operational": 2,
00:14:03.837    "base_bdevs_list": [
00:14:03.837      {
00:14:03.837        "name": "BaseBdev1",
00:14:03.837        "uuid": "106602f6-178d-45f1-a3f8-a83cc2d68488",
00:14:03.837        "is_configured": true,
00:14:03.837        "data_offset": 2048,
00:14:03.837        "data_size": 63488
00:14:03.837      },
00:14:03.837      {
00:14:03.837        "name": "BaseBdev2",
00:14:03.837        "uuid": "00000000-0000-0000-0000-000000000000",
00:14:03.837        "is_configured": false,
00:14:03.837        "data_offset": 0,
00:14:03.837        "data_size": 0
00:14:03.837      }
00:14:03.837    ]
00:14:03.837  }'
00:14:03.837   16:57:56	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:14:03.837   16:57:56	-- common/autotest_common.sh@10 -- # set +x
00:14:04.405   16:57:57	-- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid
00:14:04.664  [2024-11-19 16:57:57.285042] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:14:04.664  [2024-11-19 16:57:57.285230] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005780 name Existed_Raid, state configuring
00:14:04.664   16:57:57	-- bdev/bdev_raid.sh@244 -- # '[' true = true ']'
00:14:04.664   16:57:57	-- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1
00:14:04.664   16:57:57	-- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1
00:14:04.923  BaseBdev1
00:14:04.923   16:57:57	-- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1
00:14:04.923   16:57:57	-- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1
00:14:04.923   16:57:57	-- common/autotest_common.sh@898 -- # local bdev_timeout=
00:14:04.923   16:57:57	-- common/autotest_common.sh@899 -- # local i
00:14:04.923   16:57:57	-- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:14:04.923   16:57:57	-- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:14:04.923   16:57:57	-- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:14:05.181   16:57:57	-- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000
00:14:05.440  [
00:14:05.440    {
00:14:05.440      "name": "BaseBdev1",
00:14:05.440      "aliases": [
00:14:05.440        "93548ca4-5143-417a-a7ca-70ee90e33ef3"
00:14:05.440      ],
00:14:05.440      "product_name": "Malloc disk",
00:14:05.440      "block_size": 512,
00:14:05.440      "num_blocks": 65536,
00:14:05.440      "uuid": "93548ca4-5143-417a-a7ca-70ee90e33ef3",
00:14:05.440      "assigned_rate_limits": {
00:14:05.440        "rw_ios_per_sec": 0,
00:14:05.440        "rw_mbytes_per_sec": 0,
00:14:05.440        "r_mbytes_per_sec": 0,
00:14:05.440        "w_mbytes_per_sec": 0
00:14:05.440      },
00:14:05.440      "claimed": false,
00:14:05.440      "zoned": false,
00:14:05.440      "supported_io_types": {
00:14:05.440        "read": true,
00:14:05.440        "write": true,
00:14:05.440        "unmap": true,
00:14:05.440        "write_zeroes": true,
00:14:05.440        "flush": true,
00:14:05.440        "reset": true,
00:14:05.440        "compare": false,
00:14:05.440        "compare_and_write": false,
00:14:05.440        "abort": true,
00:14:05.440        "nvme_admin": false,
00:14:05.440        "nvme_io": false
00:14:05.440      },
00:14:05.440      "memory_domains": [
00:14:05.440        {
00:14:05.440          "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:14:05.440          "dma_device_type": 2
00:14:05.440        }
00:14:05.440      ],
00:14:05.440      "driver_specific": {}
00:14:05.440    }
00:14:05.440  ]
00:14:05.440   16:57:58	-- common/autotest_common.sh@905 -- # return 0
00:14:05.440   16:57:58	-- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid
00:14:05.700  [2024-11-19 16:57:58.321684] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:14:05.700  [2024-11-19 16:57:58.323898] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:14:05.700  [2024-11-19 16:57:58.324069] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:14:05.700   16:57:58	-- bdev/bdev_raid.sh@254 -- # (( i = 1 ))
00:14:05.700   16:57:58	-- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs ))
00:14:05.700   16:57:58	-- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2
00:14:05.700   16:57:58	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:14:05.700   16:57:58	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:14:05.700   16:57:58	-- bdev/bdev_raid.sh@119 -- # local raid_level=concat
00:14:05.700   16:57:58	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:14:05.700   16:57:58	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2
00:14:05.700   16:57:58	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:14:05.700   16:57:58	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:14:05.700   16:57:58	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:14:05.700   16:57:58	-- bdev/bdev_raid.sh@125 -- # local tmp
00:14:05.700    16:57:58	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:14:05.700    16:57:58	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:14:05.700   16:57:58	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:14:05.700    "name": "Existed_Raid",
00:14:05.700    "uuid": "38a790b7-ffcc-402a-8808-6646bea3d397",
00:14:05.700    "strip_size_kb": 64,
00:14:05.700    "state": "configuring",
00:14:05.700    "raid_level": "concat",
00:14:05.700    "superblock": true,
00:14:05.700    "num_base_bdevs": 2,
00:14:05.700    "num_base_bdevs_discovered": 1,
00:14:05.700    "num_base_bdevs_operational": 2,
00:14:05.700    "base_bdevs_list": [
00:14:05.700      {
00:14:05.700        "name": "BaseBdev1",
00:14:05.700        "uuid": "93548ca4-5143-417a-a7ca-70ee90e33ef3",
00:14:05.700        "is_configured": true,
00:14:05.700        "data_offset": 2048,
00:14:05.700        "data_size": 63488
00:14:05.700      },
00:14:05.700      {
00:14:05.700        "name": "BaseBdev2",
00:14:05.700        "uuid": "00000000-0000-0000-0000-000000000000",
00:14:05.700        "is_configured": false,
00:14:05.700        "data_offset": 0,
00:14:05.700        "data_size": 0
00:14:05.700      }
00:14:05.700    ]
00:14:05.700  }'
00:14:05.700   16:57:58	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:14:05.700   16:57:58	-- common/autotest_common.sh@10 -- # set +x
00:14:06.268   16:57:59	-- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2
00:14:06.526  [2024-11-19 16:57:59.345980] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:14:06.526  [2024-11-19 16:57:59.346430] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006680
00:14:06.526  [2024-11-19 16:57:59.346558] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512
00:14:06.526  [2024-11-19 16:57:59.346749] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002050
00:14:06.526  [2024-11-19 16:57:59.347335] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006680
00:14:06.526  [2024-11-19 16:57:59.347382] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006680
00:14:06.526  BaseBdev2
00:14:06.526  [2024-11-19 16:57:59.347627] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:14:06.526   16:57:59	-- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2
00:14:06.526   16:57:59	-- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2
00:14:06.526   16:57:59	-- common/autotest_common.sh@898 -- # local bdev_timeout=
00:14:06.526   16:57:59	-- common/autotest_common.sh@899 -- # local i
00:14:06.526   16:57:59	-- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:14:06.526   16:57:59	-- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:14:06.526   16:57:59	-- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:14:06.784   16:57:59	-- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000
00:14:07.043  [
00:14:07.043    {
00:14:07.043      "name": "BaseBdev2",
00:14:07.043      "aliases": [
00:14:07.043        "45cc08f3-2c5e-4dbd-b1a0-6f797921ca01"
00:14:07.043      ],
00:14:07.043      "product_name": "Malloc disk",
00:14:07.043      "block_size": 512,
00:14:07.043      "num_blocks": 65536,
00:14:07.044      "uuid": "45cc08f3-2c5e-4dbd-b1a0-6f797921ca01",
00:14:07.044      "assigned_rate_limits": {
00:14:07.044        "rw_ios_per_sec": 0,
00:14:07.044        "rw_mbytes_per_sec": 0,
00:14:07.044        "r_mbytes_per_sec": 0,
00:14:07.044        "w_mbytes_per_sec": 0
00:14:07.044      },
00:14:07.044      "claimed": true,
00:14:07.044      "claim_type": "exclusive_write",
00:14:07.044      "zoned": false,
00:14:07.044      "supported_io_types": {
00:14:07.044        "read": true,
00:14:07.044        "write": true,
00:14:07.044        "unmap": true,
00:14:07.044        "write_zeroes": true,
00:14:07.044        "flush": true,
00:14:07.044        "reset": true,
00:14:07.044        "compare": false,
00:14:07.044        "compare_and_write": false,
00:14:07.044        "abort": true,
00:14:07.044        "nvme_admin": false,
00:14:07.044        "nvme_io": false
00:14:07.044      },
00:14:07.044      "memory_domains": [
00:14:07.044        {
00:14:07.044          "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:14:07.044          "dma_device_type": 2
00:14:07.044        }
00:14:07.044      ],
00:14:07.044      "driver_specific": {}
00:14:07.044    }
00:14:07.044  ]
00:14:07.044   16:57:59	-- common/autotest_common.sh@905 -- # return 0
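(waitforbdev, traced above, is the stock autotest helper for synchronising on bdev creation: it first drains pending examine callbacks with bdev_wait_for_examine, then asks for the bdev by name with a timeout — 2000 ms by default, as seen in the -t argument. A sketch of the same two calls:

    # Block until examine has settled, then poll up to 2 s for the new bdev.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    $rpc -s $sock bdev_wait_for_examine
    $rpc -s $sock bdev_get_bdevs -b BaseBdev2 -t 2000
)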
00:14:07.044   16:57:59	-- bdev/bdev_raid.sh@254 -- # (( i++ ))
00:14:07.044   16:57:59	-- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs ))
00:14:07.044   16:57:59	-- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 2
00:14:07.044   16:57:59	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:14:07.044   16:57:59	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:14:07.044   16:57:59	-- bdev/bdev_raid.sh@119 -- # local raid_level=concat
00:14:07.044   16:57:59	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:14:07.044   16:57:59	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2
00:14:07.044   16:57:59	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:14:07.044   16:57:59	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:14:07.044   16:57:59	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:14:07.044   16:57:59	-- bdev/bdev_raid.sh@125 -- # local tmp
00:14:07.044    16:57:59	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:14:07.044    16:57:59	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:14:07.303   16:58:00	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:14:07.303    "name": "Existed_Raid",
00:14:07.303    "uuid": "38a790b7-ffcc-402a-8808-6646bea3d397",
00:14:07.303    "strip_size_kb": 64,
00:14:07.303    "state": "online",
00:14:07.303    "raid_level": "concat",
00:14:07.303    "superblock": true,
00:14:07.303    "num_base_bdevs": 2,
00:14:07.303    "num_base_bdevs_discovered": 2,
00:14:07.303    "num_base_bdevs_operational": 2,
00:14:07.303    "base_bdevs_list": [
00:14:07.303      {
00:14:07.303        "name": "BaseBdev1",
00:14:07.303        "uuid": "93548ca4-5143-417a-a7ca-70ee90e33ef3",
00:14:07.303        "is_configured": true,
00:14:07.303        "data_offset": 2048,
00:14:07.303        "data_size": 63488
00:14:07.303      },
00:14:07.303      {
00:14:07.303        "name": "BaseBdev2",
00:14:07.303        "uuid": "45cc08f3-2c5e-4dbd-b1a0-6f797921ca01",
00:14:07.303        "is_configured": true,
00:14:07.303        "data_offset": 2048,
00:14:07.303        "data_size": 63488
00:14:07.303      }
00:14:07.303    ]
00:14:07.303  }'
00:14:07.303   16:58:00	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:14:07.303   16:58:00	-- common/autotest_common.sh@10 -- # set +x
00:14:07.871   16:58:00	-- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1
00:14:08.131  [2024-11-19 16:58:00.830389] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:14:08.131  [2024-11-19 16:58:00.830550] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:14:08.131  [2024-11-19 16:58:00.830782] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:14:08.131   16:58:00	-- bdev/bdev_raid.sh@263 -- # local expected_state
00:14:08.131   16:58:00	-- bdev/bdev_raid.sh@264 -- # has_redundancy concat
00:14:08.131   16:58:00	-- bdev/bdev_raid.sh@195 -- # case $1 in
00:14:08.131   16:58:00	-- bdev/bdev_raid.sh@197 -- # return 1
00:14:08.131   16:58:00	-- bdev/bdev_raid.sh@265 -- # expected_state=offline
00:14:08.131   16:58:00	-- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1
00:14:08.131   16:58:00	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:14:08.131   16:58:00	-- bdev/bdev_raid.sh@118 -- # local expected_state=offline
00:14:08.131   16:58:00	-- bdev/bdev_raid.sh@119 -- # local raid_level=concat
00:14:08.131   16:58:00	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:14:08.131   16:58:00	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1
00:14:08.131   16:58:00	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:14:08.131   16:58:00	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:14:08.131   16:58:00	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:14:08.131   16:58:00	-- bdev/bdev_raid.sh@125 -- # local tmp
00:14:08.131    16:58:00	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:14:08.131    16:58:00	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:14:08.389   16:58:01	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:14:08.389    "name": "Existed_Raid",
00:14:08.389    "uuid": "38a790b7-ffcc-402a-8808-6646bea3d397",
00:14:08.389    "strip_size_kb": 64,
00:14:08.389    "state": "offline",
00:14:08.389    "raid_level": "concat",
00:14:08.389    "superblock": true,
00:14:08.389    "num_base_bdevs": 2,
00:14:08.389    "num_base_bdevs_discovered": 1,
00:14:08.389    "num_base_bdevs_operational": 1,
00:14:08.389    "base_bdevs_list": [
00:14:08.389      {
00:14:08.389        "name": null,
00:14:08.390        "uuid": "00000000-0000-0000-0000-000000000000",
00:14:08.390        "is_configured": false,
00:14:08.390        "data_offset": 2048,
00:14:08.390        "data_size": 63488
00:14:08.390      },
00:14:08.390      {
00:14:08.390        "name": "BaseBdev2",
00:14:08.390        "uuid": "45cc08f3-2c5e-4dbd-b1a0-6f797921ca01",
00:14:08.390        "is_configured": true,
00:14:08.390        "data_offset": 2048,
00:14:08.390        "data_size": 63488
00:14:08.390      }
00:14:08.390    ]
00:14:08.390  }'
00:14:08.390   16:58:01	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:14:08.390   16:58:01	-- common/autotest_common.sh@10 -- # set +x
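(This passage exercises the no-redundancy path: has_redundancy returns 1 for concat, so after bdev_malloc_delete removes BaseBdev1 the array is expected to go offline rather than degrade. Reduced to its core:

    # Deleting a base bdev from a concat array must take the whole array offline;
    # a redundant level such as raid1 would be expected to survive instead.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    $rpc -s $sock bdev_malloc_delete BaseBdev1
    state=$($rpc -s $sock bdev_raid_get_bdevs all \
        | jq -r '.[] | select(.name == "Existed_Raid") | .state')
    [ "$state" = offline ]
)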
00:14:08.957   16:58:01	-- bdev/bdev_raid.sh@273 -- # (( i = 1 ))
00:14:08.957   16:58:01	-- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs ))
00:14:08.957    16:58:01	-- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:14:08.957    16:58:01	-- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]'
00:14:09.216   16:58:01	-- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid
00:14:09.216   16:58:01	-- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:14:09.216   16:58:01	-- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2
00:14:09.216  [2024-11-19 16:58:02.045471] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:14:09.216  [2024-11-19 16:58:02.045669] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state offline
00:14:09.216   16:58:02	-- bdev/bdev_raid.sh@273 -- # (( i++ ))
00:14:09.216   16:58:02	-- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs ))
00:14:09.475    16:58:02	-- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)'
00:14:09.475    16:58:02	-- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:14:09.734   16:58:02	-- bdev/bdev_raid.sh@281 -- # raid_bdev=
00:14:09.734   16:58:02	-- bdev/bdev_raid.sh@282 -- # '[' -n '' ']'
00:14:09.734   16:58:02	-- bdev/bdev_raid.sh@287 -- # killprocess 123877
00:14:09.734   16:58:02	-- common/autotest_common.sh@936 -- # '[' -z 123877 ']'
00:14:09.734   16:58:02	-- common/autotest_common.sh@940 -- # kill -0 123877
00:14:09.734    16:58:02	-- common/autotest_common.sh@941 -- # uname
00:14:09.734   16:58:02	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:14:09.734    16:58:02	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 123877
00:14:09.735   16:58:02	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:14:09.735   16:58:02	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:14:09.735   16:58:02	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 123877'
00:14:09.735  killing process with pid 123877
00:14:09.735   16:58:02	-- common/autotest_common.sh@955 -- # kill 123877
00:14:09.735   16:58:02	-- common/autotest_common.sh@960 -- # wait 123877
00:14:09.735  [2024-11-19 16:58:02.370991] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:14:09.735  [2024-11-19 16:58:02.371067] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:14:09.994   16:58:02	-- bdev/bdev_raid.sh@289 -- # return 0
00:14:09.994  
00:14:09.994  real	0m9.184s
00:14:09.994  user	0m16.240s
00:14:09.994  sys	0m1.512s
00:14:09.994   16:58:02	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:14:09.994   16:58:02	-- common/autotest_common.sh@10 -- # set +x
00:14:09.994  ************************************
00:14:09.994  END TEST raid_state_function_test_sb
00:14:09.994  ************************************
00:14:09.994   16:58:02	-- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test concat 2
00:14:09.994   16:58:02	-- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']'
00:14:09.994   16:58:02	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:14:09.994   16:58:02	-- common/autotest_common.sh@10 -- # set +x
00:14:09.994  ************************************
00:14:09.994  START TEST raid_superblock_test
00:14:09.994  ************************************
00:14:09.994   16:58:02	-- common/autotest_common.sh@1114 -- # raid_superblock_test concat 2
00:14:09.994   16:58:02	-- bdev/bdev_raid.sh@338 -- # local raid_level=concat
00:14:09.994   16:58:02	-- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=2
00:14:09.994   16:58:02	-- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=()
00:14:09.994   16:58:02	-- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc
00:14:09.994   16:58:02	-- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=()
00:14:09.994   16:58:02	-- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt
00:14:09.994   16:58:02	-- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=()
00:14:09.994   16:58:02	-- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid
00:14:09.994   16:58:02	-- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1
00:14:09.994   16:58:02	-- bdev/bdev_raid.sh@344 -- # local strip_size
00:14:09.994   16:58:02	-- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg
00:14:09.994   16:58:02	-- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid
00:14:09.994   16:58:02	-- bdev/bdev_raid.sh@347 -- # local raid_bdev
00:14:09.994   16:58:02	-- bdev/bdev_raid.sh@349 -- # '[' concat '!=' raid1 ']'
00:14:09.994   16:58:02	-- bdev/bdev_raid.sh@350 -- # strip_size=64
00:14:09.994   16:58:02	-- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64'
00:14:09.994   16:58:02	-- bdev/bdev_raid.sh@357 -- # raid_pid=124190
00:14:09.994   16:58:02	-- bdev/bdev_raid.sh@358 -- # waitforlisten 124190 /var/tmp/spdk-raid.sock
00:14:09.995   16:58:02	-- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid
00:14:09.995   16:58:02	-- common/autotest_common.sh@829 -- # '[' -z 124190 ']'
00:14:09.995   16:58:02	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock
00:14:09.995   16:58:02	-- common/autotest_common.sh@834 -- # local max_retries=100
00:14:09.995   16:58:02	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...'
00:14:09.995  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...
00:14:09.995   16:58:02	-- common/autotest_common.sh@838 -- # xtrace_disable
00:14:09.995   16:58:02	-- common/autotest_common.sh@10 -- # set +x
00:14:09.995  [2024-11-19 16:58:02.761482] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:14:09.995  [2024-11-19 16:58:02.761962] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid124190 ]
00:14:10.254  [2024-11-19 16:58:02.913856] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:14:10.254  [2024-11-19 16:58:02.955850] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:14:10.254  [2024-11-19 16:58:02.997052] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
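(raid_superblock_test starts its own bdev_svc instance on a dedicated RPC socket — command line as traced above — and then waits for the socket to answer before issuing RPCs. A hedged sketch of that startup handshake, assuming rpc_get_methods as the cheap liveness probe that waitforlisten polls:

    # Start the test app and poll until its RPC socket responds.
    /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid &
    raid_pid=$!
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
            rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done
)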
00:14:11.191   16:58:03	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:14:11.191   16:58:03	-- common/autotest_common.sh@862 -- # return 0
00:14:11.191   16:58:03	-- bdev/bdev_raid.sh@361 -- # (( i = 1 ))
00:14:11.191   16:58:03	-- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs ))
00:14:11.191   16:58:03	-- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1
00:14:11.191   16:58:03	-- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1
00:14:11.191   16:58:03	-- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001
00:14:11.191   16:58:03	-- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc)
00:14:11.191   16:58:03	-- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt)
00:14:11.191   16:58:03	-- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:14:11.191   16:58:03	-- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1
00:14:11.191  malloc1
00:14:11.191   16:58:03	-- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:14:11.453  [2024-11-19 16:58:04.175109] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:14:11.453  [2024-11-19 16:58:04.175357] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:14:11.453  [2024-11-19 16:58:04.175437] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000005a80
00:14:11.453  [2024-11-19 16:58:04.175555] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:14:11.453  [2024-11-19 16:58:04.178121] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:14:11.453  [2024-11-19 16:58:04.178281] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:14:11.453  pt1
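(Each base bdev in this test is a malloc disk wrapped in a passthru bdev with a fixed UUID, so the superblock written later refers to a stable identity rather than the throwaway malloc device. One leg of that construction, with the sizes, names and UUID taken from the run above:

    # 32 MiB malloc disk with 512-byte blocks (65536 blocks), fronted by pt1.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    $rpc -s $sock bdev_malloc_create 32 512 -b malloc1
    $rpc -s $sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
)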
00:14:11.453   16:58:04	-- bdev/bdev_raid.sh@361 -- # (( i++ ))
00:14:11.453   16:58:04	-- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs ))
00:14:11.453   16:58:04	-- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2
00:14:11.453   16:58:04	-- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2
00:14:11.453   16:58:04	-- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002
00:14:11.453   16:58:04	-- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc)
00:14:11.453   16:58:04	-- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt)
00:14:11.453   16:58:04	-- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:14:11.453   16:58:04	-- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2
00:14:11.744  malloc2
00:14:11.744   16:58:04	-- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:14:12.035  [2024-11-19 16:58:04.591938] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:14:12.035  [2024-11-19 16:58:04.592202] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:14:12.035  [2024-11-19 16:58:04.592271] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680
00:14:12.035  [2024-11-19 16:58:04.592393] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:14:12.035  [2024-11-19 16:58:04.594782] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:14:12.035  [2024-11-19 16:58:04.594952] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:14:12.035  pt2
00:14:12.035   16:58:04	-- bdev/bdev_raid.sh@361 -- # (( i++ ))
00:14:12.035   16:58:04	-- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs ))
00:14:12.035   16:58:04	-- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2' -n raid_bdev1 -s
00:14:12.035  [2024-11-19 16:58:04.772036] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:14:12.035  [2024-11-19 16:58:04.774256] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:14:12.035  [2024-11-19 16:58:04.774557] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006c80
00:14:12.035  [2024-11-19 16:58:04.774653] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512
00:14:12.035  [2024-11-19 16:58:04.774808] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002120
00:14:12.035  [2024-11-19 16:58:04.775263] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006c80
00:14:12.035  [2024-11-19 16:58:04.775360] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000006c80
00:14:12.035  [2024-11-19 16:58:04.775582] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
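(The create call at bdev_raid.sh@375 differs from the earlier state-function test in one flag: -s requests an on-disk superblock. That is consistent with the verify output below reporting data_offset 2048 and data_size 63488 out of 65536 blocks — the first 2048 blocks of each base bdev appear to be reserved for the superblock. The call, as traced:

    # concat with 64 KiB strips over both passthru bdevs, superblock enabled.
    $rpc -s $sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2' -n raid_bdev1 -s
)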
00:14:12.035   16:58:04	-- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2
00:14:12.035   16:58:04	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:14:12.035   16:58:04	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:14:12.035   16:58:04	-- bdev/bdev_raid.sh@119 -- # local raid_level=concat
00:14:12.035   16:58:04	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:14:12.035   16:58:04	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2
00:14:12.035   16:58:04	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:14:12.035   16:58:04	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:14:12.035   16:58:04	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:14:12.035   16:58:04	-- bdev/bdev_raid.sh@125 -- # local tmp
00:14:12.035    16:58:04	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:14:12.035    16:58:04	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:12.301   16:58:05	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:14:12.301    "name": "raid_bdev1",
00:14:12.301    "uuid": "e778cedb-21fd-42e0-8d03-3ceb3b73636d",
00:14:12.301    "strip_size_kb": 64,
00:14:12.301    "state": "online",
00:14:12.301    "raid_level": "concat",
00:14:12.301    "superblock": true,
00:14:12.301    "num_base_bdevs": 2,
00:14:12.301    "num_base_bdevs_discovered": 2,
00:14:12.301    "num_base_bdevs_operational": 2,
00:14:12.301    "base_bdevs_list": [
00:14:12.301      {
00:14:12.301        "name": "pt1",
00:14:12.301        "uuid": "ac48b3f7-5993-5dac-b2fe-f1c4ced74255",
00:14:12.301        "is_configured": true,
00:14:12.301        "data_offset": 2048,
00:14:12.301        "data_size": 63488
00:14:12.301      },
00:14:12.301      {
00:14:12.301        "name": "pt2",
00:14:12.301        "uuid": "c17d04cb-5e42-5ceb-a4c4-4b4490b21ed6",
00:14:12.301        "is_configured": true,
00:14:12.301        "data_offset": 2048,
00:14:12.301        "data_size": 63488
00:14:12.301      }
00:14:12.301    ]
00:14:12.301  }'
00:14:12.301   16:58:05	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:14:12.301   16:58:05	-- common/autotest_common.sh@10 -- # set +x
00:14:12.869    16:58:05	-- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1
00:14:12.869    16:58:05	-- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid'
00:14:12.869  [2024-11-19 16:58:05.704285] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:14:12.869   16:58:05	-- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=e778cedb-21fd-42e0-8d03-3ceb3b73636d
00:14:12.869   16:58:05	-- bdev/bdev_raid.sh@380 -- # '[' -z e778cedb-21fd-42e0-8d03-3ceb3b73636d ']'
00:14:12.869   16:58:05	-- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1
00:14:13.127  [2024-11-19 16:58:05.960194] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:14:13.127  [2024-11-19 16:58:05.960354] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:14:13.127  [2024-11-19 16:58:05.960569] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:14:13.127  [2024-11-19 16:58:05.960731] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:14:13.127  [2024-11-19 16:58:05.960815] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006c80 name raid_bdev1, state offline
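(bdev_raid_delete tears the array down by name; the base bdevs stay, but the follow-up bdev_raid_get_bdevs listing is expected to come back empty. In isolation:

    # Delete the array, then confirm no RAID bdevs remain.
    $rpc -s $sock bdev_raid_delete raid_bdev1
    $rpc -s $sock bdev_raid_get_bdevs all | jq -r '.[]'    # expected: no output
)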
00:14:13.127    16:58:05	-- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:14:13.127    16:58:05	-- bdev/bdev_raid.sh@386 -- # jq -r '.[]'
00:14:13.386   16:58:06	-- bdev/bdev_raid.sh@386 -- # raid_bdev=
00:14:13.386   16:58:06	-- bdev/bdev_raid.sh@387 -- # '[' -n '' ']'
00:14:13.386   16:58:06	-- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}"
00:14:13.386   16:58:06	-- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1
00:14:13.645   16:58:06	-- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}"
00:14:13.645   16:58:06	-- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2
00:14:13.904    16:58:06	-- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs
00:14:13.904    16:58:06	-- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any'
00:14:13.904   16:58:06	-- bdev/bdev_raid.sh@395 -- # '[' false == true ']'
00:14:13.904   16:58:06	-- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1
00:14:14.162   16:58:06	-- common/autotest_common.sh@650 -- # local es=0
00:14:14.162   16:58:06	-- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1
00:14:14.162   16:58:06	-- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:14:14.162   16:58:06	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:14:14.162    16:58:06	-- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:14:14.162   16:58:06	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:14:14.162    16:58:06	-- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:14:14.162   16:58:06	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:14:14.162   16:58:06	-- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:14:14.162   16:58:06	-- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]]
00:14:14.162   16:58:06	-- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1
00:14:14.162  [2024-11-19 16:58:06.928317] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed
00:14:14.162  [2024-11-19 16:58:06.930492] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
00:14:14.162  [2024-11-19 16:58:06.930670] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1
00:14:14.162  [2024-11-19 16:58:06.930843] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2
00:14:14.163  [2024-11-19 16:58:06.930985] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:14:14.163  [2024-11-19 16:58:06.931023] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name raid_bdev1, state configuring
00:14:14.163  request:
00:14:14.163  {
00:14:14.163    "name": "raid_bdev1",
00:14:14.163    "raid_level": "concat",
00:14:14.163    "base_bdevs": [
00:14:14.163      "malloc1",
00:14:14.163      "malloc2"
00:14:14.163    ],
00:14:14.163    "superblock": false,
00:14:14.163    "strip_size_kb": 64,
00:14:14.163    "method": "bdev_raid_create",
00:14:14.163    "req_id": 1
00:14:14.163  }
00:14:14.163  Got JSON-RPC error response
00:14:14.163  response:
00:14:14.163  {
00:14:14.163    "code": -17,
00:14:14.163    "message": "Failed to create RAID bdev raid_bdev1: File exists"
00:14:14.163  }
00:14:14.163   16:58:06	-- common/autotest_common.sh@653 -- # es=1
00:14:14.163   16:58:06	-- common/autotest_common.sh@661 -- # (( es > 128 ))
00:14:14.163   16:58:06	-- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:14:14.163   16:58:06	-- common/autotest_common.sh@677 -- # (( !es == 0 ))
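(This is the negative case the NOT wrapper guards: both malloc bdevs still carry the raid_bdev1 superblock, so asking bdev_raid_create to build a new array directly on top of them is rejected with -17 "File exists", and the helper only passes if the RPC fails. Reduced to its core:

    # Re-creating the array over superblock-bearing bdevs must fail with -17.
    if $rpc -s $sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1; then
        echo 'unexpected success: raid_bdev1 should already exist on disk' >&2
        exit 1
    fi
)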
00:14:14.163    16:58:06	-- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:14:14.163    16:58:06	-- bdev/bdev_raid.sh@403 -- # jq -r '.[]'
00:14:14.421   16:58:07	-- bdev/bdev_raid.sh@403 -- # raid_bdev=
00:14:14.421   16:58:07	-- bdev/bdev_raid.sh@404 -- # '[' -n '' ']'
00:14:14.422   16:58:07	-- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:14:14.681  [2024-11-19 16:58:07.292313] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:14:14.681  [2024-11-19 16:58:07.292529] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:14:14.681  [2024-11-19 16:58:07.292625] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880
00:14:14.681  [2024-11-19 16:58:07.292721] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:14:14.681  [2024-11-19 16:58:07.295100] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:14:14.681  [2024-11-19 16:58:07.295246] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:14:14.681  [2024-11-19 16:58:07.295387] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1
00:14:14.681  [2024-11-19 16:58:07.295515] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:14:14.681  pt1
00:14:14.681   16:58:07	-- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 2
00:14:14.681   16:58:07	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:14:14.681   16:58:07	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:14:14.681   16:58:07	-- bdev/bdev_raid.sh@119 -- # local raid_level=concat
00:14:14.681   16:58:07	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:14:14.681   16:58:07	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2
00:14:14.681   16:58:07	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:14:14.681   16:58:07	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:14:14.681   16:58:07	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:14:14.681   16:58:07	-- bdev/bdev_raid.sh@125 -- # local tmp
00:14:14.681    16:58:07	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:14:14.681    16:58:07	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:14.681   16:58:07	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:14:14.681    "name": "raid_bdev1",
00:14:14.681    "uuid": "e778cedb-21fd-42e0-8d03-3ceb3b73636d",
00:14:14.681    "strip_size_kb": 64,
00:14:14.681    "state": "configuring",
00:14:14.681    "raid_level": "concat",
00:14:14.681    "superblock": true,
00:14:14.681    "num_base_bdevs": 2,
00:14:14.681    "num_base_bdevs_discovered": 1,
00:14:14.681    "num_base_bdevs_operational": 2,
00:14:14.681    "base_bdevs_list": [
00:14:14.681      {
00:14:14.681        "name": "pt1",
00:14:14.681        "uuid": "ac48b3f7-5993-5dac-b2fe-f1c4ced74255",
00:14:14.681        "is_configured": true,
00:14:14.681        "data_offset": 2048,
00:14:14.681        "data_size": 63488
00:14:14.681      },
00:14:14.681      {
00:14:14.681        "name": null,
00:14:14.681        "uuid": "c17d04cb-5e42-5ceb-a4c4-4b4490b21ed6",
00:14:14.681        "is_configured": false,
00:14:14.681        "data_offset": 2048,
00:14:14.681        "data_size": 63488
00:14:14.681      }
00:14:14.681    ]
00:14:14.681  }'
00:14:14.681   16:58:07	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:14:14.681   16:58:07	-- common/autotest_common.sh@10 -- # set +x
00:14:15.249   16:58:08	-- bdev/bdev_raid.sh@414 -- # '[' 2 -gt 2 ']'
00:14:15.249   16:58:08	-- bdev/bdev_raid.sh@422 -- # (( i = 1 ))
00:14:15.249   16:58:08	-- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs ))
00:14:15.249   16:58:08	-- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:14:15.508  [2024-11-19 16:58:08.268532] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:14:15.508  [2024-11-19 16:58:08.268825] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:14:15.508  [2024-11-19 16:58:08.268897] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180
00:14:15.508  [2024-11-19 16:58:08.268996] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:14:15.508  [2024-11-19 16:58:08.269430] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:14:15.508  [2024-11-19 16:58:08.269588] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:14:15.508  [2024-11-19 16:58:08.269738] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2
00:14:15.508  [2024-11-19 16:58:08.269847] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:14:15.508  [2024-11-19 16:58:08.269973] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007e80
00:14:15.508  [2024-11-19 16:58:08.270060] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512
00:14:15.508  [2024-11-19 16:58:08.270163] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390
00:14:15.508  [2024-11-19 16:58:08.270525] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007e80
00:14:15.508  [2024-11-19 16:58:08.270629] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000007e80
00:14:15.508  [2024-11-19 16:58:08.270792] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:14:15.508  pt2
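(Note that no bdev_raid_create call appears between the two passthru creations and the online array: recreating pt1 and pt2 over the same malloc bdevs lets the examine path — raid_bdev_examine_load_sb_cb above — find the superblock on each and reassemble raid_bdev1 on its own. The whole reassembly is therefore just:

    # Recreating the passthru bdevs is enough; examine reassembles the array.
    $rpc -s $sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
    $rpc -s $sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
    $rpc -s $sock bdev_raid_get_bdevs all \
        | jq -r '.[] | select(.name == "raid_bdev1") | .state'    # expected: online
)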
00:14:15.508   16:58:08	-- bdev/bdev_raid.sh@422 -- # (( i++ ))
00:14:15.508   16:58:08	-- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs ))
00:14:15.508   16:58:08	-- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2
00:14:15.508   16:58:08	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:14:15.508   16:58:08	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:14:15.508   16:58:08	-- bdev/bdev_raid.sh@119 -- # local raid_level=concat
00:14:15.508   16:58:08	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:14:15.508   16:58:08	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2
00:14:15.508   16:58:08	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:14:15.508   16:58:08	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:14:15.508   16:58:08	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:14:15.508   16:58:08	-- bdev/bdev_raid.sh@125 -- # local tmp
00:14:15.508    16:58:08	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:14:15.508    16:58:08	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:15.767   16:58:08	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:14:15.767    "name": "raid_bdev1",
00:14:15.767    "uuid": "e778cedb-21fd-42e0-8d03-3ceb3b73636d",
00:14:15.767    "strip_size_kb": 64,
00:14:15.767    "state": "online",
00:14:15.767    "raid_level": "concat",
00:14:15.767    "superblock": true,
00:14:15.767    "num_base_bdevs": 2,
00:14:15.767    "num_base_bdevs_discovered": 2,
00:14:15.767    "num_base_bdevs_operational": 2,
00:14:15.767    "base_bdevs_list": [
00:14:15.767      {
00:14:15.767        "name": "pt1",
00:14:15.767        "uuid": "ac48b3f7-5993-5dac-b2fe-f1c4ced74255",
00:14:15.767        "is_configured": true,
00:14:15.767        "data_offset": 2048,
00:14:15.767        "data_size": 63488
00:14:15.767      },
00:14:15.767      {
00:14:15.767        "name": "pt2",
00:14:15.767        "uuid": "c17d04cb-5e42-5ceb-a4c4-4b4490b21ed6",
00:14:15.767        "is_configured": true,
00:14:15.767        "data_offset": 2048,
00:14:15.767        "data_size": 63488
00:14:15.767      }
00:14:15.767    ]
00:14:15.767  }'
00:14:15.767   16:58:08	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:14:15.767   16:58:08	-- common/autotest_common.sh@10 -- # set +x
00:14:16.335    16:58:09	-- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1
00:14:16.335    16:58:09	-- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid'
00:14:16.594  [2024-11-19 16:58:09.268908] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:14:16.594   16:58:09	-- bdev/bdev_raid.sh@430 -- # '[' e778cedb-21fd-42e0-8d03-3ceb3b73636d '!=' e778cedb-21fd-42e0-8d03-3ceb3b73636d ']'
00:14:16.594   16:58:09	-- bdev/bdev_raid.sh@434 -- # has_redundancy concat
00:14:16.594   16:58:09	-- bdev/bdev_raid.sh@195 -- # case $1 in
00:14:16.594   16:58:09	-- bdev/bdev_raid.sh@197 -- # return 1
00:14:16.594   16:58:09	-- bdev/bdev_raid.sh@511 -- # killprocess 124190
00:14:16.594   16:58:09	-- common/autotest_common.sh@936 -- # '[' -z 124190 ']'
00:14:16.594   16:58:09	-- common/autotest_common.sh@940 -- # kill -0 124190
00:14:16.594    16:58:09	-- common/autotest_common.sh@941 -- # uname
00:14:16.594   16:58:09	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:14:16.594    16:58:09	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 124190
00:14:16.594   16:58:09	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:14:16.594   16:58:09	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:14:16.594   16:58:09	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 124190'
00:14:16.594  killing process with pid 124190
00:14:16.594   16:58:09	-- common/autotest_common.sh@955 -- # kill 124190
00:14:16.594  [2024-11-19 16:58:09.320635] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:14:16.594   16:58:09	-- common/autotest_common.sh@960 -- # wait 124190
00:14:16.594  [2024-11-19 16:58:09.320920] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:14:16.594  [2024-11-19 16:58:09.321100] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:14:16.594  [2024-11-19 16:58:09.321170] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007e80 name raid_bdev1, state offline
00:14:16.594  [2024-11-19 16:58:09.344374] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:14:16.854   16:58:09	-- bdev/bdev_raid.sh@513 -- # return 0
00:14:16.854  
00:14:16.854  real	0m6.903s
00:14:16.854  user	0m12.008s
00:14:16.854  sys	0m1.268s
00:14:16.854   16:58:09	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:14:16.854   16:58:09	-- common/autotest_common.sh@10 -- # set +x
00:14:16.854  ************************************
00:14:16.854  END TEST raid_superblock_test
00:14:16.854  ************************************
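(The next run_test repeats raid_state_function_test for raid1. Two differences from the concat pass are visible in the setup that follows: raid1 takes no strip size — strip_size stays 0 and no -z argument is passed — and superblock remains false. The create call this variant drives, as traced at bdev_raid.sh@232 below:

    # Mirror two base bdevs; raid1 has no strip size argument.
    $rpc -s $sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid
)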
00:14:16.854   16:58:09	-- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1
00:14:16.854   16:58:09	-- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false
00:14:16.854   16:58:09	-- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']'
00:14:16.854   16:58:09	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:14:16.854   16:58:09	-- common/autotest_common.sh@10 -- # set +x
00:14:16.854  ************************************
00:14:16.854  START TEST raid_state_function_test
00:14:16.854  ************************************
00:14:16.854   16:58:09	-- common/autotest_common.sh@1114 -- # raid_state_function_test raid1 2 false
00:14:16.854   16:58:09	-- bdev/bdev_raid.sh@202 -- # local raid_level=raid1
00:14:16.854   16:58:09	-- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2
00:14:16.854   16:58:09	-- bdev/bdev_raid.sh@204 -- # local superblock=false
00:14:16.854   16:58:09	-- bdev/bdev_raid.sh@205 -- # local raid_bdev
00:14:16.854    16:58:09	-- bdev/bdev_raid.sh@206 -- # (( i = 1 ))
00:14:16.854    16:58:09	-- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:14:16.854    16:58:09	-- bdev/bdev_raid.sh@206 -- # echo BaseBdev1
00:14:16.854    16:58:09	-- bdev/bdev_raid.sh@206 -- # (( i++ ))
00:14:16.854    16:58:09	-- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:14:16.854    16:58:09	-- bdev/bdev_raid.sh@206 -- # echo BaseBdev2
00:14:16.854    16:58:09	-- bdev/bdev_raid.sh@206 -- # (( i++ ))
00:14:16.854    16:58:09	-- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:14:16.854   16:58:09	-- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2')
00:14:16.854   16:58:09	-- bdev/bdev_raid.sh@206 -- # local base_bdevs
00:14:16.854   16:58:09	-- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid
00:14:16.854   16:58:09	-- bdev/bdev_raid.sh@208 -- # local strip_size
00:14:16.854   16:58:09	-- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg
00:14:16.854   16:58:09	-- bdev/bdev_raid.sh@210 -- # local superblock_create_arg
00:14:16.854   16:58:09	-- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']'
00:14:16.854   16:58:09	-- bdev/bdev_raid.sh@216 -- # strip_size=0
00:14:16.854   16:58:09	-- bdev/bdev_raid.sh@219 -- # '[' false = true ']'
00:14:16.854   16:58:09	-- bdev/bdev_raid.sh@222 -- # superblock_create_arg=
00:14:16.854   16:58:09	-- bdev/bdev_raid.sh@226 -- # raid_pid=124421
00:14:16.854   16:58:09	-- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 124421'
00:14:16.854   16:58:09	-- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid
00:14:16.854  Process raid pid: 124421
00:14:16.854   16:58:09	-- bdev/bdev_raid.sh@228 -- # waitforlisten 124421 /var/tmp/spdk-raid.sock
00:14:16.854   16:58:09	-- common/autotest_common.sh@829 -- # '[' -z 124421 ']'
00:14:16.854   16:58:09	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock
00:14:16.854   16:58:09	-- common/autotest_common.sh@834 -- # local max_retries=100
00:14:16.854   16:58:09	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...'
00:14:16.854  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...
00:14:16.854   16:58:09	-- common/autotest_common.sh@838 -- # xtrace_disable
00:14:16.854   16:58:09	-- common/autotest_common.sh@10 -- # set +x
00:14:17.112  [2024-11-19 16:58:09.724791] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:14:17.112  [2024-11-19 16:58:09.725098] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:14:17.112  [2024-11-19 16:58:09.867110] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:14:17.112  [2024-11-19 16:58:09.907496] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:14:17.112  [2024-11-19 16:58:09.948759] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:14:18.050   16:58:10	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:14:18.050   16:58:10	-- common/autotest_common.sh@862 -- # return 0
00:14:18.050   16:58:10	-- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid
00:14:18.050  [2024-11-19 16:58:10.902219] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:14:18.050  [2024-11-19 16:58:10.904131] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:14:18.050  [2024-11-19 16:58:10.904341] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:14:18.050  [2024-11-19 16:58:10.904447] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:14:18.309   16:58:10	-- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2
00:14:18.309   16:58:10	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:14:18.309   16:58:10	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:14:18.309   16:58:10	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:14:18.309   16:58:10	-- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:14:18.309   16:58:10	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2
00:14:18.309   16:58:10	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:14:18.309   16:58:10	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:14:18.309   16:58:10	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:14:18.309   16:58:10	-- bdev/bdev_raid.sh@125 -- # local tmp
00:14:18.309    16:58:10	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:14:18.309    16:58:10	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:14:18.568   16:58:11	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:14:18.568    "name": "Existed_Raid",
00:14:18.568    "uuid": "00000000-0000-0000-0000-000000000000",
00:14:18.568    "strip_size_kb": 0,
00:14:18.568    "state": "configuring",
00:14:18.568    "raid_level": "raid1",
00:14:18.568    "superblock": false,
00:14:18.568    "num_base_bdevs": 2,
00:14:18.568    "num_base_bdevs_discovered": 0,
00:14:18.568    "num_base_bdevs_operational": 2,
00:14:18.568    "base_bdevs_list": [
00:14:18.568      {
00:14:18.568        "name": "BaseBdev1",
00:14:18.568        "uuid": "00000000-0000-0000-0000-000000000000",
00:14:18.568        "is_configured": false,
00:14:18.568        "data_offset": 0,
00:14:18.568        "data_size": 0
00:14:18.568      },
00:14:18.568      {
00:14:18.568        "name": "BaseBdev2",
00:14:18.568        "uuid": "00000000-0000-0000-0000-000000000000",
00:14:18.568        "is_configured": false,
00:14:18.568        "data_offset": 0,
00:14:18.568        "data_size": 0
00:14:18.568      }
00:14:18.568    ]
00:14:18.568  }'
00:14:18.568   16:58:11	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:14:18.568   16:58:11	-- common/autotest_common.sh@10 -- # set +x
00:14:19.137   16:58:11	-- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid
00:14:19.137  [2024-11-19 16:58:11.902277] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:14:19.137  [2024-11-19 16:58:11.902474] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005480 name Existed_Raid, state configuring
00:14:19.137   16:58:11	-- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid
00:14:19.397  [2024-11-19 16:58:12.154369] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:14:19.397  [2024-11-19 16:58:12.154559] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:14:19.397  [2024-11-19 16:58:12.154638] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:14:19.397  [2024-11-19 16:58:12.154695] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:14:19.397   16:58:12	-- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1
00:14:19.656  [2024-11-19 16:58:12.423607] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:14:19.656  BaseBdev1
00:14:19.656   16:58:12	-- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1
00:14:19.656   16:58:12	-- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1
00:14:19.656   16:58:12	-- common/autotest_common.sh@898 -- # local bdev_timeout=
00:14:19.656   16:58:12	-- common/autotest_common.sh@899 -- # local i
00:14:19.656   16:58:12	-- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:14:19.656   16:58:12	-- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:14:19.656   16:58:12	-- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:14:19.915   16:58:12	-- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000
00:14:19.915  [
00:14:19.916    {
00:14:19.916      "name": "BaseBdev1",
00:14:19.916      "aliases": [
00:14:19.916        "a358e480-a5cc-4f5c-92ea-e411dd2bf3c1"
00:14:19.916      ],
00:14:19.916      "product_name": "Malloc disk",
00:14:19.916      "block_size": 512,
00:14:19.916      "num_blocks": 65536,
00:14:19.916      "uuid": "a358e480-a5cc-4f5c-92ea-e411dd2bf3c1",
00:14:19.916      "assigned_rate_limits": {
00:14:19.916        "rw_ios_per_sec": 0,
00:14:19.916        "rw_mbytes_per_sec": 0,
00:14:19.916        "r_mbytes_per_sec": 0,
00:14:19.916        "w_mbytes_per_sec": 0
00:14:19.916      },
00:14:19.916      "claimed": true,
00:14:19.916      "claim_type": "exclusive_write",
00:14:19.916      "zoned": false,
00:14:19.916      "supported_io_types": {
00:14:19.916        "read": true,
00:14:19.916        "write": true,
00:14:19.916        "unmap": true,
00:14:19.916        "write_zeroes": true,
00:14:19.916        "flush": true,
00:14:19.916        "reset": true,
00:14:19.916        "compare": false,
00:14:19.916        "compare_and_write": false,
00:14:19.916        "abort": true,
00:14:19.916        "nvme_admin": false,
00:14:19.916        "nvme_io": false
00:14:19.916      },
00:14:19.916      "memory_domains": [
00:14:19.916        {
00:14:19.916          "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:14:19.916          "dma_device_type": 2
00:14:19.916        }
00:14:19.916      ],
00:14:19.916      "driver_specific": {}
00:14:19.916    }
00:14:19.916  ]
00:14:20.175   16:58:12	-- common/autotest_common.sh@905 -- # return 0
00:14:20.175   16:58:12	-- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2
00:14:20.175   16:58:12	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:14:20.175   16:58:12	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:14:20.175   16:58:12	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:14:20.175   16:58:12	-- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:14:20.175   16:58:12	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2
00:14:20.175   16:58:12	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:14:20.175   16:58:12	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:14:20.175   16:58:12	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:14:20.175   16:58:12	-- bdev/bdev_raid.sh@125 -- # local tmp
00:14:20.175    16:58:12	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:14:20.175    16:58:12	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:14:20.175   16:58:12	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:14:20.175    "name": "Existed_Raid",
00:14:20.175    "uuid": "00000000-0000-0000-0000-000000000000",
00:14:20.175    "strip_size_kb": 0,
00:14:20.175    "state": "configuring",
00:14:20.175    "raid_level": "raid1",
00:14:20.175    "superblock": false,
00:14:20.175    "num_base_bdevs": 2,
00:14:20.175    "num_base_bdevs_discovered": 1,
00:14:20.175    "num_base_bdevs_operational": 2,
00:14:20.175    "base_bdevs_list": [
00:14:20.175      {
00:14:20.175        "name": "BaseBdev1",
00:14:20.175        "uuid": "a358e480-a5cc-4f5c-92ea-e411dd2bf3c1",
00:14:20.175        "is_configured": true,
00:14:20.175        "data_offset": 0,
00:14:20.175        "data_size": 65536
00:14:20.175      },
00:14:20.175      {
00:14:20.175        "name": "BaseBdev2",
00:14:20.175        "uuid": "00000000-0000-0000-0000-000000000000",
00:14:20.175        "is_configured": false,
00:14:20.175        "data_offset": 0,
00:14:20.175        "data_size": 0
00:14:20.175      }
00:14:20.175    ]
00:14:20.175  }'
00:14:20.175   16:58:12	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:14:20.175   16:58:12	-- common/autotest_common.sh@10 -- # set +x
00:14:20.743   16:58:13	-- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid
00:14:21.001  [2024-11-19 16:58:13.707857] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:14:21.001  [2024-11-19 16:58:13.708078] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005780 name Existed_Raid, state configuring
00:14:21.001   16:58:13	-- bdev/bdev_raid.sh@244 -- # '[' false = true ']'
00:14:21.001   16:58:13	-- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid
00:14:21.260  [2024-11-19 16:58:13.879980] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:14:21.260  [2024-11-19 16:58:13.882287] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:14:21.260  [2024-11-19 16:58:13.882442] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:14:21.260   16:58:13	-- bdev/bdev_raid.sh@254 -- # (( i = 1 ))
00:14:21.260   16:58:13	-- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs ))
00:14:21.260   16:58:13	-- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2
00:14:21.260   16:58:13	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:14:21.260   16:58:13	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:14:21.260   16:58:13	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:14:21.260   16:58:13	-- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:14:21.260   16:58:13	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2
00:14:21.260   16:58:13	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:14:21.260   16:58:13	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:14:21.260   16:58:13	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:14:21.260   16:58:13	-- bdev/bdev_raid.sh@125 -- # local tmp
00:14:21.260    16:58:13	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:14:21.260    16:58:13	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:14:21.260   16:58:14	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:14:21.260    "name": "Existed_Raid",
00:14:21.260    "uuid": "00000000-0000-0000-0000-000000000000",
00:14:21.260    "strip_size_kb": 0,
00:14:21.260    "state": "configuring",
00:14:21.260    "raid_level": "raid1",
00:14:21.260    "superblock": false,
00:14:21.260    "num_base_bdevs": 2,
00:14:21.260    "num_base_bdevs_discovered": 1,
00:14:21.260    "num_base_bdevs_operational": 2,
00:14:21.260    "base_bdevs_list": [
00:14:21.260      {
00:14:21.260        "name": "BaseBdev1",
00:14:21.260        "uuid": "a358e480-a5cc-4f5c-92ea-e411dd2bf3c1",
00:14:21.260        "is_configured": true,
00:14:21.260        "data_offset": 0,
00:14:21.260        "data_size": 65536
00:14:21.260      },
00:14:21.260      {
00:14:21.260        "name": "BaseBdev2",
00:14:21.260        "uuid": "00000000-0000-0000-0000-000000000000",
00:14:21.260        "is_configured": false,
00:14:21.260        "data_offset": 0,
00:14:21.260        "data_size": 0
00:14:21.260      }
00:14:21.260    ]
00:14:21.260  }'
00:14:21.260   16:58:14	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:14:21.260   16:58:14	-- common/autotest_common.sh@10 -- # set +x
00:14:21.828   16:58:14	-- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2
00:14:22.086  [2024-11-19 16:58:14.858328] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:14:22.086  [2024-11-19 16:58:14.858649] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006080
00:14:22.086  [2024-11-19 16:58:14.858714] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512
00:14:22.086  [2024-11-19 16:58:14.859098] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000001f80
00:14:22.086  [2024-11-19 16:58:14.859873] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006080
00:14:22.086  [2024-11-19 16:58:14.860036] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006080
00:14:22.086  [2024-11-19 16:58:14.860489] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:14:22.086  BaseBdev2
00:14:22.086   16:58:14	-- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2
00:14:22.087   16:58:14	-- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2
00:14:22.087   16:58:14	-- common/autotest_common.sh@898 -- # local bdev_timeout=
00:14:22.087   16:58:14	-- common/autotest_common.sh@899 -- # local i
00:14:22.087   16:58:14	-- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:14:22.087   16:58:14	-- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:14:22.087   16:58:14	-- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:14:22.345   16:58:15	-- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000
00:14:22.605  [
00:14:22.605    {
00:14:22.605      "name": "BaseBdev2",
00:14:22.605      "aliases": [
00:14:22.605        "403daaad-47d7-4f81-a5b2-b931bdf27078"
00:14:22.605      ],
00:14:22.605      "product_name": "Malloc disk",
00:14:22.605      "block_size": 512,
00:14:22.605      "num_blocks": 65536,
00:14:22.605      "uuid": "403daaad-47d7-4f81-a5b2-b931bdf27078",
00:14:22.605      "assigned_rate_limits": {
00:14:22.605        "rw_ios_per_sec": 0,
00:14:22.605        "rw_mbytes_per_sec": 0,
00:14:22.605        "r_mbytes_per_sec": 0,
00:14:22.605        "w_mbytes_per_sec": 0
00:14:22.605      },
00:14:22.605      "claimed": true,
00:14:22.605      "claim_type": "exclusive_write",
00:14:22.605      "zoned": false,
00:14:22.605      "supported_io_types": {
00:14:22.605        "read": true,
00:14:22.605        "write": true,
00:14:22.605        "unmap": true,
00:14:22.605        "write_zeroes": true,
00:14:22.605        "flush": true,
00:14:22.605        "reset": true,
00:14:22.605        "compare": false,
00:14:22.605        "compare_and_write": false,
00:14:22.605        "abort": true,
00:14:22.605        "nvme_admin": false,
00:14:22.605        "nvme_io": false
00:14:22.605      },
00:14:22.605      "memory_domains": [
00:14:22.605        {
00:14:22.605          "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:14:22.605          "dma_device_type": 2
00:14:22.605        }
00:14:22.605      ],
00:14:22.605      "driver_specific": {}
00:14:22.605    }
00:14:22.605  ]
00:14:22.605   16:58:15	-- common/autotest_common.sh@905 -- # return 0
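The return 0 above is waitforbdev succeeding for BaseBdev2. Reconstructed from the trace, the helper defaults its timeout to 2000 ms, flushes outstanding examine callbacks, then asks for the bdev by name; the retry machinery the real helper keeps around the lookup (the local i above) is elided in this sketch:

waitforbdev() {
    local bdev_name=$1
    local bdev_timeout=${2:-2000}    # trace: empty arg falls back to 2000
    local rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    local sock=/var/tmp/spdk-raid.sock
    # Let any in-flight bdev examine work finish first.
    "$rpc" -s "$sock" bdev_wait_for_examine || return 1
    # -t hands the timeout to the target, which waits for the bdev to appear.
    "$rpc" -s "$sock" bdev_get_bdevs -b "$bdev_name" -t "$bdev_timeout" \
        > /dev/null
}

waitforbdev BaseBdev2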
00:14:22.605   16:58:15	-- bdev/bdev_raid.sh@254 -- # (( i++ ))
00:14:22.605   16:58:15	-- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs ))
00:14:22.605   16:58:15	-- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2
00:14:22.605   16:58:15	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:14:22.605   16:58:15	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:14:22.605   16:58:15	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:14:22.605   16:58:15	-- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:14:22.605   16:58:15	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2
00:14:22.605   16:58:15	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:14:22.605   16:58:15	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:14:22.605   16:58:15	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:14:22.605   16:58:15	-- bdev/bdev_raid.sh@125 -- # local tmp
00:14:22.605    16:58:15	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:14:22.605    16:58:15	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:14:22.864   16:58:15	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:14:22.865    "name": "Existed_Raid",
00:14:22.865    "uuid": "470b6cd6-3eba-4858-90f5-54de0297f07b",
00:14:22.865    "strip_size_kb": 0,
00:14:22.865    "state": "online",
00:14:22.865    "raid_level": "raid1",
00:14:22.865    "superblock": false,
00:14:22.865    "num_base_bdevs": 2,
00:14:22.865    "num_base_bdevs_discovered": 2,
00:14:22.865    "num_base_bdevs_operational": 2,
00:14:22.865    "base_bdevs_list": [
00:14:22.865      {
00:14:22.865        "name": "BaseBdev1",
00:14:22.865        "uuid": "a358e480-a5cc-4f5c-92ea-e411dd2bf3c1",
00:14:22.865        "is_configured": true,
00:14:22.865        "data_offset": 0,
00:14:22.865        "data_size": 65536
00:14:22.865      },
00:14:22.865      {
00:14:22.865        "name": "BaseBdev2",
00:14:22.865        "uuid": "403daaad-47d7-4f81-a5b2-b931bdf27078",
00:14:22.865        "is_configured": true,
00:14:22.865        "data_offset": 0,
00:14:22.865        "data_size": 65536
00:14:22.865      }
00:14:22.865    ]
00:14:22.865  }'
00:14:22.865   16:58:15	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:14:22.865   16:58:15	-- common/autotest_common.sh@10 -- # set +x
00:14:23.124   16:58:15	-- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1
00:14:23.383  [2024-11-19 16:58:16.238717] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:14:23.642   16:58:16	-- bdev/bdev_raid.sh@263 -- # local expected_state
00:14:23.642   16:58:16	-- bdev/bdev_raid.sh@264 -- # has_redundancy raid1
00:14:23.642   16:58:16	-- bdev/bdev_raid.sh@195 -- # case $1 in
00:14:23.642   16:58:16	-- bdev/bdev_raid.sh@196 -- # return 0
00:14:23.642   16:58:16	-- bdev/bdev_raid.sh@267 -- # expected_state=online
00:14:23.642   16:58:16	-- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1
00:14:23.642   16:58:16	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:14:23.642   16:58:16	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:14:23.642   16:58:16	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:14:23.642   16:58:16	-- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:14:23.642   16:58:16	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1
00:14:23.642   16:58:16	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:14:23.642   16:58:16	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:14:23.642   16:58:16	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:14:23.642   16:58:16	-- bdev/bdev_raid.sh@125 -- # local tmp
00:14:23.642    16:58:16	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:14:23.642    16:58:16	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:14:23.901   16:58:16	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:14:23.901    "name": "Existed_Raid",
00:14:23.901    "uuid": "470b6cd6-3eba-4858-90f5-54de0297f07b",
00:14:23.901    "strip_size_kb": 0,
00:14:23.901    "state": "online",
00:14:23.901    "raid_level": "raid1",
00:14:23.901    "superblock": false,
00:14:23.901    "num_base_bdevs": 2,
00:14:23.901    "num_base_bdevs_discovered": 1,
00:14:23.901    "num_base_bdevs_operational": 1,
00:14:23.901    "base_bdevs_list": [
00:14:23.901      {
00:14:23.901        "name": null,
00:14:23.901        "uuid": "00000000-0000-0000-0000-000000000000",
00:14:23.901        "is_configured": false,
00:14:23.901        "data_offset": 0,
00:14:23.901        "data_size": 65536
00:14:23.901      },
00:14:23.901      {
00:14:23.901        "name": "BaseBdev2",
00:14:23.901        "uuid": "403daaad-47d7-4f81-a5b2-b931bdf27078",
00:14:23.901        "is_configured": true,
00:14:23.901        "data_offset": 0,
00:14:23.901        "data_size": 65536
00:14:23.901      }
00:14:23.901    ]
00:14:23.901  }'
00:14:23.901   16:58:16	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:14:23.901   16:58:16	-- common/autotest_common.sh@10 -- # set +x
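The expected_state=online decision a few lines up comes from has_redundancy: deleting BaseBdev1 left the raid1 degraded but serviceable, so the verify step still demands "online" with one discovered and one operational base bdev. A sketch of that branch as the trace exercises it (only the raid1 arm is shown; whatever other redundant levels the helper recognizes are omitted here):

has_redundancy() {
    case $1 in
    raid1) return 0 ;;   # redundant: array stays online minus one bdev
    *) return 1 ;;       # non-redundant levels omitted from this sketch
    esac
}

if has_redundancy raid1; then
    expected_state=online
else
    expected_state=offline
fi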
00:14:24.468   16:58:17	-- bdev/bdev_raid.sh@273 -- # (( i = 1 ))
00:14:24.468   16:58:17	-- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs ))
00:14:24.468    16:58:17	-- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:14:24.468    16:58:17	-- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]'
00:14:24.468   16:58:17	-- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid
00:14:24.468   16:58:17	-- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:14:24.468   16:58:17	-- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2
00:14:24.727  [2024-11-19 16:58:17.459240] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:14:24.727  [2024-11-19 16:58:17.459427] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:14:24.727  [2024-11-19 16:58:17.459645] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:14:24.727  [2024-11-19 16:58:17.471468] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:14:24.727  [2024-11-19 16:58:17.471679] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006080 name Existed_Raid, state offline
00:14:24.727   16:58:17	-- bdev/bdev_raid.sh@273 -- # (( i++ ))
00:14:24.727   16:58:17	-- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs ))
00:14:24.727    16:58:17	-- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:14:24.727    16:58:17	-- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)'
00:14:24.986   16:58:17	-- bdev/bdev_raid.sh@281 -- # raid_bdev=
00:14:24.986   16:58:17	-- bdev/bdev_raid.sh@282 -- # '[' -n '' ']'
00:14:24.986   16:58:17	-- bdev/bdev_raid.sh@287 -- # killprocess 124421
00:14:24.986   16:58:17	-- common/autotest_common.sh@936 -- # '[' -z 124421 ']'
00:14:24.986   16:58:17	-- common/autotest_common.sh@940 -- # kill -0 124421
00:14:24.986    16:58:17	-- common/autotest_common.sh@941 -- # uname
00:14:24.986   16:58:17	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:14:24.986    16:58:17	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 124421
00:14:24.986   16:58:17	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:14:24.986   16:58:17	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:14:24.986  killing process with pid 124421
00:14:24.986   16:58:17	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 124421'
00:14:24.986   16:58:17	-- common/autotest_common.sh@955 -- # kill 124421
00:14:24.986  [2024-11-19 16:58:17.694784] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:14:24.986   16:58:17	-- common/autotest_common.sh@960 -- # wait 124421
00:14:24.986  [2024-11-19 16:58:17.694873] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:14:25.245  ************************************
00:14:25.245  END TEST raid_state_function_test
00:14:25.245  ************************************
00:14:25.245   16:58:17	-- bdev/bdev_raid.sh@289 -- # return 0
00:14:25.245  
00:14:25.245  real	0m8.277s
00:14:25.245  user	0m14.623s
00:14:25.245  sys	0m1.394s
00:14:25.245   16:58:17	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:14:25.245   16:58:17	-- common/autotest_common.sh@10 -- # set +x
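The kill/wait pair that closed this test is the harness's killprocess helper. Condensed from the trace (pid check, a guard against signalling sudo, SIGTERM, reap), with the same pid this run used:

killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1
    kill -0 "$pid" || return 1                        # still running?
    # the harness refuses to signal a process whose comm is "sudo"
    [ "$(ps --no-headers -o comm= "$pid")" != sudo ] || return 1
    echo "killing process with pid $pid"
    kill "$pid"    # default SIGTERM; bdev_svc shuts the raid module down
    wait "$pid"
}

killprocess 124421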
00:14:25.245   16:58:17	-- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 2 true
00:14:25.245   16:58:17	-- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']'
00:14:25.245   16:58:17	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:14:25.245   16:58:17	-- common/autotest_common.sh@10 -- # set +x
00:14:25.245  ************************************
00:14:25.245  START TEST raid_state_function_test_sb
00:14:25.245  ************************************
00:14:25.245   16:58:18	-- common/autotest_common.sh@1114 -- # raid_state_function_test raid1 2 true
00:14:25.245   16:58:18	-- bdev/bdev_raid.sh@202 -- # local raid_level=raid1
00:14:25.245   16:58:18	-- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2
00:14:25.245   16:58:18	-- bdev/bdev_raid.sh@204 -- # local superblock=true
00:14:25.245   16:58:18	-- bdev/bdev_raid.sh@205 -- # local raid_bdev
00:14:25.245    16:58:18	-- bdev/bdev_raid.sh@206 -- # (( i = 1 ))
00:14:25.245    16:58:18	-- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:14:25.245    16:58:18	-- bdev/bdev_raid.sh@206 -- # echo BaseBdev1
00:14:25.245    16:58:18	-- bdev/bdev_raid.sh@206 -- # (( i++ ))
00:14:25.245    16:58:18	-- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:14:25.245    16:58:18	-- bdev/bdev_raid.sh@206 -- # echo BaseBdev2
00:14:25.245    16:58:18	-- bdev/bdev_raid.sh@206 -- # (( i++ ))
00:14:25.245    16:58:18	-- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:14:25.245   16:58:18	-- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2')
00:14:25.245   16:58:18	-- bdev/bdev_raid.sh@206 -- # local base_bdevs
00:14:25.245   16:58:18	-- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid
00:14:25.245   16:58:18	-- bdev/bdev_raid.sh@208 -- # local strip_size
00:14:25.245   16:58:18	-- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg
00:14:25.245   16:58:18	-- bdev/bdev_raid.sh@210 -- # local superblock_create_arg
00:14:25.245   16:58:18	-- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']'
00:14:25.245   16:58:18	-- bdev/bdev_raid.sh@216 -- # strip_size=0
00:14:25.245   16:58:18	-- bdev/bdev_raid.sh@219 -- # '[' true = true ']'
00:14:25.245   16:58:18	-- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s
00:14:25.245   16:58:18	-- bdev/bdev_raid.sh@226 -- # raid_pid=124723
00:14:25.245   16:58:18	-- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 124723'
00:14:25.245   16:58:18	-- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid
00:14:25.245  Process raid pid: 124723
00:14:25.245   16:58:18	-- bdev/bdev_raid.sh@228 -- # waitforlisten 124723 /var/tmp/spdk-raid.sock
00:14:25.245   16:58:18	-- common/autotest_common.sh@829 -- # '[' -z 124723 ']'
00:14:25.245   16:58:18	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock
00:14:25.245   16:58:18	-- common/autotest_common.sh@834 -- # local max_retries=100
00:14:25.245   16:58:18	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...'
00:14:25.245  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...
00:14:25.245   16:58:18	-- common/autotest_common.sh@838 -- # xtrace_disable
00:14:25.245   16:58:18	-- common/autotest_common.sh@10 -- # set +x
00:14:25.245  [2024-11-19 16:58:18.086585] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:14:25.245  [2024-11-19 16:58:18.087053] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:14:25.504  [2024-11-19 16:58:18.241642] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:14:25.504  [2024-11-19 16:58:18.282044] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:14:25.504  [2024-11-19 16:58:18.323272] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
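The three notices above are the bdev_svc target coming up for the superblock-enabled variant. A sketch of the launch-and-wait sequence the trace implies, with the binary, flags, and socket taken from the trace; the rpc_get_methods readiness probe is an assumption, since waitforlisten's own retry logic is not expanded in this log:

svc=/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc
sock=/var/tmp/spdk-raid.sock
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Start the stub app with the bdev_raid debug log flag, as in the trace.
"$svc" -r "$sock" -i 0 -L bdev_raid &
raid_pid=$!
echo "Process raid pid: $raid_pid"

# Poll until the RPC socket answers (assumed probe; any cheap RPC works).
until "$rpc" -s "$sock" -t 1 rpc_get_methods > /dev/null 2>&1; do
    sleep 0.1
done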
00:14:26.439   16:58:18	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:14:26.439   16:58:18	-- common/autotest_common.sh@862 -- # return 0
00:14:26.439   16:58:18	-- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid
00:14:26.439  [2024-11-19 16:58:19.160198] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:14:26.439  [2024-11-19 16:58:19.160479] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:14:26.439  [2024-11-19 16:58:19.160563] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:14:26.439  [2024-11-19 16:58:19.160612] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:14:26.439   16:58:19	-- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2
00:14:26.439   16:58:19	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:14:26.439   16:58:19	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:14:26.439   16:58:19	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:14:26.439   16:58:19	-- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:14:26.439   16:58:19	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2
00:14:26.439   16:58:19	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:14:26.439   16:58:19	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:14:26.439   16:58:19	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:14:26.439   16:58:19	-- bdev/bdev_raid.sh@125 -- # local tmp
00:14:26.439    16:58:19	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:14:26.439    16:58:19	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:14:26.697   16:58:19	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:14:26.697    "name": "Existed_Raid",
00:14:26.697    "uuid": "5de86e96-fa24-4b0e-91e8-399b63d78b6c",
00:14:26.697    "strip_size_kb": 0,
00:14:26.697    "state": "configuring",
00:14:26.697    "raid_level": "raid1",
00:14:26.697    "superblock": true,
00:14:26.697    "num_base_bdevs": 2,
00:14:26.697    "num_base_bdevs_discovered": 0,
00:14:26.697    "num_base_bdevs_operational": 2,
00:14:26.697    "base_bdevs_list": [
00:14:26.697      {
00:14:26.697        "name": "BaseBdev1",
00:14:26.697        "uuid": "00000000-0000-0000-0000-000000000000",
00:14:26.697        "is_configured": false,
00:14:26.697        "data_offset": 0,
00:14:26.697        "data_size": 0
00:14:26.697      },
00:14:26.697      {
00:14:26.697        "name": "BaseBdev2",
00:14:26.697        "uuid": "00000000-0000-0000-0000-000000000000",
00:14:26.697        "is_configured": false,
00:14:26.697        "data_offset": 0,
00:14:26.697        "data_size": 0
00:14:26.697      }
00:14:26.697    ]
00:14:26.697  }'
00:14:26.697   16:58:19	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:14:26.697   16:58:19	-- common/autotest_common.sh@10 -- # set +x
00:14:27.264   16:58:19	-- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid
00:14:27.522  [2024-11-19 16:58:20.124265] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:14:27.522  [2024-11-19 16:58:20.124446] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005480 name Existed_Raid, state configuring
00:14:27.522   16:58:20	-- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid
00:14:27.522  [2024-11-19 16:58:20.372335] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:14:27.522  [2024-11-19 16:58:20.372570] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:14:27.522  [2024-11-19 16:58:20.372658] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:14:27.522  [2024-11-19 16:58:20.372711] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:14:27.781   16:58:20	-- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1
00:14:27.781  [2024-11-19 16:58:20.625412] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:14:27.781  BaseBdev1
00:14:28.040   16:58:20	-- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1
00:14:28.040   16:58:20	-- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1
00:14:28.040   16:58:20	-- common/autotest_common.sh@898 -- # local bdev_timeout=
00:14:28.040   16:58:20	-- common/autotest_common.sh@899 -- # local i
00:14:28.040   16:58:20	-- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:14:28.040   16:58:20	-- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:14:28.040   16:58:20	-- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:14:28.040   16:58:20	-- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000
00:14:28.299  [
00:14:28.299    {
00:14:28.299      "name": "BaseBdev1",
00:14:28.299      "aliases": [
00:14:28.299        "5977ba00-9b3b-4030-8f58-7c86c6114756"
00:14:28.299      ],
00:14:28.299      "product_name": "Malloc disk",
00:14:28.299      "block_size": 512,
00:14:28.299      "num_blocks": 65536,
00:14:28.299      "uuid": "5977ba00-9b3b-4030-8f58-7c86c6114756",
00:14:28.299      "assigned_rate_limits": {
00:14:28.299        "rw_ios_per_sec": 0,
00:14:28.299        "rw_mbytes_per_sec": 0,
00:14:28.299        "r_mbytes_per_sec": 0,
00:14:28.299        "w_mbytes_per_sec": 0
00:14:28.299      },
00:14:28.299      "claimed": true,
00:14:28.299      "claim_type": "exclusive_write",
00:14:28.299      "zoned": false,
00:14:28.299      "supported_io_types": {
00:14:28.299        "read": true,
00:14:28.299        "write": true,
00:14:28.299        "unmap": true,
00:14:28.299        "write_zeroes": true,
00:14:28.299        "flush": true,
00:14:28.299        "reset": true,
00:14:28.299        "compare": false,
00:14:28.299        "compare_and_write": false,
00:14:28.299        "abort": true,
00:14:28.299        "nvme_admin": false,
00:14:28.299        "nvme_io": false
00:14:28.299      },
00:14:28.299      "memory_domains": [
00:14:28.299        {
00:14:28.299          "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:14:28.299          "dma_device_type": 2
00:14:28.299        }
00:14:28.299      ],
00:14:28.299      "driver_specific": {}
00:14:28.299    }
00:14:28.299  ]
00:14:28.299   16:58:21	-- common/autotest_common.sh@905 -- # return 0
00:14:28.299   16:58:21	-- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2
00:14:28.299   16:58:21	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:14:28.299   16:58:21	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:14:28.299   16:58:21	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:14:28.299   16:58:21	-- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:14:28.299   16:58:21	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2
00:14:28.299   16:58:21	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:14:28.299   16:58:21	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:14:28.299   16:58:21	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:14:28.299   16:58:21	-- bdev/bdev_raid.sh@125 -- # local tmp
00:14:28.299    16:58:21	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:14:28.299    16:58:21	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:14:28.557   16:58:21	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:14:28.557    "name": "Existed_Raid",
00:14:28.557    "uuid": "bd76f6cf-51c1-43e9-b34d-d6902f2ac40c",
00:14:28.557    "strip_size_kb": 0,
00:14:28.557    "state": "configuring",
00:14:28.557    "raid_level": "raid1",
00:14:28.557    "superblock": true,
00:14:28.557    "num_base_bdevs": 2,
00:14:28.557    "num_base_bdevs_discovered": 1,
00:14:28.557    "num_base_bdevs_operational": 2,
00:14:28.557    "base_bdevs_list": [
00:14:28.557      {
00:14:28.557        "name": "BaseBdev1",
00:14:28.557        "uuid": "5977ba00-9b3b-4030-8f58-7c86c6114756",
00:14:28.557        "is_configured": true,
00:14:28.557        "data_offset": 2048,
00:14:28.557        "data_size": 63488
00:14:28.557      },
00:14:28.557      {
00:14:28.557        "name": "BaseBdev2",
00:14:28.557        "uuid": "00000000-0000-0000-0000-000000000000",
00:14:28.557        "is_configured": false,
00:14:28.557        "data_offset": 0,
00:14:28.557        "data_size": 0
00:14:28.557      }
00:14:28.557    ]
00:14:28.557  }'
00:14:28.557   16:58:21	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:14:28.557   16:58:21	-- common/autotest_common.sh@10 -- # set +x
00:14:29.126   16:58:21	-- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid
00:14:29.126  [2024-11-19 16:58:21.949662] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:14:29.126  [2024-11-19 16:58:21.949900] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005780 name Existed_Raid, state configuring
00:14:29.126   16:58:21	-- bdev/bdev_raid.sh@244 -- # '[' true = true ']'
00:14:29.126   16:58:21	-- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1
00:14:29.385   16:58:22	-- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1
00:14:29.645  BaseBdev1
00:14:29.645   16:58:22	-- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1
00:14:29.645   16:58:22	-- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1
00:14:29.645   16:58:22	-- common/autotest_common.sh@898 -- # local bdev_timeout=
00:14:29.645   16:58:22	-- common/autotest_common.sh@899 -- # local i
00:14:29.645   16:58:22	-- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:14:29.645   16:58:22	-- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:14:29.645   16:58:22	-- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:14:29.905   16:58:22	-- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000
00:14:29.905  [
00:14:29.905    {
00:14:29.905      "name": "BaseBdev1",
00:14:29.905      "aliases": [
00:14:29.905        "2bd9694c-943c-4955-bdd7-8a597a87f004"
00:14:29.905      ],
00:14:29.905      "product_name": "Malloc disk",
00:14:29.905      "block_size": 512,
00:14:29.905      "num_blocks": 65536,
00:14:29.905      "uuid": "2bd9694c-943c-4955-bdd7-8a597a87f004",
00:14:29.905      "assigned_rate_limits": {
00:14:29.905        "rw_ios_per_sec": 0,
00:14:29.905        "rw_mbytes_per_sec": 0,
00:14:29.905        "r_mbytes_per_sec": 0,
00:14:29.905        "w_mbytes_per_sec": 0
00:14:29.905      },
00:14:29.905      "claimed": false,
00:14:29.905      "zoned": false,
00:14:29.905      "supported_io_types": {
00:14:29.905        "read": true,
00:14:29.905        "write": true,
00:14:29.905        "unmap": true,
00:14:29.905        "write_zeroes": true,
00:14:29.905        "flush": true,
00:14:29.905        "reset": true,
00:14:29.905        "compare": false,
00:14:29.905        "compare_and_write": false,
00:14:29.905        "abort": true,
00:14:29.905        "nvme_admin": false,
00:14:29.905        "nvme_io": false
00:14:29.905      },
00:14:29.905      "memory_domains": [
00:14:29.905        {
00:14:29.905          "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:14:29.905          "dma_device_type": 2
00:14:29.905        }
00:14:29.905      ],
00:14:29.905      "driver_specific": {}
00:14:29.905    }
00:14:29.905  ]
00:14:29.905   16:58:22	-- common/autotest_common.sh@905 -- # return 0
00:14:29.905   16:58:22	-- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid
00:14:30.174  [2024-11-19 16:58:22.954574] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:14:30.174  [2024-11-19 16:58:22.956854] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:14:30.174  [2024-11-19 16:58:22.957013] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:14:30.174   16:58:22	-- bdev/bdev_raid.sh@254 -- # (( i = 1 ))
00:14:30.174   16:58:22	-- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs ))
00:14:30.174   16:58:22	-- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2
00:14:30.174   16:58:22	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:14:30.174   16:58:22	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:14:30.174   16:58:22	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:14:30.174   16:58:22	-- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:14:30.174   16:58:22	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2
00:14:30.174   16:58:22	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:14:30.174   16:58:22	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:14:30.174   16:58:22	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:14:30.174   16:58:22	-- bdev/bdev_raid.sh@125 -- # local tmp
00:14:30.174    16:58:22	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:14:30.174    16:58:22	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:14:30.454   16:58:23	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:14:30.454    "name": "Existed_Raid",
00:14:30.454    "uuid": "8478c5ab-7094-4445-99ae-4f4bfde5c6b6",
00:14:30.454    "strip_size_kb": 0,
00:14:30.454    "state": "configuring",
00:14:30.454    "raid_level": "raid1",
00:14:30.454    "superblock": true,
00:14:30.454    "num_base_bdevs": 2,
00:14:30.454    "num_base_bdevs_discovered": 1,
00:14:30.454    "num_base_bdevs_operational": 2,
00:14:30.454    "base_bdevs_list": [
00:14:30.454      {
00:14:30.454        "name": "BaseBdev1",
00:14:30.454        "uuid": "2bd9694c-943c-4955-bdd7-8a597a87f004",
00:14:30.454        "is_configured": true,
00:14:30.454        "data_offset": 2048,
00:14:30.454        "data_size": 63488
00:14:30.454      },
00:14:30.454      {
00:14:30.454        "name": "BaseBdev2",
00:14:30.454        "uuid": "00000000-0000-0000-0000-000000000000",
00:14:30.454        "is_configured": false,
00:14:30.454        "data_offset": 0,
00:14:30.454        "data_size": 0
00:14:30.454      }
00:14:30.454    ]
00:14:30.454  }'
00:14:30.454   16:58:23	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:14:30.454   16:58:23	-- common/autotest_common.sh@10 -- # set +x
00:14:31.024   16:58:23	-- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2
00:14:31.282  [2024-11-19 16:58:23.966500] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:14:31.282  [2024-11-19 16:58:23.967064] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006680
00:14:31.282  [2024-11-19 16:58:23.967245] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:14:31.282  [2024-11-19 16:58:23.967499] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002050
00:14:31.282  BaseBdev2
00:14:31.282  [2024-11-19 16:58:23.968263] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006680
00:14:31.282  [2024-11-19 16:58:23.968437] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006680
00:14:31.282  [2024-11-19 16:58:23.968831] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:14:31.282   16:58:23	-- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2
00:14:31.282   16:58:23	-- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2
00:14:31.282   16:58:23	-- common/autotest_common.sh@898 -- # local bdev_timeout=
00:14:31.282   16:58:23	-- common/autotest_common.sh@899 -- # local i
00:14:31.282   16:58:23	-- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:14:31.282   16:58:23	-- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:14:31.282   16:58:23	-- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:14:31.541   16:58:24	-- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000
00:14:31.541  [
00:14:31.541    {
00:14:31.541      "name": "BaseBdev2",
00:14:31.541      "aliases": [
00:14:31.541        "4ebebe24-4e41-438e-be83-6d85072a975e"
00:14:31.541      ],
00:14:31.541      "product_name": "Malloc disk",
00:14:31.541      "block_size": 512,
00:14:31.541      "num_blocks": 65536,
00:14:31.541      "uuid": "4ebebe24-4e41-438e-be83-6d85072a975e",
00:14:31.541      "assigned_rate_limits": {
00:14:31.541        "rw_ios_per_sec": 0,
00:14:31.541        "rw_mbytes_per_sec": 0,
00:14:31.541        "r_mbytes_per_sec": 0,
00:14:31.541        "w_mbytes_per_sec": 0
00:14:31.541      },
00:14:31.541      "claimed": true,
00:14:31.541      "claim_type": "exclusive_write",
00:14:31.541      "zoned": false,
00:14:31.541      "supported_io_types": {
00:14:31.541        "read": true,
00:14:31.541        "write": true,
00:14:31.541        "unmap": true,
00:14:31.541        "write_zeroes": true,
00:14:31.541        "flush": true,
00:14:31.541        "reset": true,
00:14:31.541        "compare": false,
00:14:31.541        "compare_and_write": false,
00:14:31.541        "abort": true,
00:14:31.541        "nvme_admin": false,
00:14:31.541        "nvme_io": false
00:14:31.541      },
00:14:31.541      "memory_domains": [
00:14:31.541        {
00:14:31.541          "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:14:31.541          "dma_device_type": 2
00:14:31.541        }
00:14:31.541      ],
00:14:31.541      "driver_specific": {}
00:14:31.541    }
00:14:31.541  ]
00:14:31.541   16:58:24	-- common/autotest_common.sh@905 -- # return 0
00:14:31.541   16:58:24	-- bdev/bdev_raid.sh@254 -- # (( i++ ))
00:14:31.541   16:58:24	-- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs ))
00:14:31.541   16:58:24	-- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2
00:14:31.541   16:58:24	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:14:31.541   16:58:24	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:14:31.541   16:58:24	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:14:31.541   16:58:24	-- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:14:31.541   16:58:24	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2
00:14:31.541   16:58:24	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:14:31.541   16:58:24	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:14:31.541   16:58:24	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:14:31.541   16:58:24	-- bdev/bdev_raid.sh@125 -- # local tmp
00:14:31.541    16:58:24	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:14:31.541    16:58:24	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:14:31.798   16:58:24	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:14:31.798    "name": "Existed_Raid",
00:14:31.798    "uuid": "8478c5ab-7094-4445-99ae-4f4bfde5c6b6",
00:14:31.798    "strip_size_kb": 0,
00:14:31.798    "state": "online",
00:14:31.798    "raid_level": "raid1",
00:14:31.798    "superblock": true,
00:14:31.798    "num_base_bdevs": 2,
00:14:31.798    "num_base_bdevs_discovered": 2,
00:14:31.798    "num_base_bdevs_operational": 2,
00:14:31.798    "base_bdevs_list": [
00:14:31.798      {
00:14:31.798        "name": "BaseBdev1",
00:14:31.798        "uuid": "2bd9694c-943c-4955-bdd7-8a597a87f004",
00:14:31.798        "is_configured": true,
00:14:31.798        "data_offset": 2048,
00:14:31.798        "data_size": 63488
00:14:31.798      },
00:14:31.798      {
00:14:31.798        "name": "BaseBdev2",
00:14:31.798        "uuid": "4ebebe24-4e41-438e-be83-6d85072a975e",
00:14:31.798        "is_configured": true,
00:14:31.798        "data_offset": 2048,
00:14:31.798        "data_size": 63488
00:14:31.798      }
00:14:31.798    ]
00:14:31.798  }'
00:14:31.798   16:58:24	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:14:31.798   16:58:24	-- common/autotest_common.sh@10 -- # set +x
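This verify step confirms the -s variant reaches "online" once both base bdevs exist. The whole construction, condensed from the RPCs in the trace (the harness issues bdev_raid_create first and lets the array assemble as the malloc bdevs appear; creating the bdevs first, as below, works as well):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-raid.sock

"$rpc" -s "$sock" bdev_malloc_create 32 512 -b BaseBdev1   # 65536 x 512B
"$rpc" -s "$sock" bdev_malloc_create 32 512 -b BaseBdev2
"$rpc" -s "$sock" bdev_raid_create -s -r raid1 \
    -b 'BaseBdev1 BaseBdev2' -n Existed_Raid

# With -s, 2048 of the 65536 blocks are reserved for the superblock,
# which is why the JSON above reports data_offset 2048 / data_size 63488.
"$rpc" -s "$sock" bdev_raid_get_bdevs all \
    | jq -r '.[] | select(.name == "Existed_Raid") | .state'   # -> online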
00:14:32.364   16:58:25	-- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1
00:14:32.622  [2024-11-19 16:58:25.414837] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:14:32.622   16:58:25	-- bdev/bdev_raid.sh@263 -- # local expected_state
00:14:32.622   16:58:25	-- bdev/bdev_raid.sh@264 -- # has_redundancy raid1
00:14:32.622   16:58:25	-- bdev/bdev_raid.sh@195 -- # case $1 in
00:14:32.622   16:58:25	-- bdev/bdev_raid.sh@196 -- # return 0
00:14:32.622   16:58:25	-- bdev/bdev_raid.sh@267 -- # expected_state=online
00:14:32.622   16:58:25	-- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1
00:14:32.622   16:58:25	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:14:32.622   16:58:25	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:14:32.622   16:58:25	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:14:32.622   16:58:25	-- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:14:32.622   16:58:25	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1
00:14:32.622   16:58:25	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:14:32.622   16:58:25	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:14:32.622   16:58:25	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:14:32.622   16:58:25	-- bdev/bdev_raid.sh@125 -- # local tmp
00:14:32.622    16:58:25	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:14:32.622    16:58:25	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:14:32.880   16:58:25	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:14:32.880    "name": "Existed_Raid",
00:14:32.880    "uuid": "8478c5ab-7094-4445-99ae-4f4bfde5c6b6",
00:14:32.880    "strip_size_kb": 0,
00:14:32.880    "state": "online",
00:14:32.880    "raid_level": "raid1",
00:14:32.880    "superblock": true,
00:14:32.880    "num_base_bdevs": 2,
00:14:32.880    "num_base_bdevs_discovered": 1,
00:14:32.880    "num_base_bdevs_operational": 1,
00:14:32.880    "base_bdevs_list": [
00:14:32.880      {
00:14:32.880        "name": null,
00:14:32.880        "uuid": "00000000-0000-0000-0000-000000000000",
00:14:32.880        "is_configured": false,
00:14:32.880        "data_offset": 2048,
00:14:32.880        "data_size": 63488
00:14:32.880      },
00:14:32.880      {
00:14:32.880        "name": "BaseBdev2",
00:14:32.880        "uuid": "4ebebe24-4e41-438e-be83-6d85072a975e",
00:14:32.880        "is_configured": true,
00:14:32.880        "data_offset": 2048,
00:14:32.880        "data_size": 63488
00:14:32.880      }
00:14:32.880    ]
00:14:32.880  }'
00:14:32.880   16:58:25	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:14:32.880   16:58:25	-- common/autotest_common.sh@10 -- # set +x
00:14:33.446   16:58:26	-- bdev/bdev_raid.sh@273 -- # (( i = 1 ))
00:14:33.446   16:58:26	-- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs ))
00:14:33.446    16:58:26	-- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:14:33.446    16:58:26	-- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]'
00:14:33.704   16:58:26	-- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid
00:14:33.704   16:58:26	-- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:14:33.704   16:58:26	-- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2
00:14:33.962  [2024-11-19 16:58:26.773875] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:14:33.962  [2024-11-19 16:58:26.774026] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:14:33.962  [2024-11-19 16:58:26.774242] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:14:33.962  [2024-11-19 16:58:26.786110] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:14:33.962  [2024-11-19 16:58:26.786258] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state offline
00:14:33.962   16:58:26	-- bdev/bdev_raid.sh@273 -- # (( i++ ))
00:14:33.962   16:58:26	-- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs ))
00:14:33.962    16:58:26	-- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:14:33.962    16:58:26	-- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)'
00:14:34.221   16:58:26	-- bdev/bdev_raid.sh@281 -- # raid_bdev=
00:14:34.221   16:58:26	-- bdev/bdev_raid.sh@282 -- # '[' -n '' ']'
00:14:34.221   16:58:26	-- bdev/bdev_raid.sh@287 -- # killprocess 124723
00:14:34.221   16:58:26	-- common/autotest_common.sh@936 -- # '[' -z 124723 ']'
00:14:34.221   16:58:26	-- common/autotest_common.sh@940 -- # kill -0 124723
00:14:34.221    16:58:26	-- common/autotest_common.sh@941 -- # uname
00:14:34.221   16:58:26	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:14:34.221    16:58:26	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 124723
00:14:34.221   16:58:27	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:14:34.221   16:58:27	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:14:34.221   16:58:27	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 124723'
00:14:34.221  killing process with pid 124723
00:14:34.221   16:58:27	-- common/autotest_common.sh@955 -- # kill 124723
00:14:34.221  [2024-11-19 16:58:27.012444] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:14:34.221  [2024-11-19 16:58:27.012504] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:14:34.221   16:58:27	-- common/autotest_common.sh@960 -- # wait 124723
00:14:34.479   16:58:27	-- bdev/bdev_raid.sh@289 -- # return 0
00:14:34.479  
00:14:34.479  real	0m9.240s
00:14:34.479  user	0m16.329s
00:14:34.479  ************************************
00:14:34.479  END TEST raid_state_function_test_sb
00:14:34.479  ************************************
00:14:34.479  sys	0m1.584s
00:14:34.479   16:58:27	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:14:34.479   16:58:27	-- common/autotest_common.sh@10 -- # set +x
00:14:34.479   16:58:27	-- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid1 2
00:14:34.479   16:58:27	-- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']'
00:14:34.479   16:58:27	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:14:34.479   16:58:27	-- common/autotest_common.sh@10 -- # set +x
00:14:34.479  ************************************
00:14:34.479  START TEST raid_superblock_test
00:14:34.479  ************************************
00:14:34.479   16:58:27	-- common/autotest_common.sh@1114 -- # raid_superblock_test raid1 2
00:14:34.479   16:58:27	-- bdev/bdev_raid.sh@338 -- # local raid_level=raid1
00:14:34.479   16:58:27	-- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=2
00:14:34.479   16:58:27	-- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=()
00:14:34.479   16:58:27	-- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc
00:14:34.479   16:58:27	-- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=()
00:14:34.738   16:58:27	-- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt
00:14:34.738   16:58:27	-- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=()
00:14:34.738   16:58:27	-- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid
00:14:34.738   16:58:27	-- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1
00:14:34.738   16:58:27	-- bdev/bdev_raid.sh@344 -- # local strip_size
00:14:34.738   16:58:27	-- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg
00:14:34.738   16:58:27	-- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid
00:14:34.738   16:58:27	-- bdev/bdev_raid.sh@347 -- # local raid_bdev
00:14:34.738   16:58:27	-- bdev/bdev_raid.sh@349 -- # '[' raid1 '!=' raid1 ']'
00:14:34.738   16:58:27	-- bdev/bdev_raid.sh@353 -- # strip_size=0
00:14:34.738   16:58:27	-- bdev/bdev_raid.sh@357 -- # raid_pid=125035
00:14:34.738   16:58:27	-- bdev/bdev_raid.sh@358 -- # waitforlisten 125035 /var/tmp/spdk-raid.sock
00:14:34.738   16:58:27	-- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid
00:14:34.738   16:58:27	-- common/autotest_common.sh@829 -- # '[' -z 125035 ']'
00:14:34.738   16:58:27	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock
00:14:34.738   16:58:27	-- common/autotest_common.sh@834 -- # local max_retries=100
00:14:34.738   16:58:27	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...'
00:14:34.738  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...
00:14:34.738   16:58:27	-- common/autotest_common.sh@838 -- # xtrace_disable
00:14:34.738   16:58:27	-- common/autotest_common.sh@10 -- # set +x
00:14:34.738  [2024-11-19 16:58:27.385782] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:14:34.738  [2024-11-19 16:58:27.386151] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid125035 ]
00:14:34.738  [2024-11-19 16:58:27.527659] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:14:34.738  [2024-11-19 16:58:27.567152] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:14:34.996  [2024-11-19 16:58:27.608001] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:14:35.562   16:58:28	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:14:35.562   16:58:28	-- common/autotest_common.sh@862 -- # return 0
00:14:35.562   16:58:28	-- bdev/bdev_raid.sh@361 -- # (( i = 1 ))
00:14:35.562   16:58:28	-- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs ))
00:14:35.562   16:58:28	-- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1
00:14:35.562   16:58:28	-- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1
00:14:35.562   16:58:28	-- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001
00:14:35.562   16:58:28	-- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc)
00:14:35.562   16:58:28	-- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt)
00:14:35.562   16:58:28	-- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:14:35.562   16:58:28	-- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1
00:14:35.562  malloc1
00:14:35.562   16:58:28	-- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:14:35.820  [2024-11-19 16:58:28.521829] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:14:35.820  [2024-11-19 16:58:28.522098] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:14:35.820  [2024-11-19 16:58:28.522169] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000005a80
00:14:35.820  [2024-11-19 16:58:28.522284] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:14:35.820  [2024-11-19 16:58:28.524879] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:14:35.820  [2024-11-19 16:58:28.525042] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:14:35.820  pt1
00:14:35.820   16:58:28	-- bdev/bdev_raid.sh@361 -- # (( i++ ))
00:14:35.820   16:58:28	-- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs ))
00:14:35.820   16:58:28	-- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2
00:14:35.820   16:58:28	-- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2
00:14:35.820   16:58:28	-- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002
00:14:35.820   16:58:28	-- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc)
00:14:35.820   16:58:28	-- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt)
00:14:35.820   16:58:28	-- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:14:35.820   16:58:28	-- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2
00:14:36.078  malloc2
00:14:36.078   16:58:28	-- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:14:36.078  [2024-11-19 16:58:28.874586] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:14:36.078  [2024-11-19 16:58:28.874862] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:14:36.078  [2024-11-19 16:58:28.874932] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680
00:14:36.078  [2024-11-19 16:58:28.875040] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:14:36.078  [2024-11-19 16:58:28.877368] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:14:36.078  [2024-11-19 16:58:28.877516] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:14:36.078  pt2
00:14:36.078   16:58:28	-- bdev/bdev_raid.sh@361 -- # (( i++ ))
00:14:36.078   16:58:28	-- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs ))
00:14:36.078   16:58:28	-- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s
00:14:36.336  [2024-11-19 16:58:29.046676] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:14:36.336  [2024-11-19 16:58:29.048884] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:14:36.336  [2024-11-19 16:58:29.049189] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006c80
00:14:36.337  [2024-11-19 16:58:29.049273] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:14:36.337  [2024-11-19 16:58:29.049462] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002120
00:14:36.337  [2024-11-19 16:58:29.049908] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006c80
00:14:36.337  [2024-11-19 16:58:29.050001] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000006c80
00:14:36.337  [2024-11-19 16:58:29.050207] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:14:36.337   16:58:29	-- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:14:36.337   16:58:29	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:14:36.337   16:58:29	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:14:36.337   16:58:29	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:14:36.337   16:58:29	-- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:14:36.337   16:58:29	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2
00:14:36.337   16:58:29	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:14:36.337   16:58:29	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:14:36.337   16:58:29	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:14:36.337   16:58:29	-- bdev/bdev_raid.sh@125 -- # local tmp
00:14:36.337    16:58:29	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:14:36.337    16:58:29	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:36.596   16:58:29	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:14:36.596    "name": "raid_bdev1",
00:14:36.596    "uuid": "21319cce-54e2-4ddb-a41a-f1126dc41f91",
00:14:36.596    "strip_size_kb": 0,
00:14:36.596    "state": "online",
00:14:36.596    "raid_level": "raid1",
00:14:36.596    "superblock": true,
00:14:36.596    "num_base_bdevs": 2,
00:14:36.596    "num_base_bdevs_discovered": 2,
00:14:36.596    "num_base_bdevs_operational": 2,
00:14:36.596    "base_bdevs_list": [
00:14:36.596      {
00:14:36.596        "name": "pt1",
00:14:36.596        "uuid": "76c4ade6-dc25-5960-8781-126000ab25dc",
00:14:36.596        "is_configured": true,
00:14:36.596        "data_offset": 2048,
00:14:36.596        "data_size": 63488
00:14:36.596      },
00:14:36.596      {
00:14:36.596        "name": "pt2",
00:14:36.596        "uuid": "46a2b984-c7d1-5dc8-81f6-fb80495312db",
00:14:36.596        "is_configured": true,
00:14:36.596        "data_offset": 2048,
00:14:36.596        "data_size": 63488
00:14:36.596      }
00:14:36.596    ]
00:14:36.596  }'
00:14:36.596   16:58:29	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:14:36.596   16:58:29	-- common/autotest_common.sh@10 -- # set +x
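raid_bdev1 above is built on passthru bdevs rather than raw mallocs, so the test can later delete the passthru layer while the raid superblocks persist on the underlying malloc disks. The construction, condensed from the trace's RPCs (the UUIDs are the fixed ones the test assigns):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-raid.sock

"$rpc" -s "$sock" bdev_malloc_create 32 512 -b malloc1
"$rpc" -s "$sock" bdev_passthru_create -b malloc1 -p pt1 \
    -u 00000000-0000-0000-0000-000000000001
"$rpc" -s "$sock" bdev_malloc_create 32 512 -b malloc2
"$rpc" -s "$sock" bdev_passthru_create -b malloc2 -p pt2 \
    -u 00000000-0000-0000-0000-000000000002

# -s writes a raid superblock through pt1/pt2 onto malloc1/malloc2.
"$rpc" -s "$sock" bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s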
00:14:37.164    16:58:29	-- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1
00:14:37.164    16:58:29	-- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid'
00:14:37.164  [2024-11-19 16:58:29.930926] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:14:37.164   16:58:29	-- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=21319cce-54e2-4ddb-a41a-f1126dc41f91
00:14:37.164   16:58:29	-- bdev/bdev_raid.sh@380 -- # '[' -z 21319cce-54e2-4ddb-a41a-f1126dc41f91 ']'
00:14:37.164   16:58:29	-- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1
00:14:37.422  [2024-11-19 16:58:30.194798] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:14:37.422  [2024-11-19 16:58:30.194953] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:14:37.422  [2024-11-19 16:58:30.195178] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:14:37.422  [2024-11-19 16:58:30.195362] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:14:37.422  [2024-11-19 16:58:30.195448] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006c80 name raid_bdev1, state offline
00:14:37.422    16:58:30	-- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:14:37.422    16:58:30	-- bdev/bdev_raid.sh@386 -- # jq -r '.[]'
00:14:37.681   16:58:30	-- bdev/bdev_raid.sh@386 -- # raid_bdev=
00:14:37.681   16:58:30	-- bdev/bdev_raid.sh@387 -- # '[' -n '' ']'
00:14:37.681   16:58:30	-- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}"
00:14:37.681   16:58:30	-- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1
00:14:37.940   16:58:30	-- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}"
00:14:37.940   16:58:30	-- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2
00:14:38.199    16:58:30	-- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs
00:14:38.199    16:58:30	-- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any'
00:14:38.199   16:58:31	-- bdev/bdev_raid.sh@395 -- # '[' false == true ']'
00:14:38.199   16:58:31	-- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1
00:14:38.199   16:58:31	-- common/autotest_common.sh@650 -- # local es=0
00:14:38.199   16:58:31	-- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1
00:14:38.199   16:58:31	-- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:14:38.199   16:58:31	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:14:38.199    16:58:31	-- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:14:38.199   16:58:31	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:14:38.199    16:58:31	-- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:14:38.199   16:58:31	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:14:38.199   16:58:31	-- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:14:38.199   16:58:31	-- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]]
00:14:38.199   16:58:31	-- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1
00:14:38.458  [2024-11-19 16:58:31.226980] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed
00:14:38.458  [2024-11-19 16:58:31.229168] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
00:14:38.458  [2024-11-19 16:58:31.229326] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1
00:14:38.458  [2024-11-19 16:58:31.229488] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2
00:14:38.458  [2024-11-19 16:58:31.229606] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:14:38.458  [2024-11-19 16:58:31.229641] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name raid_bdev1, state configuring
00:14:38.458  request:
00:14:38.458  {
00:14:38.458    "name": "raid_bdev1",
00:14:38.458    "raid_level": "raid1",
00:14:38.458    "base_bdevs": [
00:14:38.458      "malloc1",
00:14:38.458      "malloc2"
00:14:38.458    ],
00:14:38.458    "superblock": false,
00:14:38.458    "method": "bdev_raid_create",
00:14:38.458    "req_id": 1
00:14:38.458  }
00:14:38.458  Got JSON-RPC error response
00:14:38.458  response:
00:14:38.458  {
00:14:38.458    "code": -17,
00:14:38.458    "message": "Failed to create RAID bdev raid_bdev1: File exists"
00:14:38.459  }
00:14:38.459   16:58:31	-- common/autotest_common.sh@653 -- # es=1
00:14:38.459   16:58:31	-- common/autotest_common.sh@661 -- # (( es > 128 ))
00:14:38.459   16:58:31	-- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:14:38.459   16:58:31	-- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:14:38.459    16:58:31	-- bdev/bdev_raid.sh@403 -- # jq -r '.[]'
00:14:38.459    16:58:31	-- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:14:38.718   16:58:31	-- bdev/bdev_raid.sh@403 -- # raid_bdev=
00:14:38.718   16:58:31	-- bdev/bdev_raid.sh@404 -- # '[' -n '' ']'
00:14:38.718   16:58:31	-- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:14:38.977  [2024-11-19 16:58:31.582989] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:14:38.977  [2024-11-19 16:58:31.583187] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:14:38.977  [2024-11-19 16:58:31.583248] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880
00:14:38.977  [2024-11-19 16:58:31.583336] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:14:38.977  [2024-11-19 16:58:31.585667] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:14:38.977  [2024-11-19 16:58:31.585806] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:14:38.977  [2024-11-19 16:58:31.585980] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1
00:14:38.977  [2024-11-19 16:58:31.586052] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:14:38.977  pt1
00:14:38.977   16:58:31	-- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2
00:14:38.977   16:58:31	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:14:38.977   16:58:31	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:14:38.977   16:58:31	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:14:38.977   16:58:31	-- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:14:38.977   16:58:31	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2
00:14:38.977   16:58:31	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:14:38.977   16:58:31	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:14:38.977   16:58:31	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:14:38.977   16:58:31	-- bdev/bdev_raid.sh@125 -- # local tmp
00:14:38.977    16:58:31	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:38.977    16:58:31	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:14:38.977   16:58:31	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:14:38.978    "name": "raid_bdev1",
00:14:38.978    "uuid": "21319cce-54e2-4ddb-a41a-f1126dc41f91",
00:14:38.978    "strip_size_kb": 0,
00:14:38.978    "state": "configuring",
00:14:38.978    "raid_level": "raid1",
00:14:38.978    "superblock": true,
00:14:38.978    "num_base_bdevs": 2,
00:14:38.978    "num_base_bdevs_discovered": 1,
00:14:38.978    "num_base_bdevs_operational": 2,
00:14:38.978    "base_bdevs_list": [
00:14:38.978      {
00:14:38.978        "name": "pt1",
00:14:38.978        "uuid": "76c4ade6-dc25-5960-8781-126000ab25dc",
00:14:38.978        "is_configured": true,
00:14:38.978        "data_offset": 2048,
00:14:38.978        "data_size": 63488
00:14:38.978      },
00:14:38.978      {
00:14:38.978        "name": null,
00:14:38.978        "uuid": "46a2b984-c7d1-5dc8-81f6-fb80495312db",
00:14:38.978        "is_configured": false,
00:14:38.978        "data_offset": 2048,
00:14:38.978        "data_size": 63488
00:14:38.978      }
00:14:38.978    ]
00:14:38.978  }'
00:14:38.978   16:58:31	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:14:38.978   16:58:31	-- common/autotest_common.sh@10 -- # set +x
00:14:39.546   16:58:32	-- bdev/bdev_raid.sh@414 -- # '[' 2 -gt 2 ']'
00:14:39.546   16:58:32	-- bdev/bdev_raid.sh@422 -- # (( i = 1 ))
00:14:39.546   16:58:32	-- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs ))
00:14:39.546   16:58:32	-- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:14:39.805  [2024-11-19 16:58:32.567216] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:14:39.805  [2024-11-19 16:58:32.567431] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:14:39.805  [2024-11-19 16:58:32.567499] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180
00:14:39.805  [2024-11-19 16:58:32.567589] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:14:39.805  [2024-11-19 16:58:32.568012] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:14:39.805  [2024-11-19 16:58:32.568186] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:14:39.805  [2024-11-19 16:58:32.568341] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2
00:14:39.805  [2024-11-19 16:58:32.568439] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:14:39.805  [2024-11-19 16:58:32.568597] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007e80
00:14:39.805  [2024-11-19 16:58:32.568706] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:14:39.805  [2024-11-19 16:58:32.568818] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390
00:14:39.805  [2024-11-19 16:58:32.569112] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007e80
00:14:39.805  [2024-11-19 16:58:32.569226] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000007e80
00:14:39.805  [2024-11-19 16:58:32.569392] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:14:39.805  pt2
00:14:39.805   16:58:32	-- bdev/bdev_raid.sh@422 -- # (( i++ ))
00:14:39.805   16:58:32	-- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs ))
00:14:39.805   16:58:32	-- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:14:39.805   16:58:32	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:14:39.805   16:58:32	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:14:39.805   16:58:32	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:14:39.805   16:58:32	-- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:14:39.806   16:58:32	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2
00:14:39.806   16:58:32	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:14:39.806   16:58:32	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:14:39.806   16:58:32	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:14:39.806   16:58:32	-- bdev/bdev_raid.sh@125 -- # local tmp
00:14:39.806    16:58:32	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:14:39.806    16:58:32	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:40.064   16:58:32	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:14:40.064    "name": "raid_bdev1",
00:14:40.064    "uuid": "21319cce-54e2-4ddb-a41a-f1126dc41f91",
00:14:40.064    "strip_size_kb": 0,
00:14:40.064    "state": "online",
00:14:40.064    "raid_level": "raid1",
00:14:40.064    "superblock": true,
00:14:40.064    "num_base_bdevs": 2,
00:14:40.064    "num_base_bdevs_discovered": 2,
00:14:40.064    "num_base_bdevs_operational": 2,
00:14:40.064    "base_bdevs_list": [
00:14:40.064      {
00:14:40.064        "name": "pt1",
00:14:40.064        "uuid": "76c4ade6-dc25-5960-8781-126000ab25dc",
00:14:40.064        "is_configured": true,
00:14:40.064        "data_offset": 2048,
00:14:40.064        "data_size": 63488
00:14:40.064      },
00:14:40.064      {
00:14:40.064        "name": "pt2",
00:14:40.064        "uuid": "46a2b984-c7d1-5dc8-81f6-fb80495312db",
00:14:40.064        "is_configured": true,
00:14:40.064        "data_offset": 2048,
00:14:40.064        "data_size": 63488
00:14:40.064      }
00:14:40.064    ]
00:14:40.064  }'
00:14:40.064   16:58:32	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:14:40.064   16:58:32	-- common/autotest_common.sh@10 -- # set +x
00:14:40.631    16:58:33	-- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1
00:14:40.631    16:58:33	-- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid'
00:14:40.890  [2024-11-19 16:58:33.627570] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:14:40.890   16:58:33	-- bdev/bdev_raid.sh@430 -- # '[' 21319cce-54e2-4ddb-a41a-f1126dc41f91 '!=' 21319cce-54e2-4ddb-a41a-f1126dc41f91 ']'
00:14:40.890   16:58:33	-- bdev/bdev_raid.sh@434 -- # has_redundancy raid1
00:14:40.890   16:58:33	-- bdev/bdev_raid.sh@195 -- # case $1 in
00:14:40.890   16:58:33	-- bdev/bdev_raid.sh@196 -- # return 0
00:14:40.890   16:58:33	-- bdev/bdev_raid.sh@436 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1
00:14:41.149  [2024-11-19 16:58:33.879509] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt1
00:14:41.149   16:58:33	-- bdev/bdev_raid.sh@439 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:14:41.149   16:58:33	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:14:41.149   16:58:33	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:14:41.149   16:58:33	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:14:41.149   16:58:33	-- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:14:41.149   16:58:33	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1
00:14:41.149   16:58:33	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:14:41.149   16:58:33	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:14:41.149   16:58:33	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:14:41.149   16:58:33	-- bdev/bdev_raid.sh@125 -- # local tmp
00:14:41.149    16:58:33	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:14:41.149    16:58:33	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:41.408   16:58:34	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:14:41.408    "name": "raid_bdev1",
00:14:41.408    "uuid": "21319cce-54e2-4ddb-a41a-f1126dc41f91",
00:14:41.408    "strip_size_kb": 0,
00:14:41.408    "state": "online",
00:14:41.408    "raid_level": "raid1",
00:14:41.408    "superblock": true,
00:14:41.408    "num_base_bdevs": 2,
00:14:41.408    "num_base_bdevs_discovered": 1,
00:14:41.408    "num_base_bdevs_operational": 1,
00:14:41.408    "base_bdevs_list": [
00:14:41.408      {
00:14:41.408        "name": null,
00:14:41.408        "uuid": "00000000-0000-0000-0000-000000000000",
00:14:41.408        "is_configured": false,
00:14:41.408        "data_offset": 2048,
00:14:41.408        "data_size": 63488
00:14:41.408      },
00:14:41.408      {
00:14:41.408        "name": "pt2",
00:14:41.408        "uuid": "46a2b984-c7d1-5dc8-81f6-fb80495312db",
00:14:41.408        "is_configured": true,
00:14:41.408        "data_offset": 2048,
00:14:41.408        "data_size": 63488
00:14:41.408      }
00:14:41.408    ]
00:14:41.408  }'
00:14:41.408   16:58:34	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:14:41.408   16:58:34	-- common/autotest_common.sh@10 -- # set +x
00:14:41.976   16:58:34	-- bdev/bdev_raid.sh@442 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1
00:14:41.976  [2024-11-19 16:58:34.775427] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:14:41.976  [2024-11-19 16:58:34.775608] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:14:41.976  [2024-11-19 16:58:34.775836] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:14:41.976  [2024-11-19 16:58:34.775982] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:14:41.976  [2024-11-19 16:58:34.776074] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007e80 name raid_bdev1, state offline
00:14:41.976    16:58:34	-- bdev/bdev_raid.sh@443 -- # jq -r '.[]'
00:14:41.976    16:58:34	-- bdev/bdev_raid.sh@443 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:14:42.235   16:58:35	-- bdev/bdev_raid.sh@443 -- # raid_bdev=
00:14:42.235   16:58:35	-- bdev/bdev_raid.sh@444 -- # '[' -n '' ']'
00:14:42.235   16:58:35	-- bdev/bdev_raid.sh@449 -- # (( i = 1 ))
00:14:42.235   16:58:35	-- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs ))
00:14:42.235   16:58:35	-- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2
00:14:42.494   16:58:35	-- bdev/bdev_raid.sh@449 -- # (( i++ ))
00:14:42.494   16:58:35	-- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs ))
00:14:42.494   16:58:35	-- bdev/bdev_raid.sh@454 -- # (( i = 1 ))
00:14:42.494   16:58:35	-- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 ))
00:14:42.494   16:58:35	-- bdev/bdev_raid.sh@462 -- # i=1
00:14:42.494   16:58:35	-- bdev/bdev_raid.sh@463 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:14:42.753  [2024-11-19 16:58:35.427504] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:14:42.753  [2024-11-19 16:58:35.427830] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:14:42.753  [2024-11-19 16:58:35.427901] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480
00:14:42.753  [2024-11-19 16:58:35.428004] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:14:42.753  [2024-11-19 16:58:35.430692] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:14:42.753  [2024-11-19 16:58:35.430930] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:14:42.753  [2024-11-19 16:58:35.431171] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2
00:14:42.753  [2024-11-19 16:58:35.431381] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:14:42.753  [2024-11-19 16:58:35.431568] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008a80
00:14:42.753  [2024-11-19 16:58:35.431695] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:14:42.753  [2024-11-19 16:58:35.431832] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530
00:14:42.753  [2024-11-19 16:58:35.432403] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008a80
00:14:42.753  [2024-11-19 16:58:35.432567] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008a80
00:14:42.753  [2024-11-19 16:58:35.432863] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:14:42.753  pt2
00:14:42.753   16:58:35	-- bdev/bdev_raid.sh@466 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:14:42.753   16:58:35	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:14:42.753   16:58:35	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:14:42.753   16:58:35	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:14:42.753   16:58:35	-- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:14:42.753   16:58:35	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1
00:14:42.753   16:58:35	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:14:42.753   16:58:35	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:14:42.753   16:58:35	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:14:42.753   16:58:35	-- bdev/bdev_raid.sh@125 -- # local tmp
00:14:42.753    16:58:35	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:14:42.753    16:58:35	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:43.012   16:58:35	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:14:43.012    "name": "raid_bdev1",
00:14:43.012    "uuid": "21319cce-54e2-4ddb-a41a-f1126dc41f91",
00:14:43.012    "strip_size_kb": 0,
00:14:43.012    "state": "online",
00:14:43.012    "raid_level": "raid1",
00:14:43.012    "superblock": true,
00:14:43.012    "num_base_bdevs": 2,
00:14:43.012    "num_base_bdevs_discovered": 1,
00:14:43.012    "num_base_bdevs_operational": 1,
00:14:43.012    "base_bdevs_list": [
00:14:43.012      {
00:14:43.012        "name": null,
00:14:43.012        "uuid": "00000000-0000-0000-0000-000000000000",
00:14:43.012        "is_configured": false,
00:14:43.012        "data_offset": 2048,
00:14:43.012        "data_size": 63488
00:14:43.012      },
00:14:43.012      {
00:14:43.012        "name": "pt2",
00:14:43.012        "uuid": "46a2b984-c7d1-5dc8-81f6-fb80495312db",
00:14:43.012        "is_configured": true,
00:14:43.012        "data_offset": 2048,
00:14:43.012        "data_size": 63488
00:14:43.012      }
00:14:43.012    ]
00:14:43.012  }'
00:14:43.012   16:58:35	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:14:43.012   16:58:35	-- common/autotest_common.sh@10 -- # set +x
00:14:43.579   16:58:36	-- bdev/bdev_raid.sh@468 -- # '[' 2 -gt 2 ']'
00:14:43.579    16:58:36	-- bdev/bdev_raid.sh@506 -- # jq -r '.[] | .uuid'
00:14:43.579    16:58:36	-- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1
00:14:43.579  [2024-11-19 16:58:36.315986] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:14:43.579   16:58:36	-- bdev/bdev_raid.sh@506 -- # '[' 21319cce-54e2-4ddb-a41a-f1126dc41f91 '!=' 21319cce-54e2-4ddb-a41a-f1126dc41f91 ']'
00:14:43.579   16:58:36	-- bdev/bdev_raid.sh@511 -- # killprocess 125035
00:14:43.579   16:58:36	-- common/autotest_common.sh@936 -- # '[' -z 125035 ']'
00:14:43.579   16:58:36	-- common/autotest_common.sh@940 -- # kill -0 125035
00:14:43.579    16:58:36	-- common/autotest_common.sh@941 -- # uname
00:14:43.579   16:58:36	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:14:43.579    16:58:36	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 125035
00:14:43.579   16:58:36	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:14:43.579   16:58:36	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:14:43.579   16:58:36	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 125035'
00:14:43.579  killing process with pid 125035
00:14:43.579   16:58:36	-- common/autotest_common.sh@955 -- # kill 125035
00:14:43.579  [2024-11-19 16:58:36.370250] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:14:43.579  [2024-11-19 16:58:36.370457] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:14:43.579  [2024-11-19 16:58:36.370623] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:14:43.579  [2024-11-19 16:58:36.370700] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008a80 name raid_bdev1, state offline
00:14:43.579   16:58:36	-- common/autotest_common.sh@960 -- # wait 125035
00:14:43.579  [2024-11-19 16:58:36.394609] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:14:43.838   16:58:36	-- bdev/bdev_raid.sh@513 -- # return 0
00:14:43.838  
00:14:43.838  real	0m9.306s
00:14:43.838  user	0m16.766s
00:14:43.838  sys	0m1.448s
00:14:43.838   16:58:36	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:14:43.838   16:58:36	-- common/autotest_common.sh@10 -- # set +x
00:14:43.838  ************************************
00:14:43.838  END TEST raid_superblock_test
00:14:43.838  ************************************
00:14:44.098   16:58:36	-- bdev/bdev_raid.sh@725 -- # for n in {2..4}
00:14:44.098   16:58:36	-- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1
00:14:44.098   16:58:36	-- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false
00:14:44.098   16:58:36	-- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']'
00:14:44.098   16:58:36	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:14:44.098   16:58:36	-- common/autotest_common.sh@10 -- # set +x
00:14:44.098  ************************************
00:14:44.098  START TEST raid_state_function_test
00:14:44.098  ************************************
00:14:44.098   16:58:36	-- common/autotest_common.sh@1114 -- # raid_state_function_test raid0 3 false
00:14:44.098   16:58:36	-- bdev/bdev_raid.sh@202 -- # local raid_level=raid0
00:14:44.098   16:58:36	-- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3
00:14:44.098   16:58:36	-- bdev/bdev_raid.sh@204 -- # local superblock=false
00:14:44.098   16:58:36	-- bdev/bdev_raid.sh@205 -- # local raid_bdev
00:14:44.098    16:58:36	-- bdev/bdev_raid.sh@206 -- # (( i = 1 ))
00:14:44.098    16:58:36	-- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:14:44.098    16:58:36	-- bdev/bdev_raid.sh@206 -- # echo BaseBdev1
00:14:44.098    16:58:36	-- bdev/bdev_raid.sh@206 -- # (( i++ ))
00:14:44.098    16:58:36	-- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:14:44.098    16:58:36	-- bdev/bdev_raid.sh@206 -- # echo BaseBdev2
00:14:44.098    16:58:36	-- bdev/bdev_raid.sh@206 -- # (( i++ ))
00:14:44.098    16:58:36	-- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:14:44.098    16:58:36	-- bdev/bdev_raid.sh@206 -- # echo BaseBdev3
00:14:44.098    16:58:36	-- bdev/bdev_raid.sh@206 -- # (( i++ ))
00:14:44.098    16:58:36	-- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:14:44.098   16:58:36	-- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3')
00:14:44.098   16:58:36	-- bdev/bdev_raid.sh@206 -- # local base_bdevs
00:14:44.098   16:58:36	-- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid
00:14:44.098   16:58:36	-- bdev/bdev_raid.sh@208 -- # local strip_size
00:14:44.098   16:58:36	-- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg
00:14:44.098   16:58:36	-- bdev/bdev_raid.sh@210 -- # local superblock_create_arg
00:14:44.098   16:58:36	-- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']'
00:14:44.098   16:58:36	-- bdev/bdev_raid.sh@213 -- # strip_size=64
00:14:44.098   16:58:36	-- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64'
00:14:44.098   16:58:36	-- bdev/bdev_raid.sh@219 -- # '[' false = true ']'
00:14:44.098   16:58:36	-- bdev/bdev_raid.sh@222 -- # superblock_create_arg=
00:14:44.098   16:58:36	-- bdev/bdev_raid.sh@226 -- # raid_pid=125369
00:14:44.098   16:58:36	-- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 125369'
00:14:44.098  Process raid pid: 125369
00:14:44.098   16:58:36	-- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid
00:14:44.098   16:58:36	-- bdev/bdev_raid.sh@228 -- # waitforlisten 125369 /var/tmp/spdk-raid.sock
00:14:44.098   16:58:36	-- common/autotest_common.sh@829 -- # '[' -z 125369 ']'
00:14:44.098   16:58:36	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock
00:14:44.098   16:58:36	-- common/autotest_common.sh@834 -- # local max_retries=100
00:14:44.098   16:58:36	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...'
00:14:44.098  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...
00:14:44.098   16:58:36	-- common/autotest_common.sh@838 -- # xtrace_disable
00:14:44.098   16:58:36	-- common/autotest_common.sh@10 -- # set +x
00:14:44.098  [2024-11-19 16:58:36.785792] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:14:44.098  [2024-11-19 16:58:36.786209] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:14:44.098  [2024-11-19 16:58:36.944394] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:14:44.357  [2024-11-19 16:58:36.995530] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:14:44.357  [2024-11-19 16:58:37.043457] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:14:44.926   16:58:37	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:14:44.926   16:58:37	-- common/autotest_common.sh@862 -- # return 0
00:14:44.926   16:58:37	-- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
00:14:45.185  [2024-11-19 16:58:37.891547] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:14:45.185  [2024-11-19 16:58:37.891814] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:14:45.185  [2024-11-19 16:58:37.891897] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:14:45.185  [2024-11-19 16:58:37.891946] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:14:45.185  [2024-11-19 16:58:37.891972] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:14:45.185  [2024-11-19 16:58:37.892080] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:14:45.185   16:58:37	-- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:14:45.185   16:58:37	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:14:45.185   16:58:37	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:14:45.185   16:58:37	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid0
00:14:45.185   16:58:37	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:14:45.185   16:58:37	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:14:45.185   16:58:37	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:14:45.185   16:58:37	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:14:45.185   16:58:37	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:14:45.185   16:58:37	-- bdev/bdev_raid.sh@125 -- # local tmp
00:14:45.185    16:58:37	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:14:45.185    16:58:37	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:14:45.444   16:58:38	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:14:45.444    "name": "Existed_Raid",
00:14:45.444    "uuid": "00000000-0000-0000-0000-000000000000",
00:14:45.444    "strip_size_kb": 64,
00:14:45.444    "state": "configuring",
00:14:45.444    "raid_level": "raid0",
00:14:45.444    "superblock": false,
00:14:45.444    "num_base_bdevs": 3,
00:14:45.444    "num_base_bdevs_discovered": 0,
00:14:45.444    "num_base_bdevs_operational": 3,
00:14:45.444    "base_bdevs_list": [
00:14:45.444      {
00:14:45.444        "name": "BaseBdev1",
00:14:45.444        "uuid": "00000000-0000-0000-0000-000000000000",
00:14:45.444        "is_configured": false,
00:14:45.444        "data_offset": 0,
00:14:45.444        "data_size": 0
00:14:45.444      },
00:14:45.444      {
00:14:45.444        "name": "BaseBdev2",
00:14:45.444        "uuid": "00000000-0000-0000-0000-000000000000",
00:14:45.444        "is_configured": false,
00:14:45.444        "data_offset": 0,
00:14:45.444        "data_size": 0
00:14:45.444      },
00:14:45.444      {
00:14:45.444        "name": "BaseBdev3",
00:14:45.444        "uuid": "00000000-0000-0000-0000-000000000000",
00:14:45.444        "is_configured": false,
00:14:45.444        "data_offset": 0,
00:14:45.444        "data_size": 0
00:14:45.444      }
00:14:45.444    ]
00:14:45.444  }'
00:14:45.444   16:58:38	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:14:45.444   16:58:38	-- common/autotest_common.sh@10 -- # set +x
00:14:46.012   16:58:38	-- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid
00:14:46.012  [2024-11-19 16:58:38.839574] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:14:46.012  [2024-11-19 16:58:38.839818] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005480 name Existed_Raid, state configuring
00:14:46.012   16:58:38	-- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
00:14:46.271  [2024-11-19 16:58:39.087666] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:14:46.271  [2024-11-19 16:58:39.087925] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:14:46.271  [2024-11-19 16:58:39.088011] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:14:46.271  [2024-11-19 16:58:39.088063] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:14:46.271  [2024-11-19 16:58:39.088088] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:14:46.271  [2024-11-19 16:58:39.088131] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:14:46.271   16:58:39	-- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1
00:14:46.529  [2024-11-19 16:58:39.260542] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:14:46.529  BaseBdev1
00:14:46.529   16:58:39	-- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1
00:14:46.529   16:58:39	-- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1
00:14:46.529   16:58:39	-- common/autotest_common.sh@898 -- # local bdev_timeout=
00:14:46.529   16:58:39	-- common/autotest_common.sh@899 -- # local i
00:14:46.529   16:58:39	-- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:14:46.529   16:58:39	-- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:14:46.529   16:58:39	-- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:14:46.788   16:58:39	-- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000
00:14:47.046  [
00:14:47.046    {
00:14:47.046      "name": "BaseBdev1",
00:14:47.046      "aliases": [
00:14:47.046        "2e7ce5cf-feb2-4c1e-b9ea-c063809842e5"
00:14:47.046      ],
00:14:47.046      "product_name": "Malloc disk",
00:14:47.046      "block_size": 512,
00:14:47.046      "num_blocks": 65536,
00:14:47.046      "uuid": "2e7ce5cf-feb2-4c1e-b9ea-c063809842e5",
00:14:47.046      "assigned_rate_limits": {
00:14:47.046        "rw_ios_per_sec": 0,
00:14:47.046        "rw_mbytes_per_sec": 0,
00:14:47.046        "r_mbytes_per_sec": 0,
00:14:47.046        "w_mbytes_per_sec": 0
00:14:47.046      },
00:14:47.046      "claimed": true,
00:14:47.046      "claim_type": "exclusive_write",
00:14:47.046      "zoned": false,
00:14:47.046      "supported_io_types": {
00:14:47.046        "read": true,
00:14:47.046        "write": true,
00:14:47.046        "unmap": true,
00:14:47.046        "write_zeroes": true,
00:14:47.046        "flush": true,
00:14:47.046        "reset": true,
00:14:47.046        "compare": false,
00:14:47.046        "compare_and_write": false,
00:14:47.046        "abort": true,
00:14:47.046        "nvme_admin": false,
00:14:47.046        "nvme_io": false
00:14:47.046      },
00:14:47.046      "memory_domains": [
00:14:47.046        {
00:14:47.046          "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:14:47.046          "dma_device_type": 2
00:14:47.046        }
00:14:47.046      ],
00:14:47.046      "driver_specific": {}
00:14:47.046    }
00:14:47.046  ]
00:14:47.046   16:58:39	-- common/autotest_common.sh@905 -- # return 0
00:14:47.046   16:58:39	-- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:14:47.046   16:58:39	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:14:47.046   16:58:39	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:14:47.046   16:58:39	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid0
00:14:47.046   16:58:39	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:14:47.046   16:58:39	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:14:47.046   16:58:39	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:14:47.046   16:58:39	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:14:47.047   16:58:39	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:14:47.047   16:58:39	-- bdev/bdev_raid.sh@125 -- # local tmp
00:14:47.047    16:58:39	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:14:47.047    16:58:39	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:14:47.305   16:58:39	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:14:47.305    "name": "Existed_Raid",
00:14:47.305    "uuid": "00000000-0000-0000-0000-000000000000",
00:14:47.305    "strip_size_kb": 64,
00:14:47.305    "state": "configuring",
00:14:47.305    "raid_level": "raid0",
00:14:47.305    "superblock": false,
00:14:47.305    "num_base_bdevs": 3,
00:14:47.305    "num_base_bdevs_discovered": 1,
00:14:47.305    "num_base_bdevs_operational": 3,
00:14:47.305    "base_bdevs_list": [
00:14:47.306      {
00:14:47.306        "name": "BaseBdev1",
00:14:47.306        "uuid": "2e7ce5cf-feb2-4c1e-b9ea-c063809842e5",
00:14:47.306        "is_configured": true,
00:14:47.306        "data_offset": 0,
00:14:47.306        "data_size": 65536
00:14:47.306      },
00:14:47.306      {
00:14:47.306        "name": "BaseBdev2",
00:14:47.306        "uuid": "00000000-0000-0000-0000-000000000000",
00:14:47.306        "is_configured": false,
00:14:47.306        "data_offset": 0,
00:14:47.306        "data_size": 0
00:14:47.306      },
00:14:47.306      {
00:14:47.306        "name": "BaseBdev3",
00:14:47.306        "uuid": "00000000-0000-0000-0000-000000000000",
00:14:47.306        "is_configured": false,
00:14:47.306        "data_offset": 0,
00:14:47.306        "data_size": 0
00:14:47.306      }
00:14:47.306    ]
00:14:47.306  }'
00:14:47.306   16:58:39	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:14:47.306   16:58:39	-- common/autotest_common.sh@10 -- # set +x
00:14:47.874   16:58:40	-- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid
00:14:48.133  [2024-11-19 16:58:40.752818] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:14:48.133  [2024-11-19 16:58:40.753037] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005780 name Existed_Raid, state configuring
00:14:48.133   16:58:40	-- bdev/bdev_raid.sh@244 -- # '[' false = true ']'
00:14:48.133   16:58:40	-- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
00:14:48.133  [2024-11-19 16:58:40.920920] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:14:48.133  [2024-11-19 16:58:40.923175] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:14:48.133  [2024-11-19 16:58:40.923342] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:14:48.133  [2024-11-19 16:58:40.923428] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:14:48.133  [2024-11-19 16:58:40.923484] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:14:48.133   16:58:40	-- bdev/bdev_raid.sh@254 -- # (( i = 1 ))
00:14:48.133   16:58:40	-- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs ))
00:14:48.133   16:58:40	-- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:14:48.133   16:58:40	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:14:48.133   16:58:40	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:14:48.133   16:58:40	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid0
00:14:48.133   16:58:40	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:14:48.133   16:58:40	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:14:48.133   16:58:40	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:14:48.133   16:58:40	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:14:48.133   16:58:40	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:14:48.133   16:58:40	-- bdev/bdev_raid.sh@125 -- # local tmp
00:14:48.133    16:58:40	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:14:48.133    16:58:40	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:14:48.392   16:58:41	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:14:48.392    "name": "Existed_Raid",
00:14:48.392    "uuid": "00000000-0000-0000-0000-000000000000",
00:14:48.392    "strip_size_kb": 64,
00:14:48.392    "state": "configuring",
00:14:48.392    "raid_level": "raid0",
00:14:48.392    "superblock": false,
00:14:48.392    "num_base_bdevs": 3,
00:14:48.392    "num_base_bdevs_discovered": 1,
00:14:48.392    "num_base_bdevs_operational": 3,
00:14:48.392    "base_bdevs_list": [
00:14:48.392      {
00:14:48.392        "name": "BaseBdev1",
00:14:48.392        "uuid": "2e7ce5cf-feb2-4c1e-b9ea-c063809842e5",
00:14:48.392        "is_configured": true,
00:14:48.392        "data_offset": 0,
00:14:48.392        "data_size": 65536
00:14:48.392      },
00:14:48.392      {
00:14:48.392        "name": "BaseBdev2",
00:14:48.392        "uuid": "00000000-0000-0000-0000-000000000000",
00:14:48.392        "is_configured": false,
00:14:48.392        "data_offset": 0,
00:14:48.392        "data_size": 0
00:14:48.392      },
00:14:48.392      {
00:14:48.392        "name": "BaseBdev3",
00:14:48.392        "uuid": "00000000-0000-0000-0000-000000000000",
00:14:48.392        "is_configured": false,
00:14:48.392        "data_offset": 0,
00:14:48.392        "data_size": 0
00:14:48.392      }
00:14:48.392    ]
00:14:48.392  }'
00:14:48.392   16:58:41	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:14:48.392   16:58:41	-- common/autotest_common.sh@10 -- # set +x
00:14:48.960   16:58:41	-- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2
00:14:49.219  [2024-11-19 16:58:42.054459] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:14:49.219  BaseBdev2
00:14:49.219   16:58:42	-- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2
00:14:49.219   16:58:42	-- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2
00:14:49.219   16:58:42	-- common/autotest_common.sh@898 -- # local bdev_timeout=
00:14:49.219   16:58:42	-- common/autotest_common.sh@899 -- # local i
00:14:49.479   16:58:42	-- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:14:49.479   16:58:42	-- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:14:49.479   16:58:42	-- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:14:49.479   16:58:42	-- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000
00:14:49.739  [
00:14:49.739    {
00:14:49.739      "name": "BaseBdev2",
00:14:49.739      "aliases": [
00:14:49.739        "19a54dc4-0296-4e48-9adf-ca8e7dbee821"
00:14:49.739      ],
00:14:49.739      "product_name": "Malloc disk",
00:14:49.739      "block_size": 512,
00:14:49.739      "num_blocks": 65536,
00:14:49.739      "uuid": "19a54dc4-0296-4e48-9adf-ca8e7dbee821",
00:14:49.739      "assigned_rate_limits": {
00:14:49.739        "rw_ios_per_sec": 0,
00:14:49.739        "rw_mbytes_per_sec": 0,
00:14:49.739        "r_mbytes_per_sec": 0,
00:14:49.739        "w_mbytes_per_sec": 0
00:14:49.739      },
00:14:49.739      "claimed": true,
00:14:49.739      "claim_type": "exclusive_write",
00:14:49.739      "zoned": false,
00:14:49.739      "supported_io_types": {
00:14:49.739        "read": true,
00:14:49.739        "write": true,
00:14:49.739        "unmap": true,
00:14:49.739        "write_zeroes": true,
00:14:49.739        "flush": true,
00:14:49.739        "reset": true,
00:14:49.739        "compare": false,
00:14:49.739        "compare_and_write": false,
00:14:49.739        "abort": true,
00:14:49.739        "nvme_admin": false,
00:14:49.739        "nvme_io": false
00:14:49.739      },
00:14:49.739      "memory_domains": [
00:14:49.739        {
00:14:49.739          "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:14:49.739          "dma_device_type": 2
00:14:49.739        }
00:14:49.739      ],
00:14:49.739      "driver_specific": {}
00:14:49.739    }
00:14:49.739  ]
00:14:49.739   16:58:42	-- common/autotest_common.sh@905 -- # return 0
00:14:49.739   16:58:42	-- bdev/bdev_raid.sh@254 -- # (( i++ ))
00:14:49.739   16:58:42	-- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs ))
00:14:49.739   16:58:42	-- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:14:49.739   16:58:42	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:14:49.739   16:58:42	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:14:49.739   16:58:42	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid0
00:14:49.739   16:58:42	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:14:49.739   16:58:42	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:14:49.739   16:58:42	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:14:49.739   16:58:42	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:14:49.739   16:58:42	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:14:49.739   16:58:42	-- bdev/bdev_raid.sh@125 -- # local tmp
00:14:49.739    16:58:42	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:14:49.739    16:58:42	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:14:50.014   16:58:42	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:14:50.014    "name": "Existed_Raid",
00:14:50.014    "uuid": "00000000-0000-0000-0000-000000000000",
00:14:50.014    "strip_size_kb": 64,
00:14:50.014    "state": "configuring",
00:14:50.014    "raid_level": "raid0",
00:14:50.014    "superblock": false,
00:14:50.014    "num_base_bdevs": 3,
00:14:50.014    "num_base_bdevs_discovered": 2,
00:14:50.014    "num_base_bdevs_operational": 3,
00:14:50.014    "base_bdevs_list": [
00:14:50.014      {
00:14:50.014        "name": "BaseBdev1",
00:14:50.014        "uuid": "2e7ce5cf-feb2-4c1e-b9ea-c063809842e5",
00:14:50.014        "is_configured": true,
00:14:50.014        "data_offset": 0,
00:14:50.014        "data_size": 65536
00:14:50.014      },
00:14:50.014      {
00:14:50.014        "name": "BaseBdev2",
00:14:50.014        "uuid": "19a54dc4-0296-4e48-9adf-ca8e7dbee821",
00:14:50.014        "is_configured": true,
00:14:50.014        "data_offset": 0,
00:14:50.014        "data_size": 65536
00:14:50.014      },
00:14:50.014      {
00:14:50.014        "name": "BaseBdev3",
00:14:50.014        "uuid": "00000000-0000-0000-0000-000000000000",
00:14:50.014        "is_configured": false,
00:14:50.014        "data_offset": 0,
00:14:50.014        "data_size": 0
00:14:50.014      }
00:14:50.014    ]
00:14:50.014  }'
00:14:50.014   16:58:42	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:14:50.014   16:58:42	-- common/autotest_common.sh@10 -- # set +x
00:14:50.597   16:58:43	-- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3
00:14:50.855  [2024-11-19 16:58:43.487856] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:14:50.855  [2024-11-19 16:58:43.488105] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006080
00:14:50.855  [2024-11-19 16:58:43.488148] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512
00:14:50.855  [2024-11-19 16:58:43.488378] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002050
00:14:50.855  [2024-11-19 16:58:43.488883] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006080
00:14:50.855  [2024-11-19 16:58:43.488999] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006080
00:14:50.855  [2024-11-19 16:58:43.489337] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:14:50.855  BaseBdev3
00:14:50.855   16:58:43	-- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3
00:14:50.855   16:58:43	-- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3
00:14:50.855   16:58:43	-- common/autotest_common.sh@898 -- # local bdev_timeout=
00:14:50.855   16:58:43	-- common/autotest_common.sh@899 -- # local i
00:14:50.855   16:58:43	-- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:14:50.855   16:58:43	-- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:14:50.855   16:58:43	-- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:14:50.855   16:58:43	-- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000
00:14:51.114  [
00:14:51.114    {
00:14:51.114      "name": "BaseBdev3",
00:14:51.114      "aliases": [
00:14:51.114        "4fa0cac5-d7e3-463d-8963-16c7e873fba0"
00:14:51.114      ],
00:14:51.114      "product_name": "Malloc disk",
00:14:51.114      "block_size": 512,
00:14:51.114      "num_blocks": 65536,
00:14:51.114      "uuid": "4fa0cac5-d7e3-463d-8963-16c7e873fba0",
00:14:51.114      "assigned_rate_limits": {
00:14:51.114        "rw_ios_per_sec": 0,
00:14:51.114        "rw_mbytes_per_sec": 0,
00:14:51.114        "r_mbytes_per_sec": 0,
00:14:51.114        "w_mbytes_per_sec": 0
00:14:51.114      },
00:14:51.114      "claimed": true,
00:14:51.114      "claim_type": "exclusive_write",
00:14:51.114      "zoned": false,
00:14:51.114      "supported_io_types": {
00:14:51.114        "read": true,
00:14:51.114        "write": true,
00:14:51.114        "unmap": true,
00:14:51.114        "write_zeroes": true,
00:14:51.114        "flush": true,
00:14:51.114        "reset": true,
00:14:51.114        "compare": false,
00:14:51.114        "compare_and_write": false,
00:14:51.114        "abort": true,
00:14:51.114        "nvme_admin": false,
00:14:51.114        "nvme_io": false
00:14:51.114      },
00:14:51.114      "memory_domains": [
00:14:51.114        {
00:14:51.114          "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:14:51.114          "dma_device_type": 2
00:14:51.114        }
00:14:51.114      ],
00:14:51.114      "driver_specific": {}
00:14:51.114    }
00:14:51.114  ]
00:14:51.114   16:58:43	-- common/autotest_common.sh@905 -- # return 0
00:14:51.114   16:58:43	-- bdev/bdev_raid.sh@254 -- # (( i++ ))
00:14:51.114   16:58:43	-- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs ))
00:14:51.114   16:58:43	-- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3
00:14:51.114   16:58:43	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:14:51.114   16:58:43	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:14:51.114   16:58:43	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid0
00:14:51.114   16:58:43	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:14:51.114   16:58:43	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:14:51.114   16:58:43	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:14:51.114   16:58:43	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:14:51.114   16:58:43	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:14:51.114   16:58:43	-- bdev/bdev_raid.sh@125 -- # local tmp
00:14:51.114    16:58:43	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:14:51.114    16:58:43	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:14:51.373   16:58:44	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:14:51.373    "name": "Existed_Raid",
00:14:51.373    "uuid": "dcb60ed3-4a75-44fb-be5d-90d73268c67a",
00:14:51.373    "strip_size_kb": 64,
00:14:51.373    "state": "online",
00:14:51.373    "raid_level": "raid0",
00:14:51.373    "superblock": false,
00:14:51.373    "num_base_bdevs": 3,
00:14:51.373    "num_base_bdevs_discovered": 3,
00:14:51.373    "num_base_bdevs_operational": 3,
00:14:51.373    "base_bdevs_list": [
00:14:51.373      {
00:14:51.373        "name": "BaseBdev1",
00:14:51.373        "uuid": "2e7ce5cf-feb2-4c1e-b9ea-c063809842e5",
00:14:51.373        "is_configured": true,
00:14:51.373        "data_offset": 0,
00:14:51.373        "data_size": 65536
00:14:51.373      },
00:14:51.373      {
00:14:51.373        "name": "BaseBdev2",
00:14:51.373        "uuid": "19a54dc4-0296-4e48-9adf-ca8e7dbee821",
00:14:51.373        "is_configured": true,
00:14:51.373        "data_offset": 0,
00:14:51.373        "data_size": 65536
00:14:51.373      },
00:14:51.373      {
00:14:51.373        "name": "BaseBdev3",
00:14:51.373        "uuid": "4fa0cac5-d7e3-463d-8963-16c7e873fba0",
00:14:51.373        "is_configured": true,
00:14:51.373        "data_offset": 0,
00:14:51.373        "data_size": 65536
00:14:51.373      }
00:14:51.373    ]
00:14:51.373  }'
00:14:51.373   16:58:44	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:14:51.373   16:58:44	-- common/autotest_common.sh@10 -- # set +x
00:14:51.940   16:58:44	-- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1
00:14:52.199  [2024-11-19 16:58:44.940266] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:14:52.199  [2024-11-19 16:58:44.940423] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:14:52.199  [2024-11-19 16:58:44.940664] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:14:52.199   16:58:44	-- bdev/bdev_raid.sh@263 -- # local expected_state
00:14:52.199   16:58:44	-- bdev/bdev_raid.sh@264 -- # has_redundancy raid0
00:14:52.199   16:58:44	-- bdev/bdev_raid.sh@195 -- # case $1 in
00:14:52.199   16:58:44	-- bdev/bdev_raid.sh@197 -- # return 1
00:14:52.199   16:58:44	-- bdev/bdev_raid.sh@265 -- # expected_state=offline
00:14:52.199   16:58:44	-- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2
00:14:52.199   16:58:44	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:14:52.199   16:58:44	-- bdev/bdev_raid.sh@118 -- # local expected_state=offline
00:14:52.199   16:58:44	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid0
00:14:52.199   16:58:44	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:14:52.199   16:58:44	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2
00:14:52.199   16:58:44	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:14:52.199   16:58:44	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:14:52.199   16:58:44	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:14:52.199   16:58:44	-- bdev/bdev_raid.sh@125 -- # local tmp
00:14:52.199    16:58:44	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:14:52.199    16:58:44	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:14:52.458   16:58:45	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:14:52.458    "name": "Existed_Raid",
00:14:52.458    "uuid": "dcb60ed3-4a75-44fb-be5d-90d73268c67a",
00:14:52.458    "strip_size_kb": 64,
00:14:52.458    "state": "offline",
00:14:52.458    "raid_level": "raid0",
00:14:52.458    "superblock": false,
00:14:52.458    "num_base_bdevs": 3,
00:14:52.458    "num_base_bdevs_discovered": 2,
00:14:52.458    "num_base_bdevs_operational": 2,
00:14:52.458    "base_bdevs_list": [
00:14:52.458      {
00:14:52.458        "name": null,
00:14:52.458        "uuid": "00000000-0000-0000-0000-000000000000",
00:14:52.458        "is_configured": false,
00:14:52.458        "data_offset": 0,
00:14:52.458        "data_size": 65536
00:14:52.458      },
00:14:52.458      {
00:14:52.458        "name": "BaseBdev2",
00:14:52.458        "uuid": "19a54dc4-0296-4e48-9adf-ca8e7dbee821",
00:14:52.458        "is_configured": true,
00:14:52.458        "data_offset": 0,
00:14:52.458        "data_size": 65536
00:14:52.458      },
00:14:52.458      {
00:14:52.458        "name": "BaseBdev3",
00:14:52.458        "uuid": "4fa0cac5-d7e3-463d-8963-16c7e873fba0",
00:14:52.458        "is_configured": true,
00:14:52.458        "data_offset": 0,
00:14:52.458        "data_size": 65536
00:14:52.458      }
00:14:52.458    ]
00:14:52.458  }'
00:14:52.458   16:58:45	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:14:52.458   16:58:45	-- common/autotest_common.sh@10 -- # set +x
00:14:53.025   16:58:45	-- bdev/bdev_raid.sh@273 -- # (( i = 1 ))
00:14:53.025   16:58:45	-- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs ))
00:14:53.025    16:58:45	-- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]'
00:14:53.025    16:58:45	-- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:14:53.284   16:58:46	-- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid
00:14:53.284   16:58:46	-- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:14:53.284   16:58:46	-- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2
00:14:53.543  [2024-11-19 16:58:46.236755] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:14:53.543   16:58:46	-- bdev/bdev_raid.sh@273 -- # (( i++ ))
00:14:53.543   16:58:46	-- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs ))
00:14:53.543    16:58:46	-- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:14:53.543    16:58:46	-- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]'
00:14:53.803   16:58:46	-- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid
00:14:53.803   16:58:46	-- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:14:53.803   16:58:46	-- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3
00:14:53.803  [2024-11-19 16:58:46.592789] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:14:53.803  [2024-11-19 16:58:46.593117] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006080 name Existed_Raid, state offline
00:14:53.803   16:58:46	-- bdev/bdev_raid.sh@273 -- # (( i++ ))
00:14:53.803   16:58:46	-- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs ))
00:14:53.803    16:58:46	-- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:14:53.803    16:58:46	-- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)'
00:14:54.062   16:58:46	-- bdev/bdev_raid.sh@281 -- # raid_bdev=
00:14:54.062   16:58:46	-- bdev/bdev_raid.sh@282 -- # '[' -n '' ']'
00:14:54.062   16:58:46	-- bdev/bdev_raid.sh@287 -- # killprocess 125369
00:14:54.062   16:58:46	-- common/autotest_common.sh@936 -- # '[' -z 125369 ']'
00:14:54.062   16:58:46	-- common/autotest_common.sh@940 -- # kill -0 125369
00:14:54.062    16:58:46	-- common/autotest_common.sh@941 -- # uname
00:14:54.062   16:58:46	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:14:54.062    16:58:46	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 125369
00:14:54.062   16:58:46	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:14:54.062   16:58:46	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:14:54.062   16:58:46	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 125369'
00:14:54.062  killing process with pid 125369
00:14:54.062   16:58:46	-- common/autotest_common.sh@955 -- # kill 125369
00:14:54.062  [2024-11-19 16:58:46.827378] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:14:54.062   16:58:46	-- common/autotest_common.sh@960 -- # wait 125369
00:14:54.062  [2024-11-19 16:58:46.827613] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:14:54.321   16:58:47	-- bdev/bdev_raid.sh@289 -- # return 0
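
killprocess, whose trace is interleaved with the raid teardown messages above, sanity-checks the pid before signalling it. A simplified sketch of the visible steps (the sudo special case and any retry handling in the real helper are reduced to comments and assumptions):

    killprocess() {
        local pid=$1
        [[ -n $pid ]] || return 1
        kill -0 "$pid" || return 1                      # must still be running
        if [[ $(uname) == Linux ]]; then
            local process_name
            process_name=$(ps --no-headers -o comm= "$pid")
            # the real helper treats process_name == sudo specially
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                     # the 'wait 125369' step above
    }
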
00:14:54.321  
00:14:54.321  real	0m10.355s
00:14:54.321  user	0m18.440s
00:14:54.321  sys	0m1.724s
00:14:54.321   16:58:47	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:14:54.321   16:58:47	-- common/autotest_common.sh@10 -- # set +x
00:14:54.321  ************************************
00:14:54.321  END TEST raid_state_function_test
00:14:54.321  ************************************
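
run_test, which frames every test block in this log, prints the START/END banners, runs the named test function under time (producing the real/user/sys lines above), and propagates its status. A simplified sketch, assuming only what the banners and timing output show:

    run_test() {
        local test_name=$1
        shift
        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        time "$@"                       # e.g. raid_state_function_test raid0 3 true
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
    }
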
00:14:54.321   16:58:47	-- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true
00:14:54.321   16:58:47	-- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']'
00:14:54.321   16:58:47	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:14:54.321   16:58:47	-- common/autotest_common.sh@10 -- # set +x
00:14:54.321  ************************************
00:14:54.321  START TEST raid_state_function_test_sb
00:14:54.321  ************************************
00:14:54.321   16:58:47	-- common/autotest_common.sh@1114 -- # raid_state_function_test raid0 3 true
00:14:54.321   16:58:47	-- bdev/bdev_raid.sh@202 -- # local raid_level=raid0
00:14:54.321   16:58:47	-- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3
00:14:54.321   16:58:47	-- bdev/bdev_raid.sh@204 -- # local superblock=true
00:14:54.321   16:58:47	-- bdev/bdev_raid.sh@205 -- # local raid_bdev
00:14:54.321    16:58:47	-- bdev/bdev_raid.sh@206 -- # (( i = 1 ))
00:14:54.321    16:58:47	-- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:14:54.321    16:58:47	-- bdev/bdev_raid.sh@206 -- # echo BaseBdev1
00:14:54.321    16:58:47	-- bdev/bdev_raid.sh@206 -- # (( i++ ))
00:14:54.321    16:58:47	-- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:14:54.321    16:58:47	-- bdev/bdev_raid.sh@206 -- # echo BaseBdev2
00:14:54.321    16:58:47	-- bdev/bdev_raid.sh@206 -- # (( i++ ))
00:14:54.321    16:58:47	-- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:14:54.321    16:58:47	-- bdev/bdev_raid.sh@206 -- # echo BaseBdev3
00:14:54.321    16:58:47	-- bdev/bdev_raid.sh@206 -- # (( i++ ))
00:14:54.321    16:58:47	-- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:14:54.321   16:58:47	-- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3')
00:14:54.321   16:58:47	-- bdev/bdev_raid.sh@206 -- # local base_bdevs
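
The loop traced above is one command substitution building the member name list; the equivalent standalone shell is:

    num_base_bdevs=3
    base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo "BaseBdev$i"; done))
    # base_bdevs is now (BaseBdev1 BaseBdev2 BaseBdev3), matching the echoes above
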
00:14:54.321   16:58:47	-- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid
00:14:54.321   16:58:47	-- bdev/bdev_raid.sh@208 -- # local strip_size
00:14:54.321   16:58:47	-- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg
00:14:54.321   16:58:47	-- bdev/bdev_raid.sh@210 -- # local superblock_create_arg
00:14:54.321   16:58:47	-- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']'
00:14:54.321   16:58:47	-- bdev/bdev_raid.sh@213 -- # strip_size=64
00:14:54.321   16:58:47	-- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64'
00:14:54.321   16:58:47	-- bdev/bdev_raid.sh@219 -- # '[' true = true ']'
00:14:54.321   16:58:47	-- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s
00:14:54.321   16:58:47	-- bdev/bdev_raid.sh@226 -- # raid_pid=125734
00:14:54.321   16:58:47	-- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 125734'
00:14:54.321  Process raid pid: 125734
00:14:54.321   16:58:47	-- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid
00:14:54.321   16:58:47	-- bdev/bdev_raid.sh@228 -- # waitforlisten 125734 /var/tmp/spdk-raid.sock
00:14:54.321   16:58:47	-- common/autotest_common.sh@829 -- # '[' -z 125734 ']'
00:14:54.321   16:58:47	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock
00:14:54.321   16:58:47	-- common/autotest_common.sh@834 -- # local max_retries=100
00:14:54.321   16:58:47	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...'
00:14:54.321  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...
00:14:54.321   16:58:47	-- common/autotest_common.sh@838 -- # xtrace_disable
00:14:54.321   16:58:47	-- common/autotest_common.sh@10 -- # set +x
00:14:54.581  [2024-11-19 16:58:47.200826] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:14:54.581  [2024-11-19 16:58:47.201160] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:14:54.581  [2024-11-19 16:58:47.343964] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:14:54.581  [2024-11-19 16:58:47.385532] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:14:54.581  [2024-11-19 16:58:47.426781] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:14:55.519   16:58:48	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:14:55.519   16:58:48	-- common/autotest_common.sh@862 -- # return 0
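
waitforlisten blocks until the freshly started bdev_svc answers on the RPC socket; only the echoed message and the max_retries=100 local are taken from the trace, so the polling shape below is an assumption:

    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk-raid.sock}
        local max_retries=100 i=0
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        while (( i++ < max_retries )); do
            kill -0 "$pid" || return 1     # target died during startup
            if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_addr" \
                rpc_get_methods &> /dev/null; then
                return 0                   # socket is up and answering RPCs
            fi
            sleep 0.1
        done
        return 1
    }
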
00:14:55.519   16:58:48	-- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
00:14:55.519  [2024-11-19 16:58:48.296131] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:14:55.519  [2024-11-19 16:58:48.296342] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:14:55.519  [2024-11-19 16:58:48.296419] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:14:55.519  [2024-11-19 16:58:48.296475] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:14:55.519  [2024-11-19 16:58:48.296503] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:14:55.519  [2024-11-19 16:58:48.296559] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
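
Note the ordering here: bdev_raid_create is accepted even though none of the three members exists yet, so Existed_Raid registers in the configuring state and waits for its base bdevs to appear. The call from the trace, with its flags spelled out:

    # -z 64: strip size in KiB; -s: write a superblock to each member;
    # -r raid0: level; -b: member list (members need not exist yet); -n: name.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        bdev_raid_create -z 64 -s -r raid0 \
        -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
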
00:14:55.519   16:58:48	-- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:14:55.520   16:58:48	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:14:55.520   16:58:48	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:14:55.520   16:58:48	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid0
00:14:55.520   16:58:48	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:14:55.520   16:58:48	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:14:55.520   16:58:48	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:14:55.520   16:58:48	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:14:55.520   16:58:48	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:14:55.520   16:58:48	-- bdev/bdev_raid.sh@125 -- # local tmp
00:14:55.520    16:58:48	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:14:55.520    16:58:48	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:14:55.779   16:58:48	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:14:55.779    "name": "Existed_Raid",
00:14:55.779    "uuid": "54fd3b7a-5d50-4e10-87fd-39507a4d22bd",
00:14:55.779    "strip_size_kb": 64,
00:14:55.779    "state": "configuring",
00:14:55.779    "raid_level": "raid0",
00:14:55.779    "superblock": true,
00:14:55.779    "num_base_bdevs": 3,
00:14:55.779    "num_base_bdevs_discovered": 0,
00:14:55.779    "num_base_bdevs_operational": 3,
00:14:55.779    "base_bdevs_list": [
00:14:55.779      {
00:14:55.779        "name": "BaseBdev1",
00:14:55.779        "uuid": "00000000-0000-0000-0000-000000000000",
00:14:55.779        "is_configured": false,
00:14:55.779        "data_offset": 0,
00:14:55.779        "data_size": 0
00:14:55.779      },
00:14:55.779      {
00:14:55.779        "name": "BaseBdev2",
00:14:55.779        "uuid": "00000000-0000-0000-0000-000000000000",
00:14:55.779        "is_configured": false,
00:14:55.779        "data_offset": 0,
00:14:55.779        "data_size": 0
00:14:55.779      },
00:14:55.779      {
00:14:55.779        "name": "BaseBdev3",
00:14:55.779        "uuid": "00000000-0000-0000-0000-000000000000",
00:14:55.779        "is_configured": false,
00:14:55.779        "data_offset": 0,
00:14:55.779        "data_size": 0
00:14:55.779      }
00:14:55.779    ]
00:14:55.779  }'
00:14:55.779   16:58:48	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:14:55.779   16:58:48	-- common/autotest_common.sh@10 -- # set +x
00:14:56.348   16:58:49	-- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid
00:14:56.606  [2024-11-19 16:58:49.280160] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:14:56.606  [2024-11-19 16:58:49.280372] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005480 name Existed_Raid, state configuring
00:14:56.606   16:58:49	-- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
00:14:56.864  [2024-11-19 16:58:49.536277] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:14:56.864  [2024-11-19 16:58:49.536481] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:14:56.864  [2024-11-19 16:58:49.536586] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:14:56.864  [2024-11-19 16:58:49.536640] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:14:56.864  [2024-11-19 16:58:49.536666] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:14:56.864  [2024-11-19 16:58:49.536708] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:14:56.864   16:58:49	-- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1
00:14:56.864  [2024-11-19 16:58:49.721381] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:14:56.864  BaseBdev1
00:14:57.124   16:58:49	-- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1
00:14:57.124   16:58:49	-- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1
00:14:57.124   16:58:49	-- common/autotest_common.sh@898 -- # local bdev_timeout=
00:14:57.124   16:58:49	-- common/autotest_common.sh@899 -- # local i
00:14:57.124   16:58:49	-- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:14:57.124   16:58:49	-- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:14:57.124   16:58:49	-- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:14:57.124   16:58:49	-- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000
00:14:57.384  [
00:14:57.384    {
00:14:57.384      "name": "BaseBdev1",
00:14:57.384      "aliases": [
00:14:57.384        "c1b8613b-7c23-485b-aa82-1e23d4c7df7f"
00:14:57.384      ],
00:14:57.384      "product_name": "Malloc disk",
00:14:57.384      "block_size": 512,
00:14:57.384      "num_blocks": 65536,
00:14:57.384      "uuid": "c1b8613b-7c23-485b-aa82-1e23d4c7df7f",
00:14:57.384      "assigned_rate_limits": {
00:14:57.384        "rw_ios_per_sec": 0,
00:14:57.384        "rw_mbytes_per_sec": 0,
00:14:57.384        "r_mbytes_per_sec": 0,
00:14:57.384        "w_mbytes_per_sec": 0
00:14:57.384      },
00:14:57.384      "claimed": true,
00:14:57.384      "claim_type": "exclusive_write",
00:14:57.384      "zoned": false,
00:14:57.384      "supported_io_types": {
00:14:57.384        "read": true,
00:14:57.384        "write": true,
00:14:57.384        "unmap": true,
00:14:57.384        "write_zeroes": true,
00:14:57.384        "flush": true,
00:14:57.384        "reset": true,
00:14:57.384        "compare": false,
00:14:57.384        "compare_and_write": false,
00:14:57.384        "abort": true,
00:14:57.384        "nvme_admin": false,
00:14:57.384        "nvme_io": false
00:14:57.384      },
00:14:57.384      "memory_domains": [
00:14:57.384        {
00:14:57.384          "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:14:57.384          "dma_device_type": 2
00:14:57.384        }
00:14:57.384      ],
00:14:57.384      "driver_specific": {}
00:14:57.384    }
00:14:57.384  ]
00:14:57.384   16:58:50	-- common/autotest_common.sh@905 -- # return 0
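
waitforbdev, traced at common/autotest_common.sh@897-904 above, gates the test until BaseBdev1 is registered. A sketch built from those trace lines (rpc_py again abbreviates the full rpc.py -s /var/tmp/spdk-raid.sock call; any retry loop in the real helper is simplified away):

    waitforbdev() {
        local bdev_name=$1
        local bdev_timeout=$2
        local i
        [[ -z $bdev_timeout ]] && bdev_timeout=2000   # ms, the -t 2000 seen above
        rpc_py bdev_wait_for_examine                  # let examine-on-register settle
        rpc_py bdev_get_bdevs -b "$bdev_name" -t "$bdev_timeout" &> /dev/null
    }
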
00:14:57.384   16:58:50	-- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:14:57.384   16:58:50	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:14:57.384   16:58:50	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:14:57.384   16:58:50	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid0
00:14:57.384   16:58:50	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:14:57.384   16:58:50	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:14:57.384   16:58:50	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:14:57.384   16:58:50	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:14:57.384   16:58:50	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:14:57.384   16:58:50	-- bdev/bdev_raid.sh@125 -- # local tmp
00:14:57.384    16:58:50	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:14:57.384    16:58:50	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:14:57.644   16:58:50	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:14:57.644    "name": "Existed_Raid",
00:14:57.644    "uuid": "fca156c7-0f35-4f48-ac42-4f15e429303c",
00:14:57.644    "strip_size_kb": 64,
00:14:57.644    "state": "configuring",
00:14:57.644    "raid_level": "raid0",
00:14:57.644    "superblock": true,
00:14:57.644    "num_base_bdevs": 3,
00:14:57.644    "num_base_bdevs_discovered": 1,
00:14:57.644    "num_base_bdevs_operational": 3,
00:14:57.644    "base_bdevs_list": [
00:14:57.644      {
00:14:57.644        "name": "BaseBdev1",
00:14:57.644        "uuid": "c1b8613b-7c23-485b-aa82-1e23d4c7df7f",
00:14:57.644        "is_configured": true,
00:14:57.644        "data_offset": 2048,
00:14:57.644        "data_size": 63488
00:14:57.644      },
00:14:57.644      {
00:14:57.644        "name": "BaseBdev2",
00:14:57.644        "uuid": "00000000-0000-0000-0000-000000000000",
00:14:57.644        "is_configured": false,
00:14:57.644        "data_offset": 0,
00:14:57.644        "data_size": 0
00:14:57.644      },
00:14:57.644      {
00:14:57.644        "name": "BaseBdev3",
00:14:57.644        "uuid": "00000000-0000-0000-0000-000000000000",
00:14:57.644        "is_configured": false,
00:14:57.644        "data_offset": 0,
00:14:57.644        "data_size": 0
00:14:57.644      }
00:14:57.644    ]
00:14:57.644  }'
00:14:57.644   16:58:50	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:14:57.644   16:58:50	-- common/autotest_common.sh@10 -- # set +x
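
Worth comparing with the non-superblock run earlier: there BaseBdev1 reported data_offset 0 and data_size 65536, here 2048 and 63488. The difference is the on-member superblock:

    # Each member is a 32 MiB malloc disk: 65536 blocks x 512 B
    # (bdev_malloc_create 32 512). With -s, raid metadata reserves the first
    # 2048 blocks (1 MiB) of every member:
    #   data_size = 65536 - 2048 = 63488 blocks
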
00:14:58.211   16:58:50	-- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid
00:14:58.211  [2024-11-19 16:58:51.053647] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:14:58.211  [2024-11-19 16:58:51.053854] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005780 name Existed_Raid, state configuring
00:14:58.471   16:58:51	-- bdev/bdev_raid.sh@244 -- # '[' true = true ']'
00:14:58.471   16:58:51	-- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1
00:14:58.730   16:58:51	-- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1
00:14:58.730  BaseBdev1
00:14:58.730   16:58:51	-- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1
00:14:58.730   16:58:51	-- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1
00:14:58.730   16:58:51	-- common/autotest_common.sh@898 -- # local bdev_timeout=
00:14:58.730   16:58:51	-- common/autotest_common.sh@899 -- # local i
00:14:58.730   16:58:51	-- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:14:58.730   16:58:51	-- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:14:58.730   16:58:51	-- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:14:58.988   16:58:51	-- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000
00:14:59.248  [
00:14:59.248    {
00:14:59.248      "name": "BaseBdev1",
00:14:59.248      "aliases": [
00:14:59.248        "0b4a9024-00f7-4fde-8d96-d4c5f344895a"
00:14:59.248      ],
00:14:59.248      "product_name": "Malloc disk",
00:14:59.248      "block_size": 512,
00:14:59.248      "num_blocks": 65536,
00:14:59.248      "uuid": "0b4a9024-00f7-4fde-8d96-d4c5f344895a",
00:14:59.248      "assigned_rate_limits": {
00:14:59.248        "rw_ios_per_sec": 0,
00:14:59.248        "rw_mbytes_per_sec": 0,
00:14:59.248        "r_mbytes_per_sec": 0,
00:14:59.248        "w_mbytes_per_sec": 0
00:14:59.248      },
00:14:59.248      "claimed": false,
00:14:59.248      "zoned": false,
00:14:59.248      "supported_io_types": {
00:14:59.248        "read": true,
00:14:59.248        "write": true,
00:14:59.248        "unmap": true,
00:14:59.248        "write_zeroes": true,
00:14:59.248        "flush": true,
00:14:59.248        "reset": true,
00:14:59.248        "compare": false,
00:14:59.248        "compare_and_write": false,
00:14:59.248        "abort": true,
00:14:59.248        "nvme_admin": false,
00:14:59.248        "nvme_io": false
00:14:59.248      },
00:14:59.248      "memory_domains": [
00:14:59.248        {
00:14:59.248          "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:14:59.248          "dma_device_type": 2
00:14:59.248        }
00:14:59.248      ],
00:14:59.248      "driver_specific": {}
00:14:59.248    }
00:14:59.248  ]
00:14:59.248   16:58:51	-- common/autotest_common.sh@905 -- # return 0
00:14:59.248   16:58:51	-- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
00:14:59.508  [2024-11-19 16:58:52.154257] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:14:59.508  [2024-11-19 16:58:52.156623] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:14:59.508  [2024-11-19 16:58:52.156785] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:14:59.508  [2024-11-19 16:58:52.156863] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:14:59.508  [2024-11-19 16:58:52.156917] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:14:59.508   16:58:52	-- bdev/bdev_raid.sh@254 -- # (( i = 1 ))
00:14:59.508   16:58:52	-- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs ))
00:14:59.508   16:58:52	-- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:14:59.508   16:58:52	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:14:59.508   16:58:52	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:14:59.508   16:58:52	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid0
00:14:59.508   16:58:52	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:14:59.508   16:58:52	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:14:59.508   16:58:52	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:14:59.508   16:58:52	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:14:59.508   16:58:52	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:14:59.508   16:58:52	-- bdev/bdev_raid.sh@125 -- # local tmp
00:14:59.508    16:58:52	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:14:59.508    16:58:52	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:14:59.508   16:58:52	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:14:59.508    "name": "Existed_Raid",
00:14:59.508    "uuid": "b3e489d8-39b5-4629-999f-e39a63ff4fb5",
00:14:59.508    "strip_size_kb": 64,
00:14:59.508    "state": "configuring",
00:14:59.508    "raid_level": "raid0",
00:14:59.508    "superblock": true,
00:14:59.508    "num_base_bdevs": 3,
00:14:59.508    "num_base_bdevs_discovered": 1,
00:14:59.508    "num_base_bdevs_operational": 3,
00:14:59.508    "base_bdevs_list": [
00:14:59.508      {
00:14:59.508        "name": "BaseBdev1",
00:14:59.508        "uuid": "0b4a9024-00f7-4fde-8d96-d4c5f344895a",
00:14:59.508        "is_configured": true,
00:14:59.508        "data_offset": 2048,
00:14:59.508        "data_size": 63488
00:14:59.508      },
00:14:59.508      {
00:14:59.508        "name": "BaseBdev2",
00:14:59.508        "uuid": "00000000-0000-0000-0000-000000000000",
00:14:59.508        "is_configured": false,
00:14:59.508        "data_offset": 0,
00:14:59.508        "data_size": 0
00:14:59.508      },
00:14:59.508      {
00:14:59.508        "name": "BaseBdev3",
00:14:59.508        "uuid": "00000000-0000-0000-0000-000000000000",
00:14:59.508        "is_configured": false,
00:14:59.508        "data_offset": 0,
00:14:59.508        "data_size": 0
00:14:59.508      }
00:14:59.508    ]
00:14:59.508  }'
00:14:59.508   16:58:52	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:14:59.508   16:58:52	-- common/autotest_common.sh@10 -- # set +x
00:15:00.074   16:58:52	-- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2
00:15:00.332  [2024-11-19 16:58:53.074209] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:15:00.332  BaseBdev2
00:15:00.332   16:58:53	-- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2
00:15:00.332   16:58:53	-- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2
00:15:00.332   16:58:53	-- common/autotest_common.sh@898 -- # local bdev_timeout=
00:15:00.332   16:58:53	-- common/autotest_common.sh@899 -- # local i
00:15:00.332   16:58:53	-- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:15:00.332   16:58:53	-- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:15:00.332   16:58:53	-- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:15:00.590   16:58:53	-- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000
00:15:00.590  [
00:15:00.590    {
00:15:00.590      "name": "BaseBdev2",
00:15:00.590      "aliases": [
00:15:00.590        "72c17805-216c-4005-955a-405a91caa96c"
00:15:00.590      ],
00:15:00.590      "product_name": "Malloc disk",
00:15:00.590      "block_size": 512,
00:15:00.590      "num_blocks": 65536,
00:15:00.590      "uuid": "72c17805-216c-4005-955a-405a91caa96c",
00:15:00.590      "assigned_rate_limits": {
00:15:00.590        "rw_ios_per_sec": 0,
00:15:00.590        "rw_mbytes_per_sec": 0,
00:15:00.590        "r_mbytes_per_sec": 0,
00:15:00.590        "w_mbytes_per_sec": 0
00:15:00.590      },
00:15:00.590      "claimed": true,
00:15:00.590      "claim_type": "exclusive_write",
00:15:00.590      "zoned": false,
00:15:00.590      "supported_io_types": {
00:15:00.590        "read": true,
00:15:00.590        "write": true,
00:15:00.590        "unmap": true,
00:15:00.590        "write_zeroes": true,
00:15:00.590        "flush": true,
00:15:00.590        "reset": true,
00:15:00.590        "compare": false,
00:15:00.590        "compare_and_write": false,
00:15:00.590        "abort": true,
00:15:00.590        "nvme_admin": false,
00:15:00.590        "nvme_io": false
00:15:00.590      },
00:15:00.590      "memory_domains": [
00:15:00.590        {
00:15:00.590          "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:15:00.590          "dma_device_type": 2
00:15:00.590        }
00:15:00.590      ],
00:15:00.590      "driver_specific": {}
00:15:00.590    }
00:15:00.590  ]
00:15:00.848   16:58:53	-- common/autotest_common.sh@905 -- # return 0
00:15:00.848   16:58:53	-- bdev/bdev_raid.sh@254 -- # (( i++ ))
00:15:00.848   16:58:53	-- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs ))
00:15:00.848   16:58:53	-- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:15:00.848   16:58:53	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:15:00.848   16:58:53	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:15:00.848   16:58:53	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid0
00:15:00.848   16:58:53	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:15:00.848   16:58:53	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:15:00.848   16:58:53	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:15:00.848   16:58:53	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:15:00.848   16:58:53	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:15:00.848   16:58:53	-- bdev/bdev_raid.sh@125 -- # local tmp
00:15:00.848    16:58:53	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:15:00.848    16:58:53	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:15:01.107   16:58:53	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:15:01.107    "name": "Existed_Raid",
00:15:01.107    "uuid": "b3e489d8-39b5-4629-999f-e39a63ff4fb5",
00:15:01.107    "strip_size_kb": 64,
00:15:01.107    "state": "configuring",
00:15:01.107    "raid_level": "raid0",
00:15:01.107    "superblock": true,
00:15:01.107    "num_base_bdevs": 3,
00:15:01.107    "num_base_bdevs_discovered": 2,
00:15:01.107    "num_base_bdevs_operational": 3,
00:15:01.107    "base_bdevs_list": [
00:15:01.107      {
00:15:01.107        "name": "BaseBdev1",
00:15:01.107        "uuid": "0b4a9024-00f7-4fde-8d96-d4c5f344895a",
00:15:01.107        "is_configured": true,
00:15:01.107        "data_offset": 2048,
00:15:01.107        "data_size": 63488
00:15:01.107      },
00:15:01.107      {
00:15:01.107        "name": "BaseBdev2",
00:15:01.107        "uuid": "72c17805-216c-4005-955a-405a91caa96c",
00:15:01.107        "is_configured": true,
00:15:01.107        "data_offset": 2048,
00:15:01.107        "data_size": 63488
00:15:01.107      },
00:15:01.107      {
00:15:01.107        "name": "BaseBdev3",
00:15:01.107        "uuid": "00000000-0000-0000-0000-000000000000",
00:15:01.107        "is_configured": false,
00:15:01.107        "data_offset": 0,
00:15:01.107        "data_size": 0
00:15:01.107      }
00:15:01.107    ]
00:15:01.107  }'
00:15:01.107   16:58:53	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:15:01.107   16:58:53	-- common/autotest_common.sh@10 -- # set +x
00:15:01.673   16:58:54	-- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3
00:15:01.674  [2024-11-19 16:58:54.429514] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:15:01.674  [2024-11-19 16:58:54.429933] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006680
00:15:01.674  [2024-11-19 16:58:54.429981] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512
00:15:01.674  [2024-11-19 16:58:54.430193] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002120
00:15:01.674  [2024-11-19 16:58:54.430585] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006680
00:15:01.674  [2024-11-19 16:58:54.430687] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006680
00:15:01.674  BaseBdev3
00:15:01.674  [2024-11-19 16:58:54.430915] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
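
With the third member claimed, num_base_bdevs_discovered reaches 3 and the raid flips from configuring to online. The reported size is consistent with plain raid0 striping over the members' data regions:

    # blockcnt = 3 members x 63488 data blocks = 190464 blocks
    # 190464 x 512 B = 93 MiB, matching "blockcnt 190464, blocklen 512" above.
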
00:15:01.674   16:58:54	-- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3
00:15:01.674   16:58:54	-- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3
00:15:01.674   16:58:54	-- common/autotest_common.sh@898 -- # local bdev_timeout=
00:15:01.674   16:58:54	-- common/autotest_common.sh@899 -- # local i
00:15:01.674   16:58:54	-- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:15:01.674   16:58:54	-- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:15:01.674   16:58:54	-- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:15:01.933   16:58:54	-- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000
00:15:02.191  [
00:15:02.191    {
00:15:02.191      "name": "BaseBdev3",
00:15:02.191      "aliases": [
00:15:02.191        "8a9ead11-d63a-48c5-a347-c1c9105d3686"
00:15:02.191      ],
00:15:02.191      "product_name": "Malloc disk",
00:15:02.191      "block_size": 512,
00:15:02.191      "num_blocks": 65536,
00:15:02.191      "uuid": "8a9ead11-d63a-48c5-a347-c1c9105d3686",
00:15:02.191      "assigned_rate_limits": {
00:15:02.191        "rw_ios_per_sec": 0,
00:15:02.191        "rw_mbytes_per_sec": 0,
00:15:02.191        "r_mbytes_per_sec": 0,
00:15:02.191        "w_mbytes_per_sec": 0
00:15:02.191      },
00:15:02.191      "claimed": true,
00:15:02.191      "claim_type": "exclusive_write",
00:15:02.191      "zoned": false,
00:15:02.191      "supported_io_types": {
00:15:02.191        "read": true,
00:15:02.191        "write": true,
00:15:02.191        "unmap": true,
00:15:02.191        "write_zeroes": true,
00:15:02.192        "flush": true,
00:15:02.192        "reset": true,
00:15:02.192        "compare": false,
00:15:02.192        "compare_and_write": false,
00:15:02.192        "abort": true,
00:15:02.192        "nvme_admin": false,
00:15:02.192        "nvme_io": false
00:15:02.192      },
00:15:02.192      "memory_domains": [
00:15:02.192        {
00:15:02.192          "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:15:02.192          "dma_device_type": 2
00:15:02.192        }
00:15:02.192      ],
00:15:02.192      "driver_specific": {}
00:15:02.192    }
00:15:02.192  ]
00:15:02.192   16:58:55	-- common/autotest_common.sh@905 -- # return 0
00:15:02.192   16:58:55	-- bdev/bdev_raid.sh@254 -- # (( i++ ))
00:15:02.192   16:58:55	-- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs ))
00:15:02.192   16:58:55	-- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3
00:15:02.192   16:58:55	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:15:02.192   16:58:55	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:15:02.192   16:58:55	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid0
00:15:02.192   16:58:55	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:15:02.192   16:58:55	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:15:02.192   16:58:55	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:15:02.192   16:58:55	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:15:02.192   16:58:55	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:15:02.192   16:58:55	-- bdev/bdev_raid.sh@125 -- # local tmp
00:15:02.192    16:58:55	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:15:02.192    16:58:55	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:15:02.451   16:58:55	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:15:02.451    "name": "Existed_Raid",
00:15:02.451    "uuid": "b3e489d8-39b5-4629-999f-e39a63ff4fb5",
00:15:02.451    "strip_size_kb": 64,
00:15:02.451    "state": "online",
00:15:02.451    "raid_level": "raid0",
00:15:02.451    "superblock": true,
00:15:02.451    "num_base_bdevs": 3,
00:15:02.451    "num_base_bdevs_discovered": 3,
00:15:02.451    "num_base_bdevs_operational": 3,
00:15:02.451    "base_bdevs_list": [
00:15:02.451      {
00:15:02.451        "name": "BaseBdev1",
00:15:02.451        "uuid": "0b4a9024-00f7-4fde-8d96-d4c5f344895a",
00:15:02.451        "is_configured": true,
00:15:02.451        "data_offset": 2048,
00:15:02.451        "data_size": 63488
00:15:02.451      },
00:15:02.451      {
00:15:02.451        "name": "BaseBdev2",
00:15:02.451        "uuid": "72c17805-216c-4005-955a-405a91caa96c",
00:15:02.451        "is_configured": true,
00:15:02.451        "data_offset": 2048,
00:15:02.451        "data_size": 63488
00:15:02.451      },
00:15:02.451      {
00:15:02.451        "name": "BaseBdev3",
00:15:02.451        "uuid": "8a9ead11-d63a-48c5-a347-c1c9105d3686",
00:15:02.451        "is_configured": true,
00:15:02.451        "data_offset": 2048,
00:15:02.451        "data_size": 63488
00:15:02.451      }
00:15:02.451    ]
00:15:02.451  }'
00:15:02.451   16:58:55	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:15:02.451   16:58:55	-- common/autotest_common.sh@10 -- # set +x
00:15:03.017   16:58:55	-- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1
00:15:03.275  [2024-11-19 16:58:56.038001] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:15:03.275  [2024-11-19 16:58:56.038192] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:15:03.275  [2024-11-19 16:58:56.038442] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:15:03.275   16:58:56	-- bdev/bdev_raid.sh@263 -- # local expected_state
00:15:03.275   16:58:56	-- bdev/bdev_raid.sh@264 -- # has_redundancy raid0
00:15:03.275   16:58:56	-- bdev/bdev_raid.sh@195 -- # case $1 in
00:15:03.275   16:58:56	-- bdev/bdev_raid.sh@197 -- # return 1
00:15:03.275   16:58:56	-- bdev/bdev_raid.sh@265 -- # expected_state=offline
00:15:03.275   16:58:56	-- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2
00:15:03.275   16:58:56	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:15:03.275   16:58:56	-- bdev/bdev_raid.sh@118 -- # local expected_state=offline
00:15:03.275   16:58:56	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid0
00:15:03.275   16:58:56	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:15:03.275   16:58:56	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2
00:15:03.275   16:58:56	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:15:03.275   16:58:56	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:15:03.275   16:58:56	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:15:03.275   16:58:56	-- bdev/bdev_raid.sh@125 -- # local tmp
00:15:03.275    16:58:56	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:15:03.275    16:58:56	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:15:03.532   16:58:56	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:15:03.532    "name": "Existed_Raid",
00:15:03.532    "uuid": "b3e489d8-39b5-4629-999f-e39a63ff4fb5",
00:15:03.532    "strip_size_kb": 64,
00:15:03.532    "state": "offline",
00:15:03.532    "raid_level": "raid0",
00:15:03.532    "superblock": true,
00:15:03.532    "num_base_bdevs": 3,
00:15:03.532    "num_base_bdevs_discovered": 2,
00:15:03.532    "num_base_bdevs_operational": 2,
00:15:03.532    "base_bdevs_list": [
00:15:03.532      {
00:15:03.532        "name": null,
00:15:03.532        "uuid": "00000000-0000-0000-0000-000000000000",
00:15:03.532        "is_configured": false,
00:15:03.532        "data_offset": 2048,
00:15:03.532        "data_size": 63488
00:15:03.532      },
00:15:03.532      {
00:15:03.532        "name": "BaseBdev2",
00:15:03.532        "uuid": "72c17805-216c-4005-955a-405a91caa96c",
00:15:03.532        "is_configured": true,
00:15:03.532        "data_offset": 2048,
00:15:03.532        "data_size": 63488
00:15:03.532      },
00:15:03.532      {
00:15:03.532        "name": "BaseBdev3",
00:15:03.532        "uuid": "8a9ead11-d63a-48c5-a347-c1c9105d3686",
00:15:03.532        "is_configured": true,
00:15:03.532        "data_offset": 2048,
00:15:03.532        "data_size": 63488
00:15:03.532      }
00:15:03.532    ]
00:15:03.532  }'
00:15:03.532   16:58:56	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:15:03.532   16:58:56	-- common/autotest_common.sh@10 -- # set +x
00:15:04.099   16:58:56	-- bdev/bdev_raid.sh@273 -- # (( i = 1 ))
00:15:04.099   16:58:56	-- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs ))
00:15:04.357    16:58:56	-- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:15:04.357    16:58:56	-- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]'
00:15:04.615   16:58:57	-- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid
00:15:04.615   16:58:57	-- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:15:04.615   16:58:57	-- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2
00:15:04.615  [2024-11-19 16:58:57.393437] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:15:04.615   16:58:57	-- bdev/bdev_raid.sh@273 -- # (( i++ ))
00:15:04.615   16:58:57	-- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs ))
00:15:04.615    16:58:57	-- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:15:04.615    16:58:57	-- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]'
00:15:04.872   16:58:57	-- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid
00:15:04.872   16:58:57	-- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:15:04.872   16:58:57	-- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3
00:15:05.130  [2024-11-19 16:58:57.781771] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:15:05.130  [2024-11-19 16:58:57.782075] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state offline
00:15:05.130   16:58:57	-- bdev/bdev_raid.sh@273 -- # (( i++ ))
00:15:05.130   16:58:57	-- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs ))
00:15:05.130    16:58:57	-- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)'
00:15:05.130    16:58:57	-- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:15:05.388   16:58:57	-- bdev/bdev_raid.sh@281 -- # raid_bdev=
00:15:05.388   16:58:57	-- bdev/bdev_raid.sh@282 -- # '[' -n '' ']'
00:15:05.388   16:58:57	-- bdev/bdev_raid.sh@287 -- # killprocess 125734
00:15:05.388   16:58:57	-- common/autotest_common.sh@936 -- # '[' -z 125734 ']'
00:15:05.388   16:58:57	-- common/autotest_common.sh@940 -- # kill -0 125734
00:15:05.388    16:58:57	-- common/autotest_common.sh@941 -- # uname
00:15:05.388   16:58:57	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:15:05.388    16:58:57	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 125734
00:15:05.388   16:58:58	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:15:05.388   16:58:58	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:15:05.388   16:58:58	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 125734'
00:15:05.388  killing process with pid 125734
00:15:05.388   16:58:58	-- common/autotest_common.sh@955 -- # kill 125734
00:15:05.388  [2024-11-19 16:58:58.021746] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:15:05.388   16:58:58	-- common/autotest_common.sh@960 -- # wait 125734
00:15:05.388  [2024-11-19 16:58:58.021944] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:15:05.646   16:58:58	-- bdev/bdev_raid.sh@289 -- # return 0
00:15:05.646  
00:15:05.646  real	0m11.139s
00:15:05.646  user	0m19.827s
00:15:05.646  sys	0m1.927s
00:15:05.646   16:58:58	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:15:05.646   16:58:58	-- common/autotest_common.sh@10 -- # set +x
00:15:05.646  ************************************
00:15:05.646  END TEST raid_state_function_test_sb
00:15:05.646  ************************************
00:15:05.646   16:58:58	-- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid0 3
00:15:05.646   16:58:58	-- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']'
00:15:05.646   16:58:58	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:15:05.646   16:58:58	-- common/autotest_common.sh@10 -- # set +x
00:15:05.646  ************************************
00:15:05.646  START TEST raid_superblock_test
00:15:05.646  ************************************
00:15:05.646   16:58:58	-- common/autotest_common.sh@1114 -- # raid_superblock_test raid0 3
00:15:05.646   16:58:58	-- bdev/bdev_raid.sh@338 -- # local raid_level=raid0
00:15:05.646   16:58:58	-- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=3
00:15:05.646   16:58:58	-- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=()
00:15:05.646   16:58:58	-- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc
00:15:05.646   16:58:58	-- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=()
00:15:05.646   16:58:58	-- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt
00:15:05.646   16:58:58	-- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=()
00:15:05.646   16:58:58	-- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid
00:15:05.646   16:58:58	-- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1
00:15:05.646   16:58:58	-- bdev/bdev_raid.sh@344 -- # local strip_size
00:15:05.646   16:58:58	-- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg
00:15:05.646   16:58:58	-- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid
00:15:05.646   16:58:58	-- bdev/bdev_raid.sh@347 -- # local raid_bdev
00:15:05.646   16:58:58	-- bdev/bdev_raid.sh@349 -- # '[' raid0 '!=' raid1 ']'
00:15:05.646   16:58:58	-- bdev/bdev_raid.sh@350 -- # strip_size=64
00:15:05.646   16:58:58	-- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64'
00:15:05.646   16:58:58	-- bdev/bdev_raid.sh@357 -- # raid_pid=126106
00:15:05.646   16:58:58	-- bdev/bdev_raid.sh@358 -- # waitforlisten 126106 /var/tmp/spdk-raid.sock
00:15:05.646   16:58:58	-- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid
00:15:05.646   16:58:58	-- common/autotest_common.sh@829 -- # '[' -z 126106 ']'
00:15:05.646   16:58:58	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock
00:15:05.646   16:58:58	-- common/autotest_common.sh@834 -- # local max_retries=100
00:15:05.646   16:58:58	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...'
00:15:05.646  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...
00:15:05.646   16:58:58	-- common/autotest_common.sh@838 -- # xtrace_disable
00:15:05.646   16:58:58	-- common/autotest_common.sh@10 -- # set +x
00:15:05.646  [2024-11-19 16:58:58.409427] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:15:05.646  [2024-11-19 16:58:58.409802] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126106 ]
00:15:05.905  [2024-11-19 16:58:58.551774] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:15:05.905  [2024-11-19 16:58:58.599118] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:15:05.905  [2024-11-19 16:58:58.641821] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:15:06.846   16:58:59	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:15:06.846   16:58:59	-- common/autotest_common.sh@862 -- # return 0
00:15:06.846   16:58:59	-- bdev/bdev_raid.sh@361 -- # (( i = 1 ))
00:15:06.846   16:58:59	-- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs ))
00:15:06.846   16:58:59	-- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1
00:15:06.846   16:58:59	-- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1
00:15:06.846   16:58:59	-- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001
00:15:06.846   16:58:59	-- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc)
00:15:06.846   16:58:59	-- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt)
00:15:06.846   16:58:59	-- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:15:06.846   16:58:59	-- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1
00:15:06.846  malloc1
00:15:06.846   16:58:59	-- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:15:07.104  [2024-11-19 16:58:59.901599] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:15:07.104  [2024-11-19 16:58:59.901934] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:15:07.104  [2024-11-19 16:58:59.902021] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000005a80
00:15:07.104  [2024-11-19 16:58:59.902163] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:15:07.104  [2024-11-19 16:58:59.905098] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:15:07.104  [2024-11-19 16:58:59.905320] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:15:07.104  pt1
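
For the superblock test each member is a passthru bdev with a fixed UUID layered over a malloc disk, so the raid superblock records stable, predictable member identities. The per-member pattern from the trace (rpc_py as before):

    rpc_py bdev_malloc_create 32 512 -b malloc1
    rpc_py bdev_passthru_create -b malloc1 -p pt1 \
        -u 00000000-0000-0000-0000-000000000001
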
00:15:07.104   16:58:59	-- bdev/bdev_raid.sh@361 -- # (( i++ ))
00:15:07.104   16:58:59	-- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs ))
00:15:07.104   16:58:59	-- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2
00:15:07.104   16:58:59	-- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2
00:15:07.104   16:58:59	-- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002
00:15:07.104   16:58:59	-- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc)
00:15:07.104   16:58:59	-- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt)
00:15:07.104   16:58:59	-- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:15:07.104   16:58:59	-- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2
00:15:07.362  malloc2
00:15:07.362   16:59:00	-- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:15:07.645  [2024-11-19 16:59:00.290872] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:15:07.645  [2024-11-19 16:59:00.291166] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:15:07.645  [2024-11-19 16:59:00.291241] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680
00:15:07.645  [2024-11-19 16:59:00.291366] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:15:07.645  [2024-11-19 16:59:00.293845] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:15:07.645  [2024-11-19 16:59:00.294001] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:15:07.645  pt2
00:15:07.645   16:59:00	-- bdev/bdev_raid.sh@361 -- # (( i++ ))
00:15:07.645   16:59:00	-- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs ))
00:15:07.645   16:59:00	-- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3
00:15:07.645   16:59:00	-- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3
00:15:07.645   16:59:00	-- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003
00:15:07.645   16:59:00	-- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc)
00:15:07.645   16:59:00	-- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt)
00:15:07.645   16:59:00	-- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:15:07.645   16:59:00	-- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3
00:15:07.914  malloc3
00:15:07.914   16:59:00	-- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:15:07.914  [2024-11-19 16:59:00.754113] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:15:07.914  [2024-11-19 16:59:00.754388] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:15:07.914  [2024-11-19 16:59:00.754483] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
00:15:07.914  [2024-11-19 16:59:00.754614] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:15:07.914  [2024-11-19 16:59:00.757169] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:15:07.914  [2024-11-19 16:59:00.757348] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:15:07.914  pt3
00:15:08.172   16:59:00	-- bdev/bdev_raid.sh@361 -- # (( i++ ))
00:15:08.172   16:59:00	-- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs ))
00:15:08.172   16:59:00	-- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3' -n raid_bdev1 -s
00:15:08.172  [2024-11-19 16:59:00.942258] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:15:08.172  [2024-11-19 16:59:00.944654] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:15:08.172  [2024-11-19 16:59:00.944844] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:15:08.172  [2024-11-19 16:59:00.945057] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007880
00:15:08.172  [2024-11-19 16:59:00.945246] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512
00:15:08.172  [2024-11-19 16:59:00.945445] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000022c0
00:15:08.172  [2024-11-19 16:59:00.946018] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007880
00:15:08.172  [2024-11-19 16:59:00.946124] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000007880
00:15:08.172  [2024-11-19 16:59:00.946383] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:15:08.172   16:59:00	-- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3
00:15:08.172   16:59:00	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:15:08.172   16:59:00	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:15:08.172   16:59:00	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid0
00:15:08.172   16:59:00	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:15:08.172   16:59:00	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:15:08.172   16:59:00	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:15:08.172   16:59:00	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:15:08.172   16:59:00	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:15:08.172   16:59:00	-- bdev/bdev_raid.sh@125 -- # local tmp
00:15:08.172    16:59:00	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:15:08.172    16:59:00	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:08.430   16:59:01	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:15:08.431    "name": "raid_bdev1",
00:15:08.431    "uuid": "88492d3a-a3b5-44d7-b78f-21740c597cec",
00:15:08.431    "strip_size_kb": 64,
00:15:08.431    "state": "online",
00:15:08.431    "raid_level": "raid0",
00:15:08.431    "superblock": true,
00:15:08.431    "num_base_bdevs": 3,
00:15:08.431    "num_base_bdevs_discovered": 3,
00:15:08.431    "num_base_bdevs_operational": 3,
00:15:08.431    "base_bdevs_list": [
00:15:08.431      {
00:15:08.431        "name": "pt1",
00:15:08.431        "uuid": "5f4d0a55-4a7c-5ef7-a604-e3c2534853e9",
00:15:08.431        "is_configured": true,
00:15:08.431        "data_offset": 2048,
00:15:08.431        "data_size": 63488
00:15:08.431      },
00:15:08.431      {
00:15:08.431        "name": "pt2",
00:15:08.431        "uuid": "2f201a04-80db-513d-8a77-eea4c6072c92",
00:15:08.431        "is_configured": true,
00:15:08.431        "data_offset": 2048,
00:15:08.431        "data_size": 63488
00:15:08.431      },
00:15:08.431      {
00:15:08.431        "name": "pt3",
00:15:08.431        "uuid": "4e9a72fe-f2b4-5721-b2f0-80dc6db70ada",
00:15:08.431        "is_configured": true,
00:15:08.431        "data_offset": 2048,
00:15:08.431        "data_size": 63488
00:15:08.431      }
00:15:08.431    ]
00:15:08.431  }'
00:15:08.431   16:59:01	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:15:08.431   16:59:01	-- common/autotest_common.sh@10 -- # set +x
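verify_raid_bdev_state itself is defined earlier in bdev_raid.sh and is only traced here; a simplified sketch consistent with the locals and the bdev_raid_get_bdevs/jq calls visible in the trace (the exact helper body is an assumption; $rpc and $sock are the shorthands from the sketch above):

    verify_raid_bdev_state() {
        local raid_bdev_name=$1 expected_state=$2 raid_level=$3
        local strip_size=$4 num_base_bdevs_operational=$5
        local raid_bdev_info
        # fetch the one entry matching the bdev under test
        raid_bdev_info=$($rpc -s $sock bdev_raid_get_bdevs all |
            jq -r ".[] | select(.name == \"$raid_bdev_name\")")
        [[ $(jq -r .state <<<"$raid_bdev_info") == "$expected_state" ]] &&
        [[ $(jq -r .raid_level <<<"$raid_bdev_info") == "$raid_level" ]] &&
        [[ $(jq -r .strip_size_kb <<<"$raid_bdev_info") == "$strip_size" ]] &&
        [[ $(jq -r .num_base_bdevs_operational <<<"$raid_bdev_info") == "$num_base_bdevs_operational" ]]
    }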
00:15:08.996    16:59:01	-- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid'
00:15:08.996    16:59:01	-- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1
00:15:09.253  [2024-11-19 16:59:01.926736] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:15:09.253   16:59:01	-- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=88492d3a-a3b5-44d7-b78f-21740c597cec
00:15:09.254   16:59:01	-- bdev/bdev_raid.sh@380 -- # '[' -z 88492d3a-a3b5-44d7-b78f-21740c597cec ']'
00:15:09.254   16:59:01	-- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1
00:15:09.254  [2024-11-19 16:59:02.110550] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:15:09.254  [2024-11-19 16:59:02.110751] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:15:09.254  [2024-11-19 16:59:02.111003] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:15:09.254  [2024-11-19 16:59:02.111220] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:15:09.254  [2024-11-19 16:59:02.111315] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007880 name raid_bdev1, state offline
00:15:09.512    16:59:02	-- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:15:09.512    16:59:02	-- bdev/bdev_raid.sh@386 -- # jq -r '.[]'
00:15:09.512   16:59:02	-- bdev/bdev_raid.sh@386 -- # raid_bdev=
00:15:09.512   16:59:02	-- bdev/bdev_raid.sh@387 -- # '[' -n '' ']'
00:15:09.512   16:59:02	-- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}"
00:15:09.512   16:59:02	-- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1
00:15:09.770   16:59:02	-- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}"
00:15:09.770   16:59:02	-- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2
00:15:10.028   16:59:02	-- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}"
00:15:10.028   16:59:02	-- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3
00:15:10.288    16:59:02	-- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs
00:15:10.288    16:59:02	-- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any'
00:15:10.288   16:59:03	-- bdev/bdev_raid.sh@395 -- # '[' false == true ']'
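After bdev_raid_delete and the bdev_passthru_delete loop, the script asserts that nothing was leaked: bdev_raid_get_bdevs must return an empty list, and no bdev with product_name "passthru" may remain. Condensed from the trace above:

    raid_bdev=$($rpc -s $sock bdev_raid_get_bdevs all | jq -r '.[]')
    [[ -z $raid_bdev ]]                      # no raid bdev left behind
    leftover=$($rpc -s $sock bdev_get_bdevs |
        jq -r '[.[] | select(.product_name == "passthru")] | any')
    [[ $leftover == false ]]                 # no passthru bdevs left behind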
00:15:10.288   16:59:03	-- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1
00:15:10.288   16:59:03	-- common/autotest_common.sh@650 -- # local es=0
00:15:10.288   16:59:03	-- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1
00:15:10.288   16:59:03	-- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:15:10.288   16:59:03	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:15:10.288    16:59:03	-- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:15:10.288   16:59:03	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:15:10.288    16:59:03	-- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:15:10.288   16:59:03	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:15:10.288   16:59:03	-- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:15:10.288   16:59:03	-- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]]
00:15:10.288   16:59:03	-- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1
00:15:10.547  [2024-11-19 16:59:03.322746] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed
00:15:10.547  [2024-11-19 16:59:03.325213] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
00:15:10.547  [2024-11-19 16:59:03.325409] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed
00:15:10.547  [2024-11-19 16:59:03.325492] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1
00:15:10.547  [2024-11-19 16:59:03.325724] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2
00:15:10.547  [2024-11-19 16:59:03.325788] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3
00:15:10.547  [2024-11-19 16:59:03.325991] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:15:10.547  [2024-11-19 16:59:03.326029] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007e80 name raid_bdev1, state configuring
00:15:10.547  request:
00:15:10.547  {
00:15:10.547    "name": "raid_bdev1",
00:15:10.547    "raid_level": "raid0",
00:15:10.547    "base_bdevs": [
00:15:10.547      "malloc1",
00:15:10.547      "malloc2",
00:15:10.547      "malloc3"
00:15:10.547    ],
00:15:10.547    "superblock": false,
00:15:10.547    "strip_size_kb": 64,
00:15:10.547    "method": "bdev_raid_create",
00:15:10.547    "req_id": 1
00:15:10.547  }
00:15:10.547  Got JSON-RPC error response
00:15:10.547  response:
00:15:10.547  {
00:15:10.547    "code": -17,
00:15:10.547    "message": "Failed to create RAID bdev raid_bdev1: File exists"
00:15:10.547  }
00:15:10.547   16:59:03	-- common/autotest_common.sh@653 -- # es=1
00:15:10.547   16:59:03	-- common/autotest_common.sh@661 -- # (( es > 128 ))
00:15:10.547   16:59:03	-- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:15:10.547   16:59:03	-- common/autotest_common.sh@677 -- # (( !es == 0 ))
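The failure is expected: the three malloc bdevs still carry the raid superblock written in the earlier run, so a fresh bdev_raid_create without -s is rejected with -17 (File exists). The NOT wrapper from common/autotest_common.sh inverts the exit status so the expected failure counts as a pass; conceptually (the real helper also validates that its argument is executable, as the type -t / type -P trace above shows, so this is only a sketch):

    NOT() {
        # run the wrapped command; succeed only if it fails
        if "$@"; then
            return 1
        fi
        return 0
    }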
00:15:10.547    16:59:03	-- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:15:10.547    16:59:03	-- bdev/bdev_raid.sh@403 -- # jq -r '.[]'
00:15:10.806   16:59:03	-- bdev/bdev_raid.sh@403 -- # raid_bdev=
00:15:10.806   16:59:03	-- bdev/bdev_raid.sh@404 -- # '[' -n '' ']'
00:15:10.806   16:59:03	-- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:15:11.065  [2024-11-19 16:59:03.686739] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:15:11.065  [2024-11-19 16:59:03.687049] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:15:11.065  [2024-11-19 16:59:03.687129] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x616000008480
00:15:11.065  [2024-11-19 16:59:03.687220] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:15:11.065  [2024-11-19 16:59:03.689764] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:15:11.065  [2024-11-19 16:59:03.689905] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:15:11.065  [2024-11-19 16:59:03.690103] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1
00:15:11.065  [2024-11-19 16:59:03.690211] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:15:11.065  pt1
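Recreating the passthru bdev is enough to bring the base back: the examine callback finds the superblock written earlier on pt1 (raid_bdev_examine_load_sb_cb above) and re-claims it for raid_bdev1, which re-enters the configuring state with one of three bases discovered. The deterministic -u UUID keeps the recreated bdev identical to the one the superblock describes:

    $rpc -s $sock bdev_passthru_create -b malloc1 -p pt1 \
        -u 00000000-0000-0000-0000-000000000001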
00:15:11.065   16:59:03	-- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3
00:15:11.065   16:59:03	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:15:11.065   16:59:03	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:15:11.065   16:59:03	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid0
00:15:11.065   16:59:03	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:15:11.065   16:59:03	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:15:11.065   16:59:03	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:15:11.065   16:59:03	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:15:11.065   16:59:03	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:15:11.065   16:59:03	-- bdev/bdev_raid.sh@125 -- # local tmp
00:15:11.065    16:59:03	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:15:11.065    16:59:03	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:11.065   16:59:03	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:15:11.065    "name": "raid_bdev1",
00:15:11.065    "uuid": "88492d3a-a3b5-44d7-b78f-21740c597cec",
00:15:11.065    "strip_size_kb": 64,
00:15:11.065    "state": "configuring",
00:15:11.065    "raid_level": "raid0",
00:15:11.065    "superblock": true,
00:15:11.065    "num_base_bdevs": 3,
00:15:11.065    "num_base_bdevs_discovered": 1,
00:15:11.065    "num_base_bdevs_operational": 3,
00:15:11.065    "base_bdevs_list": [
00:15:11.065      {
00:15:11.065        "name": "pt1",
00:15:11.065        "uuid": "5f4d0a55-4a7c-5ef7-a604-e3c2534853e9",
00:15:11.065        "is_configured": true,
00:15:11.065        "data_offset": 2048,
00:15:11.065        "data_size": 63488
00:15:11.065      },
00:15:11.065      {
00:15:11.065        "name": null,
00:15:11.065        "uuid": "2f201a04-80db-513d-8a77-eea4c6072c92",
00:15:11.065        "is_configured": false,
00:15:11.065        "data_offset": 2048,
00:15:11.065        "data_size": 63488
00:15:11.065      },
00:15:11.065      {
00:15:11.065        "name": null,
00:15:11.065        "uuid": "4e9a72fe-f2b4-5721-b2f0-80dc6db70ada",
00:15:11.065        "is_configured": false,
00:15:11.065        "data_offset": 2048,
00:15:11.065        "data_size": 63488
00:15:11.065      }
00:15:11.065    ]
00:15:11.065  }'
00:15:11.065   16:59:03	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:15:11.065   16:59:03	-- common/autotest_common.sh@10 -- # set +x
00:15:11.632   16:59:04	-- bdev/bdev_raid.sh@414 -- # '[' 3 -gt 2 ']'
00:15:11.632   16:59:04	-- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:15:11.891  [2024-11-19 16:59:04.726999] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:15:11.891  [2024-11-19 16:59:04.727318] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:15:11.891  [2024-11-19 16:59:04.727402] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x616000008d80
00:15:11.891  [2024-11-19 16:59:04.727522] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:15:11.891  [2024-11-19 16:59:04.728060] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:15:11.891  [2024-11-19 16:59:04.728210] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:15:11.891  [2024-11-19 16:59:04.728398] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2
00:15:11.891  [2024-11-19 16:59:04.728500] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:15:11.891  pt2
00:15:12.149   16:59:04	-- bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2
00:15:12.149  [2024-11-19 16:59:04.923023] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2
00:15:12.149   16:59:04	-- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3
00:15:12.149   16:59:04	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:15:12.149   16:59:04	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:15:12.149   16:59:04	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid0
00:15:12.149   16:59:04	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:15:12.149   16:59:04	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:15:12.149   16:59:04	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:15:12.149   16:59:04	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:15:12.149   16:59:04	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:15:12.150   16:59:04	-- bdev/bdev_raid.sh@125 -- # local tmp
00:15:12.150    16:59:04	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:15:12.150    16:59:04	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:12.409   16:59:05	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:15:12.409    "name": "raid_bdev1",
00:15:12.409    "uuid": "88492d3a-a3b5-44d7-b78f-21740c597cec",
00:15:12.409    "strip_size_kb": 64,
00:15:12.409    "state": "configuring",
00:15:12.409    "raid_level": "raid0",
00:15:12.409    "superblock": true,
00:15:12.409    "num_base_bdevs": 3,
00:15:12.409    "num_base_bdevs_discovered": 1,
00:15:12.409    "num_base_bdevs_operational": 3,
00:15:12.409    "base_bdevs_list": [
00:15:12.409      {
00:15:12.409        "name": "pt1",
00:15:12.409        "uuid": "5f4d0a55-4a7c-5ef7-a604-e3c2534853e9",
00:15:12.409        "is_configured": true,
00:15:12.409        "data_offset": 2048,
00:15:12.409        "data_size": 63488
00:15:12.409      },
00:15:12.409      {
00:15:12.409        "name": null,
00:15:12.409        "uuid": "2f201a04-80db-513d-8a77-eea4c6072c92",
00:15:12.409        "is_configured": false,
00:15:12.409        "data_offset": 2048,
00:15:12.409        "data_size": 63488
00:15:12.409      },
00:15:12.409      {
00:15:12.409        "name": null,
00:15:12.409        "uuid": "4e9a72fe-f2b4-5721-b2f0-80dc6db70ada",
00:15:12.409        "is_configured": false,
00:15:12.409        "data_offset": 2048,
00:15:12.409        "data_size": 63488
00:15:12.409      }
00:15:12.409    ]
00:15:12.409  }'
00:15:12.409   16:59:05	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:15:12.409   16:59:05	-- common/autotest_common.sh@10 -- # set +x
00:15:12.977   16:59:05	-- bdev/bdev_raid.sh@422 -- # (( i = 1 ))
00:15:12.977   16:59:05	-- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs ))
00:15:12.977   16:59:05	-- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:15:13.235  [2024-11-19 16:59:05.907214] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:15:13.235  [2024-11-19 16:59:05.907499] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:15:13.235  [2024-11-19 16:59:05.907567] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x616000009080
00:15:13.235  [2024-11-19 16:59:05.907670] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:15:13.235  [2024-11-19 16:59:05.908100] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:15:13.235  [2024-11-19 16:59:05.908268] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:15:13.235  [2024-11-19 16:59:05.908458] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2
00:15:13.235  [2024-11-19 16:59:05.908604] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:15:13.235  pt2
00:15:13.235   16:59:05	-- bdev/bdev_raid.sh@422 -- # (( i++ ))
00:15:13.235   16:59:05	-- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs ))
00:15:13.235   16:59:05	-- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:15:13.493  [2024-11-19 16:59:06.131308] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:15:13.493  [2024-11-19 16:59:06.131606] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:15:13.493  [2024-11-19 16:59:06.131675] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x616000009380
00:15:13.493  [2024-11-19 16:59:06.131782] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:15:13.493  [2024-11-19 16:59:06.132240] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:15:13.493  [2024-11-19 16:59:06.132401] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:15:13.493  [2024-11-19 16:59:06.132581] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3
00:15:13.493  [2024-11-19 16:59:06.132680] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:15:13.493  [2024-11-19 16:59:06.132828] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008a80
00:15:13.493  [2024-11-19 16:59:06.132967] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512
00:15:13.493  [2024-11-19 16:59:06.133078] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000026d0
00:15:13.493  [2024-11-19 16:59:06.133394] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008a80
00:15:13.493  [2024-11-19 16:59:06.133495] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008a80
00:15:13.493  [2024-11-19 16:59:06.133664] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:15:13.493  pt3
00:15:13.493   16:59:06	-- bdev/bdev_raid.sh@422 -- # (( i++ ))
00:15:13.493   16:59:06	-- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs ))
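The same recreate step is then looped over the remaining bases; once pt3 is claimed the discovered count reaches num_base_bdevs and the raid module immediately brings raid_bdev1 online (the io-device register messages above). The loop, distilled from the trace:

    for i in 2 3; do
        $rpc -s $sock bdev_passthru_create -b malloc$i -p pt$i \
            -u 00000000-0000-0000-0000-00000000000$i
    done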
00:15:13.493   16:59:06	-- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3
00:15:13.493   16:59:06	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:15:13.493   16:59:06	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:15:13.493   16:59:06	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid0
00:15:13.493   16:59:06	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:15:13.493   16:59:06	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:15:13.493   16:59:06	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:15:13.493   16:59:06	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:15:13.493   16:59:06	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:15:13.493   16:59:06	-- bdev/bdev_raid.sh@125 -- # local tmp
00:15:13.493    16:59:06	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:15:13.493    16:59:06	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:13.752   16:59:06	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:15:13.752    "name": "raid_bdev1",
00:15:13.752    "uuid": "88492d3a-a3b5-44d7-b78f-21740c597cec",
00:15:13.752    "strip_size_kb": 64,
00:15:13.752    "state": "online",
00:15:13.752    "raid_level": "raid0",
00:15:13.752    "superblock": true,
00:15:13.752    "num_base_bdevs": 3,
00:15:13.752    "num_base_bdevs_discovered": 3,
00:15:13.752    "num_base_bdevs_operational": 3,
00:15:13.752    "base_bdevs_list": [
00:15:13.752      {
00:15:13.752        "name": "pt1",
00:15:13.752        "uuid": "5f4d0a55-4a7c-5ef7-a604-e3c2534853e9",
00:15:13.752        "is_configured": true,
00:15:13.752        "data_offset": 2048,
00:15:13.752        "data_size": 63488
00:15:13.752      },
00:15:13.752      {
00:15:13.753        "name": "pt2",
00:15:13.753        "uuid": "2f201a04-80db-513d-8a77-eea4c6072c92",
00:15:13.753        "is_configured": true,
00:15:13.753        "data_offset": 2048,
00:15:13.753        "data_size": 63488
00:15:13.753      },
00:15:13.753      {
00:15:13.753        "name": "pt3",
00:15:13.753        "uuid": "4e9a72fe-f2b4-5721-b2f0-80dc6db70ada",
00:15:13.753        "is_configured": true,
00:15:13.753        "data_offset": 2048,
00:15:13.753        "data_size": 63488
00:15:13.753      }
00:15:13.753    ]
00:15:13.753  }'
00:15:13.753   16:59:06	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:15:13.753   16:59:06	-- common/autotest_common.sh@10 -- # set +x
00:15:14.321    16:59:06	-- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1
00:15:14.321    16:59:06	-- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid'
00:15:14.321  [2024-11-19 16:59:07.167695] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:15:14.579   16:59:07	-- bdev/bdev_raid.sh@430 -- # '[' 88492d3a-a3b5-44d7-b78f-21740c597cec '!=' 88492d3a-a3b5-44d7-b78f-21740c597cec ']'
00:15:14.579   16:59:07	-- bdev/bdev_raid.sh@434 -- # has_redundancy raid0
00:15:14.579   16:59:07	-- bdev/bdev_raid.sh@195 -- # case $1 in
00:15:14.579   16:59:07	-- bdev/bdev_raid.sh@197 -- # return 1
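has_redundancy returning 1 for raid0 is what drives the flow here: since the level cannot survive losing a base bdev, the suite skips the degraded-array checks. Judging from the case/return pattern in the trace, the helper is essentially a level classifier; a sketch (which levels return 0 is an assumption, raid1 being the obvious candidate):

    has_redundancy() {
        case $1 in
            raid1) return 0 ;;   # tolerates base bdev loss
            *)     return 1 ;;   # raid0, concat: no redundancy
        esac
    }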
00:15:14.579   16:59:07	-- bdev/bdev_raid.sh@511 -- # killprocess 126106
00:15:14.579   16:59:07	-- common/autotest_common.sh@936 -- # '[' -z 126106 ']'
00:15:14.579   16:59:07	-- common/autotest_common.sh@940 -- # kill -0 126106
00:15:14.579    16:59:07	-- common/autotest_common.sh@941 -- # uname
00:15:14.579   16:59:07	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:15:14.579    16:59:07	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 126106
00:15:14.579   16:59:07	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:15:14.579   16:59:07	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:15:14.579   16:59:07	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 126106'
00:15:14.579  killing process with pid 126106
00:15:14.579   16:59:07	-- common/autotest_common.sh@955 -- # kill 126106
00:15:14.579  [2024-11-19 16:59:07.227604] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:15:14.579   16:59:07	-- common/autotest_common.sh@960 -- # wait 126106
00:15:14.579  [2024-11-19 16:59:07.227841] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:15:14.579  [2024-11-19 16:59:07.228080] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:15:14.579  [2024-11-19 16:59:07.228120] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008a80 name raid_bdev1, state offline
00:15:14.579  [2024-11-19 16:59:07.264287] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit
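Teardown goes through killprocess, whose traced steps are: verify the pid is alive with kill -0, inspect the process name via ps to confirm it is not a sudo wrapper, send the default signal, then wait so the exit status is reaped before the next test starts. A simplified sketch of that sequence (the real helper has extra branches for sudo-owned processes):

    killprocess() {
        local pid=$1
        kill -0 "$pid" || return 1            # must still be running
        ps --no-headers -o comm= "$pid"       # inspect the process name
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                           # reap and propagate exit status
    }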
00:15:14.838  ************************************
00:15:14.838  END TEST raid_superblock_test
00:15:14.838  ************************************
00:15:14.838   16:59:07	-- bdev/bdev_raid.sh@513 -- # return 0
00:15:14.838  
00:15:14.838  real	0m9.166s
00:15:14.838  user	0m16.046s
00:15:14.838  sys	0m1.639s
00:15:14.838   16:59:07	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:15:14.838   16:59:07	-- common/autotest_common.sh@10 -- # set +x
00:15:14.838   16:59:07	-- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1
00:15:14.838   16:59:07	-- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test concat 3 false
00:15:14.838   16:59:07	-- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']'
00:15:14.838   16:59:07	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:15:14.838   16:59:07	-- common/autotest_common.sh@10 -- # set +x
00:15:14.838  ************************************
00:15:14.838  START TEST raid_state_function_test
00:15:14.838  ************************************
00:15:14.838   16:59:07	-- common/autotest_common.sh@1114 -- # raid_state_function_test concat 3 false
00:15:14.838   16:59:07	-- bdev/bdev_raid.sh@202 -- # local raid_level=concat
00:15:14.838   16:59:07	-- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3
00:15:14.838   16:59:07	-- bdev/bdev_raid.sh@204 -- # local superblock=false
00:15:14.838   16:59:07	-- bdev/bdev_raid.sh@205 -- # local raid_bdev
00:15:14.838    16:59:07	-- bdev/bdev_raid.sh@206 -- # (( i = 1 ))
00:15:14.838    16:59:07	-- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:15:14.838    16:59:07	-- bdev/bdev_raid.sh@206 -- # echo BaseBdev1
00:15:14.838    16:59:07	-- bdev/bdev_raid.sh@206 -- # (( i++ ))
00:15:14.838    16:59:07	-- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:15:14.838    16:59:07	-- bdev/bdev_raid.sh@206 -- # echo BaseBdev2
00:15:14.838    16:59:07	-- bdev/bdev_raid.sh@206 -- # (( i++ ))
00:15:14.838    16:59:07	-- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:15:14.838    16:59:07	-- bdev/bdev_raid.sh@206 -- # echo BaseBdev3
00:15:14.838    16:59:07	-- bdev/bdev_raid.sh@206 -- # (( i++ ))
00:15:14.838    16:59:07	-- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:15:14.838   16:59:07	-- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3')
00:15:14.838   16:59:07	-- bdev/bdev_raid.sh@206 -- # local base_bdevs
00:15:14.838   16:59:07	-- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid
00:15:14.838   16:59:07	-- bdev/bdev_raid.sh@208 -- # local strip_size
00:15:14.838   16:59:07	-- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg
00:15:14.838   16:59:07	-- bdev/bdev_raid.sh@210 -- # local superblock_create_arg
00:15:14.838   16:59:07	-- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']'
00:15:14.838   16:59:07	-- bdev/bdev_raid.sh@213 -- # strip_size=64
00:15:14.838   16:59:07	-- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64'
00:15:14.838   16:59:07	-- bdev/bdev_raid.sh@219 -- # '[' false = true ']'
00:15:14.838   16:59:07	-- bdev/bdev_raid.sh@222 -- # superblock_create_arg=
00:15:14.838   16:59:07	-- bdev/bdev_raid.sh@226 -- # raid_pid=126398
00:15:14.838   16:59:07	-- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid
00:15:14.838  Process raid pid: 126398
00:15:14.838   16:59:07	-- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 126398'
00:15:14.838   16:59:07	-- bdev/bdev_raid.sh@228 -- # waitforlisten 126398 /var/tmp/spdk-raid.sock
00:15:14.838   16:59:07	-- common/autotest_common.sh@829 -- # '[' -z 126398 ']'
00:15:14.838   16:59:07	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock
00:15:14.838   16:59:07	-- common/autotest_common.sh@834 -- # local max_retries=100
00:15:14.838   16:59:07	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...'
00:15:14.838  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...
00:15:14.838   16:59:07	-- common/autotest_common.sh@838 -- # xtrace_disable
00:15:14.838   16:59:07	-- common/autotest_common.sh@10 -- # set +x
00:15:14.838  [2024-11-19 16:59:07.651695] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:15:14.838  [2024-11-19 16:59:07.652071] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:15:15.096  [2024-11-19 16:59:07.794738] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:15:15.096  [2024-11-19 16:59:07.836574] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:15:15.096  [2024-11-19 16:59:07.877593] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:15:16.032   16:59:08	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:15:16.032   16:59:08	-- common/autotest_common.sh@862 -- # return 0
00:15:16.032   16:59:08	-- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
00:15:16.033  [2024-11-19 16:59:08.842699] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:15:16.033  [2024-11-19 16:59:08.842927] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:15:16.033  [2024-11-19 16:59:08.843015] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:15:16.033  [2024-11-19 16:59:08.843065] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:15:16.033  [2024-11-19 16:59:08.843091] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:15:16.033  [2024-11-19 16:59:08.843150] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
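Unlike the superblock test, bdev_raid_create here is called before any of the base bdevs exist; the RPC still succeeds and simply records the missing bases, leaving Existed_Raid in the configuring state until all three appear. The state can be asserted the same way as before:

    $rpc -s $sock bdev_raid_create -z 64 -r concat \
        -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
    state=$($rpc -s $sock bdev_raid_get_bdevs all |
        jq -r '.[] | select(.name == "Existed_Raid") | .state')
    [[ $state == configuring ]]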
00:15:16.033   16:59:08	-- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:15:16.033   16:59:08	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:15:16.033   16:59:08	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:15:16.033   16:59:08	-- bdev/bdev_raid.sh@119 -- # local raid_level=concat
00:15:16.033   16:59:08	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:15:16.033   16:59:08	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:15:16.033   16:59:08	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:15:16.033   16:59:08	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:15:16.033   16:59:08	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:15:16.033   16:59:08	-- bdev/bdev_raid.sh@125 -- # local tmp
00:15:16.033    16:59:08	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:15:16.033    16:59:08	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:15:16.291   16:59:09	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:15:16.291    "name": "Existed_Raid",
00:15:16.291    "uuid": "00000000-0000-0000-0000-000000000000",
00:15:16.291    "strip_size_kb": 64,
00:15:16.291    "state": "configuring",
00:15:16.291    "raid_level": "concat",
00:15:16.291    "superblock": false,
00:15:16.291    "num_base_bdevs": 3,
00:15:16.291    "num_base_bdevs_discovered": 0,
00:15:16.291    "num_base_bdevs_operational": 3,
00:15:16.291    "base_bdevs_list": [
00:15:16.291      {
00:15:16.291        "name": "BaseBdev1",
00:15:16.291        "uuid": "00000000-0000-0000-0000-000000000000",
00:15:16.291        "is_configured": false,
00:15:16.291        "data_offset": 0,
00:15:16.291        "data_size": 0
00:15:16.291      },
00:15:16.291      {
00:15:16.291        "name": "BaseBdev2",
00:15:16.291        "uuid": "00000000-0000-0000-0000-000000000000",
00:15:16.291        "is_configured": false,
00:15:16.291        "data_offset": 0,
00:15:16.291        "data_size": 0
00:15:16.291      },
00:15:16.291      {
00:15:16.291        "name": "BaseBdev3",
00:15:16.291        "uuid": "00000000-0000-0000-0000-000000000000",
00:15:16.291        "is_configured": false,
00:15:16.291        "data_offset": 0,
00:15:16.291        "data_size": 0
00:15:16.291      }
00:15:16.291    ]
00:15:16.291  }'
00:15:16.291   16:59:09	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:15:16.291   16:59:09	-- common/autotest_common.sh@10 -- # set +x
00:15:16.858   16:59:09	-- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid
00:15:17.118  [2024-11-19 16:59:09.942778] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:15:17.118  [2024-11-19 16:59:09.943065] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005480 name Existed_Raid, state configuring
00:15:17.118   16:59:09	-- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
00:15:17.378  [2024-11-19 16:59:10.190841] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:15:17.378  [2024-11-19 16:59:10.191996] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:15:17.378  [2024-11-19 16:59:10.192402] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:15:17.378  [2024-11-19 16:59:10.192844] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:15:17.378  [2024-11-19 16:59:10.193145] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:15:17.378  [2024-11-19 16:59:10.193686] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:15:17.378   16:59:10	-- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1
00:15:17.636  [2024-11-19 16:59:10.388633] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:15:17.636  BaseBdev1
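bdev_malloc_create takes the backing size in MiB and the block size in bytes, so the 32/512 pair above yields exactly the 65536 num_blocks reported by bdev_get_bdevs a few lines below:

    # 32 MiB of RAM-backed storage with 512-byte blocks -> 65536 blocks
    $rpc -s $sock bdev_malloc_create 32 512 -b BaseBdev1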
00:15:17.636   16:59:10	-- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1
00:15:17.636   16:59:10	-- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1
00:15:17.636   16:59:10	-- common/autotest_common.sh@898 -- # local bdev_timeout=
00:15:17.636   16:59:10	-- common/autotest_common.sh@899 -- # local i
00:15:17.636   16:59:10	-- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:15:17.636   16:59:10	-- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:15:17.636   16:59:10	-- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:15:17.896   16:59:10	-- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000
00:15:18.155  [
00:15:18.155    {
00:15:18.155      "name": "BaseBdev1",
00:15:18.155      "aliases": [
00:15:18.155        "b4433952-976c-4442-a335-8e10a9876e03"
00:15:18.155      ],
00:15:18.155      "product_name": "Malloc disk",
00:15:18.155      "block_size": 512,
00:15:18.155      "num_blocks": 65536,
00:15:18.155      "uuid": "b4433952-976c-4442-a335-8e10a9876e03",
00:15:18.155      "assigned_rate_limits": {
00:15:18.155        "rw_ios_per_sec": 0,
00:15:18.155        "rw_mbytes_per_sec": 0,
00:15:18.155        "r_mbytes_per_sec": 0,
00:15:18.155        "w_mbytes_per_sec": 0
00:15:18.155      },
00:15:18.155      "claimed": true,
00:15:18.155      "claim_type": "exclusive_write",
00:15:18.155      "zoned": false,
00:15:18.155      "supported_io_types": {
00:15:18.155        "read": true,
00:15:18.155        "write": true,
00:15:18.155        "unmap": true,
00:15:18.155        "write_zeroes": true,
00:15:18.155        "flush": true,
00:15:18.155        "reset": true,
00:15:18.155        "compare": false,
00:15:18.155        "compare_and_write": false,
00:15:18.155        "abort": true,
00:15:18.155        "nvme_admin": false,
00:15:18.155        "nvme_io": false
00:15:18.155      },
00:15:18.155      "memory_domains": [
00:15:18.155        {
00:15:18.155          "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:15:18.155          "dma_device_type": 2
00:15:18.155        }
00:15:18.155      ],
00:15:18.155      "driver_specific": {}
00:15:18.155    }
00:15:18.155  ]
00:15:18.155   16:59:10	-- common/autotest_common.sh@905 -- # return 0
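waitforbdev blocks until the named bdev is usable: it first drains any pending examine callbacks with bdev_wait_for_examine, then queries bdev_get_bdevs with the -t timeout (2000 ms by default, per the bdev_timeout local above). A reduced sketch of the helper (the real one retries in a loop; this condensation is an assumption):

    waitforbdev() {
        local bdev_name=$1 bdev_timeout=${2:-2000}    # timeout in ms
        $rpc -s $sock bdev_wait_for_examine
        $rpc -s $sock bdev_get_bdevs -b "$bdev_name" -t "$bdev_timeout" >/dev/null
    }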
00:15:18.155   16:59:10	-- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:15:18.155   16:59:10	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:15:18.155   16:59:10	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:15:18.155   16:59:10	-- bdev/bdev_raid.sh@119 -- # local raid_level=concat
00:15:18.155   16:59:10	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:15:18.155   16:59:10	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:15:18.155   16:59:10	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:15:18.155   16:59:10	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:15:18.155   16:59:10	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:15:18.155   16:59:10	-- bdev/bdev_raid.sh@125 -- # local tmp
00:15:18.155    16:59:10	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:15:18.155    16:59:10	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:15:18.415   16:59:11	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:15:18.415    "name": "Existed_Raid",
00:15:18.415    "uuid": "00000000-0000-0000-0000-000000000000",
00:15:18.415    "strip_size_kb": 64,
00:15:18.415    "state": "configuring",
00:15:18.415    "raid_level": "concat",
00:15:18.415    "superblock": false,
00:15:18.415    "num_base_bdevs": 3,
00:15:18.415    "num_base_bdevs_discovered": 1,
00:15:18.415    "num_base_bdevs_operational": 3,
00:15:18.415    "base_bdevs_list": [
00:15:18.415      {
00:15:18.415        "name": "BaseBdev1",
00:15:18.415        "uuid": "b4433952-976c-4442-a335-8e10a9876e03",
00:15:18.415        "is_configured": true,
00:15:18.415        "data_offset": 0,
00:15:18.415        "data_size": 65536
00:15:18.415      },
00:15:18.415      {
00:15:18.415        "name": "BaseBdev2",
00:15:18.415        "uuid": "00000000-0000-0000-0000-000000000000",
00:15:18.415        "is_configured": false,
00:15:18.415        "data_offset": 0,
00:15:18.415        "data_size": 0
00:15:18.415      },
00:15:18.415      {
00:15:18.415        "name": "BaseBdev3",
00:15:18.415        "uuid": "00000000-0000-0000-0000-000000000000",
00:15:18.415        "is_configured": false,
00:15:18.415        "data_offset": 0,
00:15:18.415        "data_size": 0
00:15:18.415      }
00:15:18.415    ]
00:15:18.415  }'
00:15:18.415   16:59:11	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:15:18.415   16:59:11	-- common/autotest_common.sh@10 -- # set +x
00:15:19.071   16:59:11	-- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid
00:15:19.071  [2024-11-19 16:59:11.772981] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:15:19.071  [2024-11-19 16:59:11.773300] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005780 name Existed_Raid, state configuring
00:15:19.071   16:59:11	-- bdev/bdev_raid.sh@244 -- # '[' false = true ']'
00:15:19.071   16:59:11	-- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
00:15:19.332  [2024-11-19 16:59:12.049175] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:15:19.332  [2024-11-19 16:59:12.051803] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:15:19.332  [2024-11-19 16:59:12.052450] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:15:19.332  [2024-11-19 16:59:12.052594] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:15:19.332  [2024-11-19 16:59:12.052767] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:15:19.332   16:59:12	-- bdev/bdev_raid.sh@254 -- # (( i = 1 ))
00:15:19.332   16:59:12	-- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs ))
00:15:19.332   16:59:12	-- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:15:19.332   16:59:12	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:15:19.332   16:59:12	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:15:19.332   16:59:12	-- bdev/bdev_raid.sh@119 -- # local raid_level=concat
00:15:19.332   16:59:12	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:15:19.332   16:59:12	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:15:19.332   16:59:12	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:15:19.332   16:59:12	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:15:19.332   16:59:12	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:15:19.332   16:59:12	-- bdev/bdev_raid.sh@125 -- # local tmp
00:15:19.332    16:59:12	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:15:19.332    16:59:12	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:15:19.591   16:59:12	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:15:19.591    "name": "Existed_Raid",
00:15:19.591    "uuid": "00000000-0000-0000-0000-000000000000",
00:15:19.591    "strip_size_kb": 64,
00:15:19.591    "state": "configuring",
00:15:19.591    "raid_level": "concat",
00:15:19.591    "superblock": false,
00:15:19.591    "num_base_bdevs": 3,
00:15:19.591    "num_base_bdevs_discovered": 1,
00:15:19.591    "num_base_bdevs_operational": 3,
00:15:19.591    "base_bdevs_list": [
00:15:19.591      {
00:15:19.591        "name": "BaseBdev1",
00:15:19.591        "uuid": "b4433952-976c-4442-a335-8e10a9876e03",
00:15:19.591        "is_configured": true,
00:15:19.591        "data_offset": 0,
00:15:19.591        "data_size": 65536
00:15:19.591      },
00:15:19.591      {
00:15:19.591        "name": "BaseBdev2",
00:15:19.591        "uuid": "00000000-0000-0000-0000-000000000000",
00:15:19.591        "is_configured": false,
00:15:19.591        "data_offset": 0,
00:15:19.591        "data_size": 0
00:15:19.591      },
00:15:19.591      {
00:15:19.591        "name": "BaseBdev3",
00:15:19.591        "uuid": "00000000-0000-0000-0000-000000000000",
00:15:19.591        "is_configured": false,
00:15:19.591        "data_offset": 0,
00:15:19.591        "data_size": 0
00:15:19.591      }
00:15:19.591    ]
00:15:19.591  }'
00:15:19.591   16:59:12	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:15:19.591   16:59:12	-- common/autotest_common.sh@10 -- # set +x
00:15:20.159   16:59:12	-- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2
00:15:20.418  [2024-11-19 16:59:13.102390] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:15:20.418  BaseBdev2
00:15:20.418   16:59:13	-- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2
00:15:20.418   16:59:13	-- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2
00:15:20.418   16:59:13	-- common/autotest_common.sh@898 -- # local bdev_timeout=
00:15:20.418   16:59:13	-- common/autotest_common.sh@899 -- # local i
00:15:20.418   16:59:13	-- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:15:20.418   16:59:13	-- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:15:20.418   16:59:13	-- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:15:20.676   16:59:13	-- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000
00:15:20.676  [
00:15:20.676    {
00:15:20.676      "name": "BaseBdev2",
00:15:20.676      "aliases": [
00:15:20.676        "3ed4c0fa-f1f6-4621-bbda-2892d28b68c2"
00:15:20.676      ],
00:15:20.676      "product_name": "Malloc disk",
00:15:20.676      "block_size": 512,
00:15:20.676      "num_blocks": 65536,
00:15:20.676      "uuid": "3ed4c0fa-f1f6-4621-bbda-2892d28b68c2",
00:15:20.676      "assigned_rate_limits": {
00:15:20.676        "rw_ios_per_sec": 0,
00:15:20.676        "rw_mbytes_per_sec": 0,
00:15:20.676        "r_mbytes_per_sec": 0,
00:15:20.676        "w_mbytes_per_sec": 0
00:15:20.676      },
00:15:20.676      "claimed": true,
00:15:20.676      "claim_type": "exclusive_write",
00:15:20.676      "zoned": false,
00:15:20.676      "supported_io_types": {
00:15:20.676        "read": true,
00:15:20.676        "write": true,
00:15:20.676        "unmap": true,
00:15:20.676        "write_zeroes": true,
00:15:20.676        "flush": true,
00:15:20.676        "reset": true,
00:15:20.676        "compare": false,
00:15:20.676        "compare_and_write": false,
00:15:20.676        "abort": true,
00:15:20.676        "nvme_admin": false,
00:15:20.676        "nvme_io": false
00:15:20.676      },
00:15:20.676      "memory_domains": [
00:15:20.676        {
00:15:20.676          "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:15:20.676          "dma_device_type": 2
00:15:20.676        }
00:15:20.676      ],
00:15:20.676      "driver_specific": {}
00:15:20.676    }
00:15:20.676  ]
00:15:20.676   16:59:13	-- common/autotest_common.sh@905 -- # return 0
00:15:20.676   16:59:13	-- bdev/bdev_raid.sh@254 -- # (( i++ ))
00:15:20.676   16:59:13	-- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs ))
00:15:20.676   16:59:13	-- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:15:20.676   16:59:13	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:15:20.676   16:59:13	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:15:20.676   16:59:13	-- bdev/bdev_raid.sh@119 -- # local raid_level=concat
00:15:20.936   16:59:13	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:15:20.936   16:59:13	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:15:20.936   16:59:13	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:15:20.936   16:59:13	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:15:20.936   16:59:13	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:15:20.936   16:59:13	-- bdev/bdev_raid.sh@125 -- # local tmp
00:15:20.936    16:59:13	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:15:20.936    16:59:13	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:15:20.936   16:59:13	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:15:20.936    "name": "Existed_Raid",
00:15:20.936    "uuid": "00000000-0000-0000-0000-000000000000",
00:15:20.936    "strip_size_kb": 64,
00:15:20.936    "state": "configuring",
00:15:20.936    "raid_level": "concat",
00:15:20.936    "superblock": false,
00:15:20.936    "num_base_bdevs": 3,
00:15:20.936    "num_base_bdevs_discovered": 2,
00:15:20.936    "num_base_bdevs_operational": 3,
00:15:20.936    "base_bdevs_list": [
00:15:20.936      {
00:15:20.936        "name": "BaseBdev1",
00:15:20.936        "uuid": "b4433952-976c-4442-a335-8e10a9876e03",
00:15:20.936        "is_configured": true,
00:15:20.936        "data_offset": 0,
00:15:20.936        "data_size": 65536
00:15:20.936      },
00:15:20.936      {
00:15:20.936        "name": "BaseBdev2",
00:15:20.936        "uuid": "3ed4c0fa-f1f6-4621-bbda-2892d28b68c2",
00:15:20.936        "is_configured": true,
00:15:20.936        "data_offset": 0,
00:15:20.936        "data_size": 65536
00:15:20.936      },
00:15:20.936      {
00:15:20.936        "name": "BaseBdev3",
00:15:20.936        "uuid": "00000000-0000-0000-0000-000000000000",
00:15:20.936        "is_configured": false,
00:15:20.936        "data_offset": 0,
00:15:20.936        "data_size": 0
00:15:20.936      }
00:15:20.936    ]
00:15:20.936  }'
00:15:20.936   16:59:13	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:15:20.936   16:59:13	-- common/autotest_common.sh@10 -- # set +x
00:15:21.505   16:59:14	-- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3
00:15:21.764  [2024-11-19 16:59:14.485967] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:15:21.764  [2024-11-19 16:59:14.486225] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006080
00:15:21.764  [2024-11-19 16:59:14.486265] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512
00:15:21.764  [2024-11-19 16:59:14.486506] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002050
00:15:21.764  [2024-11-19 16:59:14.487015] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006080
00:15:21.764  [2024-11-19 16:59:14.487143] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006080
00:15:21.764  [2024-11-19 16:59:14.487474] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:15:21.764  BaseBdev3
00:15:21.764   16:59:14	-- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3
00:15:21.764   16:59:14	-- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3
00:15:21.764   16:59:14	-- common/autotest_common.sh@898 -- # local bdev_timeout=
00:15:21.765   16:59:14	-- common/autotest_common.sh@899 -- # local i
00:15:21.765   16:59:14	-- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:15:21.765   16:59:14	-- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:15:21.765   16:59:14	-- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:15:22.023   16:59:14	-- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000
00:15:22.282  [
00:15:22.282    {
00:15:22.282      "name": "BaseBdev3",
00:15:22.282      "aliases": [
00:15:22.282        "927a0f9a-aaa9-4268-9232-994012e4d950"
00:15:22.282      ],
00:15:22.282      "product_name": "Malloc disk",
00:15:22.282      "block_size": 512,
00:15:22.282      "num_blocks": 65536,
00:15:22.282      "uuid": "927a0f9a-aaa9-4268-9232-994012e4d950",
00:15:22.282      "assigned_rate_limits": {
00:15:22.282        "rw_ios_per_sec": 0,
00:15:22.282        "rw_mbytes_per_sec": 0,
00:15:22.282        "r_mbytes_per_sec": 0,
00:15:22.282        "w_mbytes_per_sec": 0
00:15:22.282      },
00:15:22.282      "claimed": true,
00:15:22.282      "claim_type": "exclusive_write",
00:15:22.282      "zoned": false,
00:15:22.282      "supported_io_types": {
00:15:22.282        "read": true,
00:15:22.282        "write": true,
00:15:22.282        "unmap": true,
00:15:22.282        "write_zeroes": true,
00:15:22.282        "flush": true,
00:15:22.282        "reset": true,
00:15:22.282        "compare": false,
00:15:22.282        "compare_and_write": false,
00:15:22.282        "abort": true,
00:15:22.282        "nvme_admin": false,
00:15:22.282        "nvme_io": false
00:15:22.282      },
00:15:22.282      "memory_domains": [
00:15:22.282        {
00:15:22.282          "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:15:22.282          "dma_device_type": 2
00:15:22.282        }
00:15:22.282      ],
00:15:22.282      "driver_specific": {}
00:15:22.282    }
00:15:22.282  ]
00:15:22.282   16:59:14	-- common/autotest_common.sh@905 -- # return 0
00:15:22.282   16:59:14	-- bdev/bdev_raid.sh@254 -- # (( i++ ))
00:15:22.282   16:59:14	-- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs ))
00:15:22.282   16:59:14	-- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 3
00:15:22.282   16:59:14	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:15:22.282   16:59:14	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:15:22.282   16:59:14	-- bdev/bdev_raid.sh@119 -- # local raid_level=concat
00:15:22.282   16:59:14	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:15:22.282   16:59:14	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:15:22.282   16:59:14	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:15:22.282   16:59:14	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:15:22.282   16:59:14	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:15:22.282   16:59:14	-- bdev/bdev_raid.sh@125 -- # local tmp
00:15:22.282    16:59:14	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:15:22.282    16:59:14	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:15:22.540   16:59:15	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:15:22.540    "name": "Existed_Raid",
00:15:22.540    "uuid": "72059aa9-625f-4b20-ae22-5f72a58c13ac",
00:15:22.540    "strip_size_kb": 64,
00:15:22.540    "state": "online",
00:15:22.540    "raid_level": "concat",
00:15:22.540    "superblock": false,
00:15:22.540    "num_base_bdevs": 3,
00:15:22.540    "num_base_bdevs_discovered": 3,
00:15:22.540    "num_base_bdevs_operational": 3,
00:15:22.540    "base_bdevs_list": [
00:15:22.540      {
00:15:22.540        "name": "BaseBdev1",
00:15:22.540        "uuid": "b4433952-976c-4442-a335-8e10a9876e03",
00:15:22.540        "is_configured": true,
00:15:22.540        "data_offset": 0,
00:15:22.540        "data_size": 65536
00:15:22.540      },
00:15:22.540      {
00:15:22.540        "name": "BaseBdev2",
00:15:22.540        "uuid": "3ed4c0fa-f1f6-4621-bbda-2892d28b68c2",
00:15:22.540        "is_configured": true,
00:15:22.540        "data_offset": 0,
00:15:22.540        "data_size": 65536
00:15:22.540      },
00:15:22.540      {
00:15:22.540        "name": "BaseBdev3",
00:15:22.540        "uuid": "927a0f9a-aaa9-4268-9232-994012e4d950",
00:15:22.540        "is_configured": true,
00:15:22.540        "data_offset": 0,
00:15:22.540        "data_size": 65536
00:15:22.540      }
00:15:22.540    ]
00:15:22.540  }'
00:15:22.540   16:59:15	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:15:22.540   16:59:15	-- common/autotest_common.sh@10 -- # set +x
00:15:23.107   16:59:15	-- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1
00:15:23.107  [2024-11-19 16:59:15.938437] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:15:23.107  [2024-11-19 16:59:15.938682] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:15:23.107  [2024-11-19 16:59:15.938870] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:15:23.366   16:59:15	-- bdev/bdev_raid.sh@263 -- # local expected_state
00:15:23.366   16:59:15	-- bdev/bdev_raid.sh@264 -- # has_redundancy concat
00:15:23.366   16:59:15	-- bdev/bdev_raid.sh@195 -- # case $1 in
00:15:23.366   16:59:15	-- bdev/bdev_raid.sh@197 -- # return 1
00:15:23.366   16:59:15	-- bdev/bdev_raid.sh@265 -- # expected_state=offline
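Deleting BaseBdev1 out from under the online array is the degraded-path check: because concat has no redundancy (has_redundancy returned 1 above), the expected state flips to offline with only two operational bases. Condensed:

    $rpc -s $sock bdev_malloc_delete BaseBdev1
    state=$($rpc -s $sock bdev_raid_get_bdevs all |
        jq -r '.[] | select(.name == "Existed_Raid") | .state')
    [[ $state == offline ]]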
00:15:23.366   16:59:15	-- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2
00:15:23.366   16:59:15	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:15:23.366   16:59:15	-- bdev/bdev_raid.sh@118 -- # local expected_state=offline
00:15:23.366   16:59:15	-- bdev/bdev_raid.sh@119 -- # local raid_level=concat
00:15:23.366   16:59:15	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:15:23.366   16:59:15	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2
00:15:23.366   16:59:15	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:15:23.366   16:59:15	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:15:23.366   16:59:15	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:15:23.366   16:59:15	-- bdev/bdev_raid.sh@125 -- # local tmp
00:15:23.366    16:59:15	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:15:23.366    16:59:15	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:15:23.366   16:59:16	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:15:23.366    "name": "Existed_Raid",
00:15:23.366    "uuid": "72059aa9-625f-4b20-ae22-5f72a58c13ac",
00:15:23.366    "strip_size_kb": 64,
00:15:23.366    "state": "offline",
00:15:23.366    "raid_level": "concat",
00:15:23.366    "superblock": false,
00:15:23.366    "num_base_bdevs": 3,
00:15:23.366    "num_base_bdevs_discovered": 2,
00:15:23.366    "num_base_bdevs_operational": 2,
00:15:23.366    "base_bdevs_list": [
00:15:23.366      {
00:15:23.366        "name": null,
00:15:23.366        "uuid": "00000000-0000-0000-0000-000000000000",
00:15:23.366        "is_configured": false,
00:15:23.366        "data_offset": 0,
00:15:23.366        "data_size": 65536
00:15:23.366      },
00:15:23.366      {
00:15:23.366        "name": "BaseBdev2",
00:15:23.366        "uuid": "3ed4c0fa-f1f6-4621-bbda-2892d28b68c2",
00:15:23.366        "is_configured": true,
00:15:23.366        "data_offset": 0,
00:15:23.366        "data_size": 65536
00:15:23.366      },
00:15:23.366      {
00:15:23.366        "name": "BaseBdev3",
00:15:23.366        "uuid": "927a0f9a-aaa9-4268-9232-994012e4d950",
00:15:23.366        "is_configured": true,
00:15:23.366        "data_offset": 0,
00:15:23.366        "data_size": 65536
00:15:23.366      }
00:15:23.366    ]
00:15:23.366  }'
00:15:23.366   16:59:16	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:15:23.366   16:59:16	-- common/autotest_common.sh@10 -- # set +x
00:15:23.934   16:59:16	-- bdev/bdev_raid.sh@273 -- # (( i = 1 ))
00:15:23.934   16:59:16	-- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs ))
00:15:23.934    16:59:16	-- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]'
00:15:23.934    16:59:16	-- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:15:24.192   16:59:16	-- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid
00:15:24.192   16:59:16	-- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:15:24.192   16:59:16	-- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2
00:15:24.451  [2024-11-19 16:59:17.155561] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:15:24.451   16:59:17	-- bdev/bdev_raid.sh@273 -- # (( i++ ))
00:15:24.451   16:59:17	-- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs ))
00:15:24.451    16:59:17	-- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:15:24.451    16:59:17	-- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]'
00:15:24.710   16:59:17	-- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid
00:15:24.710   16:59:17	-- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:15:24.711   16:59:17	-- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3
00:15:24.969  [2024-11-19 16:59:17.577248] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:15:24.969  [2024-11-19 16:59:17.577470] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006080 name Existed_Raid, state offline
00:15:24.969   16:59:17	-- bdev/bdev_raid.sh@273 -- # (( i++ ))
00:15:24.969   16:59:17	-- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs ))
00:15:24.969    16:59:17	-- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:15:24.969    16:59:17	-- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)'
00:15:25.228   16:59:17	-- bdev/bdev_raid.sh@281 -- # raid_bdev=
00:15:25.228   16:59:17	-- bdev/bdev_raid.sh@282 -- # '[' -n '' ']'
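Annotation: the loop above (@273 through @282) deletes the remaining base bdevs one at a time. While any base bdev is left, bdev_raid_get_bdevs all still lists Existed_Raid; after the last deletion the list comes back empty, the jq select emits nothing, raid_bdev ends up empty, and the -n test at @282 falls through to teardown:

    $SPDK/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
        | jq -r '.[0]["name"] | select(.)'    # prints nothing once the raid is gone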
00:15:25.228   16:59:17	-- bdev/bdev_raid.sh@287 -- # killprocess 126398
00:15:25.228   16:59:17	-- common/autotest_common.sh@936 -- # '[' -z 126398 ']'
00:15:25.228   16:59:17	-- common/autotest_common.sh@940 -- # kill -0 126398
00:15:25.228    16:59:17	-- common/autotest_common.sh@941 -- # uname
00:15:25.228   16:59:17	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:15:25.228    16:59:17	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 126398
00:15:25.228   16:59:17	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:15:25.228   16:59:17	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:15:25.228   16:59:17	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 126398'
00:15:25.228  killing process with pid 126398
00:15:25.228   16:59:17	-- common/autotest_common.sh@955 -- # kill 126398
00:15:25.228   16:59:17	-- common/autotest_common.sh@960 -- # wait 126398
00:15:25.228  [2024-11-19 16:59:17.885963] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:15:25.228  [2024-11-19 16:59:17.886069] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:15:25.486   16:59:18	-- bdev/bdev_raid.sh@289 -- # return 0
00:15:25.486  
00:15:25.486  real	0m10.685s
00:15:25.486  user	0m18.763s
00:15:25.486  sys	0m1.848s
00:15:25.486  ************************************
00:15:25.486  END TEST raid_state_function_test
00:15:25.486  ************************************
00:15:25.486   16:59:18	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:15:25.486   16:59:18	-- common/autotest_common.sh@10 -- # set +x
00:15:25.486   16:59:18	-- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test concat 3 true
00:15:25.486   16:59:18	-- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']'
00:15:25.486   16:59:18	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:15:25.486   16:59:18	-- common/autotest_common.sh@10 -- # set +x
00:15:25.746  ************************************
00:15:25.746  START TEST raid_state_function_test_sb
00:15:25.746  ************************************
00:15:25.746   16:59:18	-- common/autotest_common.sh@1114 -- # raid_state_function_test concat 3 true
00:15:25.746   16:59:18	-- bdev/bdev_raid.sh@202 -- # local raid_level=concat
00:15:25.746   16:59:18	-- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3
00:15:25.746   16:59:18	-- bdev/bdev_raid.sh@204 -- # local superblock=true
00:15:25.746   16:59:18	-- bdev/bdev_raid.sh@205 -- # local raid_bdev
00:15:25.746    16:59:18	-- bdev/bdev_raid.sh@206 -- # (( i = 1 ))
00:15:25.746    16:59:18	-- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:15:25.746    16:59:18	-- bdev/bdev_raid.sh@206 -- # echo BaseBdev1
00:15:25.746    16:59:18	-- bdev/bdev_raid.sh@206 -- # (( i++ ))
00:15:25.746    16:59:18	-- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:15:25.746    16:59:18	-- bdev/bdev_raid.sh@206 -- # echo BaseBdev2
00:15:25.746    16:59:18	-- bdev/bdev_raid.sh@206 -- # (( i++ ))
00:15:25.746    16:59:18	-- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:15:25.746    16:59:18	-- bdev/bdev_raid.sh@206 -- # echo BaseBdev3
00:15:25.746    16:59:18	-- bdev/bdev_raid.sh@206 -- # (( i++ ))
00:15:25.746    16:59:18	-- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:15:25.746   16:59:18	-- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3')
00:15:25.746   16:59:18	-- bdev/bdev_raid.sh@206 -- # local base_bdevs
00:15:25.746   16:59:18	-- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid
00:15:25.746   16:59:18	-- bdev/bdev_raid.sh@208 -- # local strip_size
00:15:25.746   16:59:18	-- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg
00:15:25.746   16:59:18	-- bdev/bdev_raid.sh@210 -- # local superblock_create_arg
00:15:25.746   16:59:18	-- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']'
00:15:25.746   16:59:18	-- bdev/bdev_raid.sh@213 -- # strip_size=64
00:15:25.746   16:59:18	-- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64'
00:15:25.746   16:59:18	-- bdev/bdev_raid.sh@219 -- # '[' true = true ']'
00:15:25.746   16:59:18	-- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s
00:15:25.746   16:59:18	-- bdev/bdev_raid.sh@226 -- # raid_pid=126756
00:15:25.746   16:59:18	-- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 126756'
00:15:25.746  Process raid pid: 126756
00:15:25.746   16:59:18	-- bdev/bdev_raid.sh@228 -- # waitforlisten 126756 /var/tmp/spdk-raid.sock
00:15:25.746   16:59:18	-- common/autotest_common.sh@829 -- # '[' -z 126756 ']'
00:15:25.746   16:59:18	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock
00:15:25.746   16:59:18	-- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid
00:15:25.746   16:59:18	-- common/autotest_common.sh@834 -- # local max_retries=100
00:15:25.746   16:59:18	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...'
00:15:25.746  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...
00:15:25.746   16:59:18	-- common/autotest_common.sh@838 -- # xtrace_disable
00:15:25.746   16:59:18	-- common/autotest_common.sh@10 -- # set +x
00:15:25.746  [2024-11-19 16:59:18.410525] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:15:25.746  [2024-11-19 16:59:18.411001] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:15:25.746  [2024-11-19 16:59:18.558292] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:15:26.006  [2024-11-19 16:59:18.612158] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:15:26.006  [2024-11-19 16:59:18.660521] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:15:26.577   16:59:19	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:15:26.577   16:59:19	-- common/autotest_common.sh@862 -- # return 0
00:15:26.577   16:59:19	-- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
00:15:26.836  [2024-11-19 16:59:19.537389] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:15:26.836  [2024-11-19 16:59:19.537724] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:15:26.836  [2024-11-19 16:59:19.537826] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:15:26.836  [2024-11-19 16:59:19.537882] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:15:26.836  [2024-11-19 16:59:19.537910] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:15:26.836  [2024-11-19 16:59:19.537978] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
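Annotation: this is the superblock variant; bdev_raid_create is called before any of the base bdevs exist, the RPC merely records them ("doesn't exist now") and leaves the raid in the configuring state until every base appears. The flags, exactly as used at @232 above:

    # -z 64: strip size in KiB; -s: write a superblock to each base bdev;
    # -r: raid level; -b: base bdev names, space-separated; -n: raid bdev name
    $SPDK/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create \
        -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid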
00:15:26.836   16:59:19	-- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:15:26.836   16:59:19	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:15:26.836   16:59:19	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:15:26.836   16:59:19	-- bdev/bdev_raid.sh@119 -- # local raid_level=concat
00:15:26.836   16:59:19	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:15:26.836   16:59:19	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:15:26.836   16:59:19	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:15:26.836   16:59:19	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:15:26.836   16:59:19	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:15:26.836   16:59:19	-- bdev/bdev_raid.sh@125 -- # local tmp
00:15:26.836    16:59:19	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:15:26.836    16:59:19	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:15:27.094   16:59:19	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:15:27.094    "name": "Existed_Raid",
00:15:27.094    "uuid": "6a17d25a-e1c0-4d05-a490-f3dda2690cc8",
00:15:27.094    "strip_size_kb": 64,
00:15:27.094    "state": "configuring",
00:15:27.094    "raid_level": "concat",
00:15:27.094    "superblock": true,
00:15:27.094    "num_base_bdevs": 3,
00:15:27.094    "num_base_bdevs_discovered": 0,
00:15:27.094    "num_base_bdevs_operational": 3,
00:15:27.094    "base_bdevs_list": [
00:15:27.094      {
00:15:27.094        "name": "BaseBdev1",
00:15:27.094        "uuid": "00000000-0000-0000-0000-000000000000",
00:15:27.094        "is_configured": false,
00:15:27.094        "data_offset": 0,
00:15:27.094        "data_size": 0
00:15:27.094      },
00:15:27.094      {
00:15:27.094        "name": "BaseBdev2",
00:15:27.094        "uuid": "00000000-0000-0000-0000-000000000000",
00:15:27.094        "is_configured": false,
00:15:27.094        "data_offset": 0,
00:15:27.094        "data_size": 0
00:15:27.094      },
00:15:27.094      {
00:15:27.094        "name": "BaseBdev3",
00:15:27.094        "uuid": "00000000-0000-0000-0000-000000000000",
00:15:27.094        "is_configured": false,
00:15:27.094        "data_offset": 0,
00:15:27.094        "data_size": 0
00:15:27.094      }
00:15:27.094    ]
00:15:27.094  }'
00:15:27.094   16:59:19	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:15:27.094   16:59:19	-- common/autotest_common.sh@10 -- # set +x
00:15:27.662   16:59:20	-- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid
00:15:27.662  [2024-11-19 16:59:20.469448] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:15:27.662  [2024-11-19 16:59:20.469707] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005480 name Existed_Raid, state configuring
00:15:27.662   16:59:20	-- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
00:15:27.921  [2024-11-19 16:59:20.657523] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:15:27.921  [2024-11-19 16:59:20.657821] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:15:27.921  [2024-11-19 16:59:20.657922] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:15:27.921  [2024-11-19 16:59:20.657983] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:15:27.921  [2024-11-19 16:59:20.658013] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:15:27.921  [2024-11-19 16:59:20.658063] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:15:27.921   16:59:20	-- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1
00:15:28.180  [2024-11-19 16:59:20.859161] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:15:28.180  BaseBdev1
00:15:28.180   16:59:20	-- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1
00:15:28.180   16:59:20	-- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1
00:15:28.180   16:59:20	-- common/autotest_common.sh@898 -- # local bdev_timeout=
00:15:28.180   16:59:20	-- common/autotest_common.sh@899 -- # local i
00:15:28.180   16:59:20	-- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:15:28.180   16:59:20	-- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:15:28.180   16:59:20	-- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:15:28.439   16:59:21	-- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000
00:15:28.439  [
00:15:28.439    {
00:15:28.439      "name": "BaseBdev1",
00:15:28.439      "aliases": [
00:15:28.439        "d3091254-79d4-480c-b950-3ebccac3a26e"
00:15:28.439      ],
00:15:28.439      "product_name": "Malloc disk",
00:15:28.439      "block_size": 512,
00:15:28.439      "num_blocks": 65536,
00:15:28.439      "uuid": "d3091254-79d4-480c-b950-3ebccac3a26e",
00:15:28.439      "assigned_rate_limits": {
00:15:28.439        "rw_ios_per_sec": 0,
00:15:28.439        "rw_mbytes_per_sec": 0,
00:15:28.439        "r_mbytes_per_sec": 0,
00:15:28.439        "w_mbytes_per_sec": 0
00:15:28.439      },
00:15:28.439      "claimed": true,
00:15:28.439      "claim_type": "exclusive_write",
00:15:28.439      "zoned": false,
00:15:28.439      "supported_io_types": {
00:15:28.439        "read": true,
00:15:28.439        "write": true,
00:15:28.439        "unmap": true,
00:15:28.439        "write_zeroes": true,
00:15:28.439        "flush": true,
00:15:28.439        "reset": true,
00:15:28.439        "compare": false,
00:15:28.439        "compare_and_write": false,
00:15:28.439        "abort": true,
00:15:28.439        "nvme_admin": false,
00:15:28.439        "nvme_io": false
00:15:28.439      },
00:15:28.439      "memory_domains": [
00:15:28.439        {
00:15:28.439          "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:15:28.439          "dma_device_type": 2
00:15:28.439        }
00:15:28.439      ],
00:15:28.439      "driver_specific": {}
00:15:28.439    }
00:15:28.439  ]
00:15:28.439   16:59:21	-- common/autotest_common.sh@905 -- # return 0
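Annotation: the @897 through @905 block is the waitforbdev helper. After creating a malloc bdev it waits for examine to finish and then polls for the bdev by name with the default 2000 ms timeout, returning 0 once bdev_get_bdevs produces the descriptor dumped above:

    $SPDK/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
    $SPDK/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000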
00:15:28.439   16:59:21	-- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:15:28.439   16:59:21	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:15:28.439   16:59:21	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:15:28.439   16:59:21	-- bdev/bdev_raid.sh@119 -- # local raid_level=concat
00:15:28.439   16:59:21	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:15:28.439   16:59:21	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:15:28.439   16:59:21	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:15:28.439   16:59:21	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:15:28.439   16:59:21	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:15:28.439   16:59:21	-- bdev/bdev_raid.sh@125 -- # local tmp
00:15:28.439    16:59:21	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:15:28.439    16:59:21	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:15:28.699   16:59:21	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:15:28.699    "name": "Existed_Raid",
00:15:28.699    "uuid": "5e992388-7540-46a5-a8f2-a79ddfe819d9",
00:15:28.699    "strip_size_kb": 64,
00:15:28.699    "state": "configuring",
00:15:28.699    "raid_level": "concat",
00:15:28.699    "superblock": true,
00:15:28.699    "num_base_bdevs": 3,
00:15:28.699    "num_base_bdevs_discovered": 1,
00:15:28.699    "num_base_bdevs_operational": 3,
00:15:28.699    "base_bdevs_list": [
00:15:28.699      {
00:15:28.699        "name": "BaseBdev1",
00:15:28.699        "uuid": "d3091254-79d4-480c-b950-3ebccac3a26e",
00:15:28.699        "is_configured": true,
00:15:28.699        "data_offset": 2048,
00:15:28.699        "data_size": 63488
00:15:28.699      },
00:15:28.699      {
00:15:28.699        "name": "BaseBdev2",
00:15:28.699        "uuid": "00000000-0000-0000-0000-000000000000",
00:15:28.699        "is_configured": false,
00:15:28.699        "data_offset": 0,
00:15:28.699        "data_size": 0
00:15:28.699      },
00:15:28.699      {
00:15:28.699        "name": "BaseBdev3",
00:15:28.699        "uuid": "00000000-0000-0000-0000-000000000000",
00:15:28.699        "is_configured": false,
00:15:28.699        "data_offset": 0,
00:15:28.699        "data_size": 0
00:15:28.699      }
00:15:28.699    ]
00:15:28.699  }'
00:15:28.699   16:59:21	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:15:28.699   16:59:21	-- common/autotest_common.sh@10 -- # set +x
00:15:29.266   16:59:22	-- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid
00:15:29.526  [2024-11-19 16:59:22.275445] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:15:29.526  [2024-11-19 16:59:22.275710] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005780 name Existed_Raid, state configuring
00:15:29.526   16:59:22	-- bdev/bdev_raid.sh@244 -- # '[' true = true ']'
00:15:29.526   16:59:22	-- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1
00:15:29.785   16:59:22	-- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1
00:15:30.044  BaseBdev1
00:15:30.044   16:59:22	-- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1
00:15:30.044   16:59:22	-- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1
00:15:30.044   16:59:22	-- common/autotest_common.sh@898 -- # local bdev_timeout=
00:15:30.044   16:59:22	-- common/autotest_common.sh@899 -- # local i
00:15:30.044   16:59:22	-- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:15:30.044   16:59:22	-- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:15:30.044   16:59:22	-- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:15:30.044   16:59:22	-- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000
00:15:30.304  [
00:15:30.304    {
00:15:30.304      "name": "BaseBdev1",
00:15:30.304      "aliases": [
00:15:30.304        "089b9b6a-68a0-4e11-a2aa-6f700337f381"
00:15:30.304      ],
00:15:30.304      "product_name": "Malloc disk",
00:15:30.304      "block_size": 512,
00:15:30.304      "num_blocks": 65536,
00:15:30.304      "uuid": "089b9b6a-68a0-4e11-a2aa-6f700337f381",
00:15:30.304      "assigned_rate_limits": {
00:15:30.304        "rw_ios_per_sec": 0,
00:15:30.304        "rw_mbytes_per_sec": 0,
00:15:30.304        "r_mbytes_per_sec": 0,
00:15:30.304        "w_mbytes_per_sec": 0
00:15:30.304      },
00:15:30.304      "claimed": false,
00:15:30.304      "zoned": false,
00:15:30.304      "supported_io_types": {
00:15:30.304        "read": true,
00:15:30.304        "write": true,
00:15:30.304        "unmap": true,
00:15:30.304        "write_zeroes": true,
00:15:30.304        "flush": true,
00:15:30.304        "reset": true,
00:15:30.304        "compare": false,
00:15:30.304        "compare_and_write": false,
00:15:30.304        "abort": true,
00:15:30.304        "nvme_admin": false,
00:15:30.304        "nvme_io": false
00:15:30.304      },
00:15:30.304      "memory_domains": [
00:15:30.304        {
00:15:30.304          "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:15:30.304          "dma_device_type": 2
00:15:30.304        }
00:15:30.304      ],
00:15:30.304      "driver_specific": {}
00:15:30.304    }
00:15:30.304  ]
00:15:30.304   16:59:23	-- common/autotest_common.sh@905 -- # return 0
00:15:30.304   16:59:23	-- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
00:15:30.565  [2024-11-19 16:59:23.273107] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:15:30.565  [2024-11-19 16:59:23.275413] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:15:30.565  [2024-11-19 16:59:23.275573] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:15:30.565  [2024-11-19 16:59:23.275724] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:15:30.565  [2024-11-19 16:59:23.275781] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:15:30.565   16:59:23	-- bdev/bdev_raid.sh@254 -- # (( i = 1 ))
00:15:30.565   16:59:23	-- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs ))
00:15:30.565   16:59:23	-- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:15:30.565   16:59:23	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:15:30.565   16:59:23	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:15:30.565   16:59:23	-- bdev/bdev_raid.sh@119 -- # local raid_level=concat
00:15:30.565   16:59:23	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:15:30.565   16:59:23	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:15:30.565   16:59:23	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:15:30.565   16:59:23	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:15:30.565   16:59:23	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:15:30.565   16:59:23	-- bdev/bdev_raid.sh@125 -- # local tmp
00:15:30.565    16:59:23	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:15:30.565    16:59:23	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:15:30.825   16:59:23	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:15:30.825    "name": "Existed_Raid",
00:15:30.825    "uuid": "5691f742-3638-4516-8e83-ba6e1f0dcef3",
00:15:30.825    "strip_size_kb": 64,
00:15:30.825    "state": "configuring",
00:15:30.825    "raid_level": "concat",
00:15:30.825    "superblock": true,
00:15:30.825    "num_base_bdevs": 3,
00:15:30.825    "num_base_bdevs_discovered": 1,
00:15:30.825    "num_base_bdevs_operational": 3,
00:15:30.825    "base_bdevs_list": [
00:15:30.825      {
00:15:30.825        "name": "BaseBdev1",
00:15:30.825        "uuid": "089b9b6a-68a0-4e11-a2aa-6f700337f381",
00:15:30.825        "is_configured": true,
00:15:30.825        "data_offset": 2048,
00:15:30.825        "data_size": 63488
00:15:30.825      },
00:15:30.825      {
00:15:30.825        "name": "BaseBdev2",
00:15:30.825        "uuid": "00000000-0000-0000-0000-000000000000",
00:15:30.825        "is_configured": false,
00:15:30.825        "data_offset": 0,
00:15:30.825        "data_size": 0
00:15:30.825      },
00:15:30.825      {
00:15:30.825        "name": "BaseBdev3",
00:15:30.825        "uuid": "00000000-0000-0000-0000-000000000000",
00:15:30.825        "is_configured": false,
00:15:30.825        "data_offset": 0,
00:15:30.825        "data_size": 0
00:15:30.825      }
00:15:30.825    ]
00:15:30.825  }'
00:15:30.825   16:59:23	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:15:30.825   16:59:23	-- common/autotest_common.sh@10 -- # set +x
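Annotation: note the effect of -s on the claimed base bdev. The malloc device was created with 32 MiB of 512 B blocks (65536 blocks, per its descriptor above), yet BaseBdev1 is listed with data_offset 2048 and data_size 63488; reading the reserved region as the on-disk superblock is an interpretation, but the arithmetic is straight from the log:

    # per base bdev, with -s:
    #   num_blocks  = 32 MiB / 512 B = 65536
    #   data_offset = 2048           (reserved; consistent with the superblock)
    #   data_size   = 65536 - 2048   = 63488 usable blocks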
00:15:31.393   16:59:24	-- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2
00:15:31.653  [2024-11-19 16:59:24.302111] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:15:31.653  BaseBdev2
00:15:31.653   16:59:24	-- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2
00:15:31.653   16:59:24	-- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2
00:15:31.653   16:59:24	-- common/autotest_common.sh@898 -- # local bdev_timeout=
00:15:31.653   16:59:24	-- common/autotest_common.sh@899 -- # local i
00:15:31.653   16:59:24	-- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:15:31.653   16:59:24	-- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:15:31.653   16:59:24	-- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:15:31.912   16:59:24	-- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000
00:15:32.172  [
00:15:32.172    {
00:15:32.172      "name": "BaseBdev2",
00:15:32.172      "aliases": [
00:15:32.172        "ca64e84a-c595-410f-a056-a0a78990a891"
00:15:32.172      ],
00:15:32.172      "product_name": "Malloc disk",
00:15:32.172      "block_size": 512,
00:15:32.172      "num_blocks": 65536,
00:15:32.172      "uuid": "ca64e84a-c595-410f-a056-a0a78990a891",
00:15:32.172      "assigned_rate_limits": {
00:15:32.172        "rw_ios_per_sec": 0,
00:15:32.172        "rw_mbytes_per_sec": 0,
00:15:32.172        "r_mbytes_per_sec": 0,
00:15:32.172        "w_mbytes_per_sec": 0
00:15:32.172      },
00:15:32.172      "claimed": true,
00:15:32.172      "claim_type": "exclusive_write",
00:15:32.172      "zoned": false,
00:15:32.172      "supported_io_types": {
00:15:32.172        "read": true,
00:15:32.172        "write": true,
00:15:32.172        "unmap": true,
00:15:32.172        "write_zeroes": true,
00:15:32.172        "flush": true,
00:15:32.172        "reset": true,
00:15:32.172        "compare": false,
00:15:32.172        "compare_and_write": false,
00:15:32.172        "abort": true,
00:15:32.172        "nvme_admin": false,
00:15:32.172        "nvme_io": false
00:15:32.172      },
00:15:32.172      "memory_domains": [
00:15:32.172        {
00:15:32.172          "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:15:32.172          "dma_device_type": 2
00:15:32.172        }
00:15:32.172      ],
00:15:32.172      "driver_specific": {}
00:15:32.172    }
00:15:32.172  ]
00:15:32.172   16:59:24	-- common/autotest_common.sh@905 -- # return 0
00:15:32.172   16:59:24	-- bdev/bdev_raid.sh@254 -- # (( i++ ))
00:15:32.172   16:59:24	-- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs ))
00:15:32.172   16:59:24	-- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:15:32.172   16:59:24	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:15:32.172   16:59:24	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:15:32.172   16:59:24	-- bdev/bdev_raid.sh@119 -- # local raid_level=concat
00:15:32.172   16:59:24	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:15:32.172   16:59:24	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:15:32.172   16:59:24	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:15:32.172   16:59:24	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:15:32.172   16:59:24	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:15:32.172   16:59:24	-- bdev/bdev_raid.sh@125 -- # local tmp
00:15:32.172    16:59:24	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:15:32.172    16:59:24	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:15:32.172   16:59:24	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:15:32.172    "name": "Existed_Raid",
00:15:32.172    "uuid": "5691f742-3638-4516-8e83-ba6e1f0dcef3",
00:15:32.173    "strip_size_kb": 64,
00:15:32.173    "state": "configuring",
00:15:32.173    "raid_level": "concat",
00:15:32.173    "superblock": true,
00:15:32.173    "num_base_bdevs": 3,
00:15:32.173    "num_base_bdevs_discovered": 2,
00:15:32.173    "num_base_bdevs_operational": 3,
00:15:32.173    "base_bdevs_list": [
00:15:32.173      {
00:15:32.173        "name": "BaseBdev1",
00:15:32.173        "uuid": "089b9b6a-68a0-4e11-a2aa-6f700337f381",
00:15:32.173        "is_configured": true,
00:15:32.173        "data_offset": 2048,
00:15:32.173        "data_size": 63488
00:15:32.173      },
00:15:32.173      {
00:15:32.173        "name": "BaseBdev2",
00:15:32.173        "uuid": "ca64e84a-c595-410f-a056-a0a78990a891",
00:15:32.173        "is_configured": true,
00:15:32.173        "data_offset": 2048,
00:15:32.173        "data_size": 63488
00:15:32.173      },
00:15:32.173      {
00:15:32.173        "name": "BaseBdev3",
00:15:32.173        "uuid": "00000000-0000-0000-0000-000000000000",
00:15:32.173        "is_configured": false,
00:15:32.173        "data_offset": 0,
00:15:32.173        "data_size": 0
00:15:32.173      }
00:15:32.173    ]
00:15:32.173  }'
00:15:32.173   16:59:24	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:15:32.173   16:59:24	-- common/autotest_common.sh@10 -- # set +x
00:15:32.741   16:59:25	-- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3
00:15:33.000  [2024-11-19 16:59:25.786160] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:15:33.000  [2024-11-19 16:59:25.786621] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006680
00:15:33.000  [2024-11-19 16:59:25.786745] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512
00:15:33.000  [2024-11-19 16:59:25.786977] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002120
00:15:33.000  [2024-11-19 16:59:25.787552] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006680
00:15:33.000  [2024-11-19 16:59:25.787600] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006680
00:15:33.000  [2024-11-19 16:59:25.787851] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:15:33.000  BaseBdev3
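Annotation: with the third base bdev claimed the raid configures and goes online, and the blockcnt logged at configure time matches the concatenation of the three data regions:

    # 3 base bdevs * 63488 data blocks each = 190464 blocks of 512 B (as logged above)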
00:15:33.000   16:59:25	-- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3
00:15:33.000   16:59:25	-- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3
00:15:33.000   16:59:25	-- common/autotest_common.sh@898 -- # local bdev_timeout=
00:15:33.000   16:59:25	-- common/autotest_common.sh@899 -- # local i
00:15:33.000   16:59:25	-- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:15:33.000   16:59:25	-- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:15:33.000   16:59:25	-- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:15:33.259   16:59:25	-- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000
00:15:33.518  [
00:15:33.518    {
00:15:33.518      "name": "BaseBdev3",
00:15:33.518      "aliases": [
00:15:33.518        "7f243fa6-1105-4f66-90a3-53f9cbce8091"
00:15:33.518      ],
00:15:33.518      "product_name": "Malloc disk",
00:15:33.518      "block_size": 512,
00:15:33.518      "num_blocks": 65536,
00:15:33.518      "uuid": "7f243fa6-1105-4f66-90a3-53f9cbce8091",
00:15:33.518      "assigned_rate_limits": {
00:15:33.518        "rw_ios_per_sec": 0,
00:15:33.518        "rw_mbytes_per_sec": 0,
00:15:33.518        "r_mbytes_per_sec": 0,
00:15:33.518        "w_mbytes_per_sec": 0
00:15:33.518      },
00:15:33.518      "claimed": true,
00:15:33.518      "claim_type": "exclusive_write",
00:15:33.518      "zoned": false,
00:15:33.518      "supported_io_types": {
00:15:33.518        "read": true,
00:15:33.518        "write": true,
00:15:33.518        "unmap": true,
00:15:33.518        "write_zeroes": true,
00:15:33.518        "flush": true,
00:15:33.518        "reset": true,
00:15:33.518        "compare": false,
00:15:33.518        "compare_and_write": false,
00:15:33.518        "abort": true,
00:15:33.518        "nvme_admin": false,
00:15:33.518        "nvme_io": false
00:15:33.518      },
00:15:33.518      "memory_domains": [
00:15:33.518        {
00:15:33.518          "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:15:33.518          "dma_device_type": 2
00:15:33.518        }
00:15:33.518      ],
00:15:33.518      "driver_specific": {}
00:15:33.518    }
00:15:33.518  ]
00:15:33.518   16:59:26	-- common/autotest_common.sh@905 -- # return 0
00:15:33.518   16:59:26	-- bdev/bdev_raid.sh@254 -- # (( i++ ))
00:15:33.518   16:59:26	-- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs ))
00:15:33.518   16:59:26	-- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 3
00:15:33.518   16:59:26	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:15:33.518   16:59:26	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:15:33.518   16:59:26	-- bdev/bdev_raid.sh@119 -- # local raid_level=concat
00:15:33.518   16:59:26	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:15:33.518   16:59:26	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:15:33.518   16:59:26	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:15:33.518   16:59:26	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:15:33.518   16:59:26	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:15:33.518   16:59:26	-- bdev/bdev_raid.sh@125 -- # local tmp
00:15:33.518    16:59:26	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:15:33.518    16:59:26	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:15:33.777   16:59:26	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:15:33.777    "name": "Existed_Raid",
00:15:33.777    "uuid": "5691f742-3638-4516-8e83-ba6e1f0dcef3",
00:15:33.777    "strip_size_kb": 64,
00:15:33.777    "state": "online",
00:15:33.777    "raid_level": "concat",
00:15:33.777    "superblock": true,
00:15:33.777    "num_base_bdevs": 3,
00:15:33.777    "num_base_bdevs_discovered": 3,
00:15:33.777    "num_base_bdevs_operational": 3,
00:15:33.777    "base_bdevs_list": [
00:15:33.777      {
00:15:33.777        "name": "BaseBdev1",
00:15:33.777        "uuid": "089b9b6a-68a0-4e11-a2aa-6f700337f381",
00:15:33.777        "is_configured": true,
00:15:33.777        "data_offset": 2048,
00:15:33.777        "data_size": 63488
00:15:33.777      },
00:15:33.777      {
00:15:33.777        "name": "BaseBdev2",
00:15:33.777        "uuid": "ca64e84a-c595-410f-a056-a0a78990a891",
00:15:33.777        "is_configured": true,
00:15:33.777        "data_offset": 2048,
00:15:33.777        "data_size": 63488
00:15:33.777      },
00:15:33.778      {
00:15:33.778        "name": "BaseBdev3",
00:15:33.778        "uuid": "7f243fa6-1105-4f66-90a3-53f9cbce8091",
00:15:33.778        "is_configured": true,
00:15:33.778        "data_offset": 2048,
00:15:33.778        "data_size": 63488
00:15:33.778      }
00:15:33.778    ]
00:15:33.778  }'
00:15:33.778   16:59:26	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:15:33.778   16:59:26	-- common/autotest_common.sh@10 -- # set +x
00:15:34.345   16:59:27	-- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1
00:15:34.630  [2024-11-19 16:59:27.282606] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:15:34.630  [2024-11-19 16:59:27.282887] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:15:34.630  [2024-11-19 16:59:27.283102] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:15:34.630   16:59:27	-- bdev/bdev_raid.sh@263 -- # local expected_state
00:15:34.630   16:59:27	-- bdev/bdev_raid.sh@264 -- # has_redundancy concat
00:15:34.630   16:59:27	-- bdev/bdev_raid.sh@195 -- # case $1 in
00:15:34.630   16:59:27	-- bdev/bdev_raid.sh@197 -- # return 1
00:15:34.630   16:59:27	-- bdev/bdev_raid.sh@265 -- # expected_state=offline
00:15:34.630   16:59:27	-- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2
00:15:34.630   16:59:27	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:15:34.630   16:59:27	-- bdev/bdev_raid.sh@118 -- # local expected_state=offline
00:15:34.630   16:59:27	-- bdev/bdev_raid.sh@119 -- # local raid_level=concat
00:15:34.630   16:59:27	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:15:34.630   16:59:27	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2
00:15:34.630   16:59:27	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:15:34.630   16:59:27	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:15:34.630   16:59:27	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:15:34.630   16:59:27	-- bdev/bdev_raid.sh@125 -- # local tmp
00:15:34.630    16:59:27	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:15:34.630    16:59:27	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:15:34.907   16:59:27	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:15:34.907    "name": "Existed_Raid",
00:15:34.907    "uuid": "5691f742-3638-4516-8e83-ba6e1f0dcef3",
00:15:34.907    "strip_size_kb": 64,
00:15:34.907    "state": "offline",
00:15:34.907    "raid_level": "concat",
00:15:34.907    "superblock": true,
00:15:34.907    "num_base_bdevs": 3,
00:15:34.907    "num_base_bdevs_discovered": 2,
00:15:34.907    "num_base_bdevs_operational": 2,
00:15:34.907    "base_bdevs_list": [
00:15:34.907      {
00:15:34.907        "name": null,
00:15:34.907        "uuid": "00000000-0000-0000-0000-000000000000",
00:15:34.907        "is_configured": false,
00:15:34.907        "data_offset": 2048,
00:15:34.907        "data_size": 63488
00:15:34.907      },
00:15:34.907      {
00:15:34.907        "name": "BaseBdev2",
00:15:34.907        "uuid": "ca64e84a-c595-410f-a056-a0a78990a891",
00:15:34.907        "is_configured": true,
00:15:34.907        "data_offset": 2048,
00:15:34.907        "data_size": 63488
00:15:34.907      },
00:15:34.907      {
00:15:34.907        "name": "BaseBdev3",
00:15:34.907        "uuid": "7f243fa6-1105-4f66-90a3-53f9cbce8091",
00:15:34.907        "is_configured": true,
00:15:34.907        "data_offset": 2048,
00:15:34.907        "data_size": 63488
00:15:34.907      }
00:15:34.907    ]
00:15:34.907  }'
00:15:34.907   16:59:27	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:15:34.907   16:59:27	-- common/autotest_common.sh@10 -- # set +x
00:15:35.476   16:59:28	-- bdev/bdev_raid.sh@273 -- # (( i = 1 ))
00:15:35.476   16:59:28	-- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs ))
00:15:35.476    16:59:28	-- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:15:35.476    16:59:28	-- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]'
00:15:35.476   16:59:28	-- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid
00:15:35.476   16:59:28	-- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:15:35.476   16:59:28	-- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2
00:15:35.736  [2024-11-19 16:59:28.546043] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:15:35.736   16:59:28	-- bdev/bdev_raid.sh@273 -- # (( i++ ))
00:15:35.736   16:59:28	-- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs ))
00:15:35.736    16:59:28	-- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:15:35.736    16:59:28	-- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]'
00:15:35.995   16:59:28	-- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid
00:15:35.995   16:59:28	-- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:15:35.995   16:59:28	-- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3
00:15:36.255  [2024-11-19 16:59:28.994350] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:15:36.255  [2024-11-19 16:59:28.994582] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state offline
00:15:36.255   16:59:29	-- bdev/bdev_raid.sh@273 -- # (( i++ ))
00:15:36.255   16:59:29	-- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs ))
00:15:36.255    16:59:29	-- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:15:36.255    16:59:29	-- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)'
00:15:36.517   16:59:29	-- bdev/bdev_raid.sh@281 -- # raid_bdev=
00:15:36.517   16:59:29	-- bdev/bdev_raid.sh@282 -- # '[' -n '' ']'
00:15:36.517   16:59:29	-- bdev/bdev_raid.sh@287 -- # killprocess 126756
00:15:36.517   16:59:29	-- common/autotest_common.sh@936 -- # '[' -z 126756 ']'
00:15:36.517   16:59:29	-- common/autotest_common.sh@940 -- # kill -0 126756
00:15:36.517    16:59:29	-- common/autotest_common.sh@941 -- # uname
00:15:36.517   16:59:29	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:15:36.517    16:59:29	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 126756
00:15:36.517   16:59:29	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:15:36.517   16:59:29	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:15:36.517   16:59:29	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 126756'
00:15:36.517  killing process with pid 126756
00:15:36.517   16:59:29	-- common/autotest_common.sh@955 -- # kill 126756
00:15:36.517   16:59:29	-- common/autotest_common.sh@960 -- # wait 126756
00:15:36.517  [2024-11-19 16:59:29.314223] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:15:36.517  [2024-11-19 16:59:29.314363] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:15:37.085   16:59:29	-- bdev/bdev_raid.sh@289 -- # return 0
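Annotation: the @936 through @960 sequence is the killprocess teardown helper. A plausible reconstruction from the xtrace markers follows; only the branches exercised in this run are certain, the early-return behavior is filled in by assumption:

    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1               # @936: refuse an empty pid
        kill -0 "$pid" || return 1              # @940: bail out if the process is gone
        local process_name
        if [ "$(uname)" = Linux ]; then         # @941
            process_name=$(ps --no-headers -o comm= "$pid")   # @942: reactor_0 here
        fi
        # @946: a sudo wrapper would need different handling; branch not taken in this run
        echo "killing process with pid $pid"    # @954
        kill "$pid"                             # @955
        wait "$pid"                             # @960: reap it so teardown is clean
    }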
00:15:37.085  
00:15:37.085  real	0m11.357s
00:15:37.085  user	0m20.164s
00:15:37.085  sys	0m1.884s
00:15:37.085   16:59:29	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:15:37.085   16:59:29	-- common/autotest_common.sh@10 -- # set +x
00:15:37.085  ************************************
00:15:37.085  END TEST raid_state_function_test_sb
00:15:37.085  ************************************
00:15:37.085   16:59:29	-- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test concat 3
00:15:37.085   16:59:29	-- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']'
00:15:37.085   16:59:29	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:15:37.085   16:59:29	-- common/autotest_common.sh@10 -- # set +x
00:15:37.085  ************************************
00:15:37.085  START TEST raid_superblock_test
00:15:37.085  ************************************
00:15:37.085   16:59:29	-- common/autotest_common.sh@1114 -- # raid_superblock_test concat 3
00:15:37.085   16:59:29	-- bdev/bdev_raid.sh@338 -- # local raid_level=concat
00:15:37.085   16:59:29	-- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=3
00:15:37.085   16:59:29	-- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=()
00:15:37.085   16:59:29	-- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc
00:15:37.085   16:59:29	-- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=()
00:15:37.085   16:59:29	-- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt
00:15:37.085   16:59:29	-- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=()
00:15:37.085   16:59:29	-- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid
00:15:37.085   16:59:29	-- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1
00:15:37.085   16:59:29	-- bdev/bdev_raid.sh@344 -- # local strip_size
00:15:37.085   16:59:29	-- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg
00:15:37.085   16:59:29	-- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid
00:15:37.085   16:59:29	-- bdev/bdev_raid.sh@347 -- # local raid_bdev
00:15:37.085   16:59:29	-- bdev/bdev_raid.sh@349 -- # '[' concat '!=' raid1 ']'
00:15:37.085   16:59:29	-- bdev/bdev_raid.sh@350 -- # strip_size=64
00:15:37.085   16:59:29	-- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64'
00:15:37.085   16:59:29	-- bdev/bdev_raid.sh@357 -- # raid_pid=127129
00:15:37.085   16:59:29	-- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid
00:15:37.085   16:59:29	-- bdev/bdev_raid.sh@358 -- # waitforlisten 127129 /var/tmp/spdk-raid.sock
00:15:37.085   16:59:29	-- common/autotest_common.sh@829 -- # '[' -z 127129 ']'
00:15:37.085   16:59:29	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock
00:15:37.085   16:59:29	-- common/autotest_common.sh@834 -- # local max_retries=100
00:15:37.085   16:59:29	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...'
00:15:37.085  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...
00:15:37.085   16:59:29	-- common/autotest_common.sh@838 -- # xtrace_disable
00:15:37.085   16:59:29	-- common/autotest_common.sh@10 -- # set +x
00:15:37.085  [2024-11-19 16:59:29.839255] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:15:37.085  [2024-11-19 16:59:29.839641] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid127129 ]
00:15:37.344  [2024-11-19 16:59:29.979732] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:15:37.344  [2024-11-19 16:59:30.031306] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:15:37.344  [2024-11-19 16:59:30.074354] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:15:38.281   16:59:30	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:15:38.281   16:59:30	-- common/autotest_common.sh@862 -- # return 0
00:15:38.281   16:59:30	-- bdev/bdev_raid.sh@361 -- # (( i = 1 ))
00:15:38.281   16:59:30	-- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs ))
00:15:38.281   16:59:30	-- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1
00:15:38.281   16:59:30	-- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1
00:15:38.281   16:59:30	-- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001
00:15:38.281   16:59:30	-- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc)
00:15:38.281   16:59:30	-- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt)
00:15:38.281   16:59:30	-- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:15:38.281   16:59:30	-- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1
00:15:38.281  malloc1
00:15:38.282   16:59:31	-- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:15:38.541  [2024-11-19 16:59:31.274978] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:15:38.541  [2024-11-19 16:59:31.275181] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:15:38.541  [2024-11-19 16:59:31.275246] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000005a80
00:15:38.541  [2024-11-19 16:59:31.275329] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:15:38.541  [2024-11-19 16:59:31.277902] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:15:38.541  [2024-11-19 16:59:31.278126] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:15:38.541  pt1
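Annotation: raid_superblock_test layers a passthru bdev over each malloc, which lets the test assign a fixed, predictable UUID to every base bdev (that reading of the intent is an assumption; the commands themselves are verbatim from the @370/@371 traces above):

    $SPDK/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1
    $SPDK/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create \
        -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001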
00:15:38.541   16:59:31	-- bdev/bdev_raid.sh@361 -- # (( i++ ))
00:15:38.541   16:59:31	-- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs ))
00:15:38.541   16:59:31	-- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2
00:15:38.541   16:59:31	-- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2
00:15:38.541   16:59:31	-- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002
00:15:38.541   16:59:31	-- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc)
00:15:38.541   16:59:31	-- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt)
00:15:38.541   16:59:31	-- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:15:38.541   16:59:31	-- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2
00:15:38.800  malloc2
00:15:38.800   16:59:31	-- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:15:39.059  [2024-11-19 16:59:31.732425] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:15:39.059  [2024-11-19 16:59:31.732729] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:15:39.059  [2024-11-19 16:59:31.732825] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680
00:15:39.059  [2024-11-19 16:59:31.732996] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:15:39.059  [2024-11-19 16:59:31.735531] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:15:39.059  [2024-11-19 16:59:31.735719] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:15:39.059  pt2
00:15:39.059   16:59:31	-- bdev/bdev_raid.sh@361 -- # (( i++ ))
00:15:39.059   16:59:31	-- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs ))
00:15:39.059   16:59:31	-- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3
00:15:39.059   16:59:31	-- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3
00:15:39.059   16:59:31	-- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003
00:15:39.059   16:59:31	-- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc)
00:15:39.059   16:59:31	-- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt)
00:15:39.059   16:59:31	-- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:15:39.059   16:59:31	-- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3
00:15:39.319  malloc3
00:15:39.319   16:59:32	-- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:15:39.578  [2024-11-19 16:59:32.230886] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:15:39.578  [2024-11-19 16:59:32.231216] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:15:39.578  [2024-11-19 16:59:32.231294] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
00:15:39.578  [2024-11-19 16:59:32.231426] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:15:39.578  [2024-11-19 16:59:32.233995] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:15:39.578  [2024-11-19 16:59:32.234173] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:15:39.578  pt3
00:15:39.578   16:59:32	-- bdev/bdev_raid.sh@361 -- # (( i++ ))
00:15:39.578   16:59:32	-- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs ))
00:15:39.578   16:59:32	-- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3' -n raid_bdev1 -s
00:15:39.838  [2024-11-19 16:59:32.467090] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:15:39.838  [2024-11-19 16:59:32.469529] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:15:39.838  [2024-11-19 16:59:32.469745] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:15:39.838  [2024-11-19 16:59:32.469997] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007880
00:15:39.838  [2024-11-19 16:59:32.470190] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512
00:15:39.838  [2024-11-19 16:59:32.470379] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000022c0
00:15:39.838  [2024-11-19 16:59:32.471035] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007880
00:15:39.838  [2024-11-19 16:59:32.471144] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000007880
00:15:39.838  [2024-11-19 16:59:32.471420] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:15:39.838   16:59:32	-- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3
00:15:39.838   16:59:32	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:15:39.838   16:59:32	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:15:39.838   16:59:32	-- bdev/bdev_raid.sh@119 -- # local raid_level=concat
00:15:39.838   16:59:32	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:15:39.838   16:59:32	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:15:39.838   16:59:32	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:15:39.838   16:59:32	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:15:39.838   16:59:32	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:15:39.838   16:59:32	-- bdev/bdev_raid.sh@125 -- # local tmp
00:15:39.838    16:59:32	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:15:39.838    16:59:32	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:39.838   16:59:32	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:15:39.838    "name": "raid_bdev1",
00:15:39.838    "uuid": "3cfd6d3a-0fef-4895-ac3a-697a01b2486b",
00:15:39.838    "strip_size_kb": 64,
00:15:39.838    "state": "online",
00:15:39.838    "raid_level": "concat",
00:15:39.838    "superblock": true,
00:15:39.838    "num_base_bdevs": 3,
00:15:39.838    "num_base_bdevs_discovered": 3,
00:15:39.838    "num_base_bdevs_operational": 3,
00:15:39.838    "base_bdevs_list": [
00:15:39.838      {
00:15:39.838        "name": "pt1",
00:15:39.838        "uuid": "3a7e1ebb-86a9-5c33-a8e0-aa771ea45f2c",
00:15:39.838        "is_configured": true,
00:15:39.838        "data_offset": 2048,
00:15:39.838        "data_size": 63488
00:15:39.838      },
00:15:39.838      {
00:15:39.838        "name": "pt2",
00:15:39.838        "uuid": "900d26bb-d339-5b9c-a39d-60ec1695334f",
00:15:39.838        "is_configured": true,
00:15:39.838        "data_offset": 2048,
00:15:39.838        "data_size": 63488
00:15:39.838      },
00:15:39.838      {
00:15:39.838        "name": "pt3",
00:15:39.838        "uuid": "3ef837a4-4eff-5498-9c1c-246e8e87679b",
00:15:39.838        "is_configured": true,
00:15:39.838        "data_offset": 2048,
00:15:39.838        "data_size": 63488
00:15:39.838      }
00:15:39.838    ]
00:15:39.838  }'
00:15:39.838   16:59:32	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:15:39.838   16:59:32	-- common/autotest_common.sh@10 -- # set +x
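verify_raid_bdev_state fetches the full RAID inventory and narrows it down to one bdev with jq, then asserts on the fields visible in the JSON above (state, raid_level, strip_size_kb, num_base_bdevs_discovered/operational). The query pattern as a sketch; the individual field checks are assumed from the expected values passed in at sh@376:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py; sock=/var/tmp/spdk-raid.sock
  info=$($rpc -s $sock bdev_raid_get_bdevs all \
         | jq -r '.[] | select(.name == "raid_bdev1")')
  [ "$(jq -r '.state' <<< "$info")" = online ]   # likewise raid_level, strip size, counts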
00:15:40.776    16:59:33	-- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1
00:15:40.776    16:59:33	-- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid'
00:15:40.776  [2024-11-19 16:59:33.559817] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:15:40.776   16:59:33	-- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=3cfd6d3a-0fef-4895-ac3a-697a01b2486b
00:15:40.776   16:59:33	-- bdev/bdev_raid.sh@380 -- # '[' -z 3cfd6d3a-0fef-4895-ac3a-697a01b2486b ']'
00:15:40.776   16:59:33	-- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1
00:15:41.036  [2024-11-19 16:59:33.747649] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:15:41.036  [2024-11-19 16:59:33.747878] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:15:41.036  [2024-11-19 16:59:33.748141] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:15:41.036  [2024-11-19 16:59:33.748282] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:15:41.036  [2024-11-19 16:59:33.748528] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007880 name raid_bdev1, state offline
00:15:41.036    16:59:33	-- bdev/bdev_raid.sh@386 -- # jq -r '.[]'
00:15:41.036    16:59:33	-- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:15:41.295   16:59:34	-- bdev/bdev_raid.sh@386 -- # raid_bdev=
00:15:41.295   16:59:34	-- bdev/bdev_raid.sh@387 -- # '[' -n '' ']'
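bdev_raid_delete tears raid_bdev1 down (online to offline, then destruct) while leaving the passthru members intact, so the follow-up bdev_raid_get_bdevs comes back empty. Sketched under the same assumptions:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py; sock=/var/tmp/spdk-raid.sock
  $rpc -s $sock bdev_raid_delete raid_bdev1
  # No RAID bdev should remain after deletion:
  [ -z "$($rpc -s $sock bdev_raid_get_bdevs all | jq -r '.[]')" ]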
00:15:41.295   16:59:34	-- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}"
00:15:41.295   16:59:34	-- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1
00:15:41.555   16:59:34	-- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}"
00:15:41.555   16:59:34	-- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2
00:15:41.814   16:59:34	-- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}"
00:15:41.814   16:59:34	-- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3
00:15:42.074    16:59:34	-- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs
00:15:42.074    16:59:34	-- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any'
00:15:42.333   16:59:34	-- bdev/bdev_raid.sh@395 -- # '[' false == true ']'
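The three bdev_passthru_delete calls remove pt1..pt3, and the jq expression confirms that no bdev with product_name "passthru" survives; any over an empty array is false. The same cleanup as a loop, sketched:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py; sock=/var/tmp/spdk-raid.sock
  for pt in pt1 pt2 pt3; do $rpc -s $sock bdev_passthru_delete "$pt"; done
  left=$($rpc -s $sock bdev_get_bdevs \
         | jq -r '[.[] | select(.product_name == "passthru")] | any')
  [ "$left" = false ]   # no passthru bdevs remain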
00:15:42.333   16:59:34	-- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1
00:15:42.333   16:59:34	-- common/autotest_common.sh@650 -- # local es=0
00:15:42.333   16:59:34	-- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1
00:15:42.333   16:59:34	-- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:15:42.333   16:59:34	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:15:42.333    16:59:34	-- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:15:42.333   16:59:34	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:15:42.333    16:59:34	-- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:15:42.333   16:59:34	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:15:42.333   16:59:34	-- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:15:42.333   16:59:34	-- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]]
00:15:42.333   16:59:34	-- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1
00:15:42.592  [2024-11-19 16:59:35.231921] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed
00:15:42.592  [2024-11-19 16:59:35.234257] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
00:15:42.592  [2024-11-19 16:59:35.234449] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed
00:15:42.592  [2024-11-19 16:59:35.234596] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1
00:15:42.592  [2024-11-19 16:59:35.234757] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2
00:15:42.592  [2024-11-19 16:59:35.234899] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3
00:15:42.592  [2024-11-19 16:59:35.235046] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:15:42.592  [2024-11-19 16:59:35.235144] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007e80 name raid_bdev1, state configuring
00:15:42.592  request:
00:15:42.592  {
00:15:42.592    "name": "raid_bdev1",
00:15:42.592    "raid_level": "concat",
00:15:42.592    "base_bdevs": [
00:15:42.592      "malloc1",
00:15:42.592      "malloc2",
00:15:42.592      "malloc3"
00:15:42.592    ],
00:15:42.592    "superblock": false,
00:15:42.592    "strip_size_kb": 64,
00:15:42.592    "method": "bdev_raid_create",
00:15:42.592    "req_id": 1
00:15:42.592  }
00:15:42.592  Got JSON-RPC error response
00:15:42.592  response:
00:15:42.592  {
00:15:42.592    "code": -17,
00:15:42.592    "message": "Failed to create RAID bdev raid_bdev1: File exists"
00:15:42.592  }
00:15:42.592   16:59:35	-- common/autotest_common.sh@653 -- # es=1
00:15:42.592   16:59:35	-- common/autotest_common.sh@661 -- # (( es > 128 ))
00:15:42.592   16:59:35	-- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:15:42.592   16:59:35	-- common/autotest_common.sh@677 -- # (( !es == 0 ))
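This step deliberately asks for a concat array on the raw malloc bdevs and without -s. Because the previous array left superblocks on malloc1..malloc3, configuration aborts with -17 (File exists) and the half-built raid is cleaned up again; the harness's NOT wrapper merely asserts that the RPC fails. Expressed with bash's ! rather than the wrapper's exact mechanics:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py; sock=/var/tmp/spdk-raid.sock
  # Must fail: the base bdevs still carry a raid superblock from the earlier array.
  ! $rpc -s $sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1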
00:15:42.592    16:59:35	-- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:15:42.592    16:59:35	-- bdev/bdev_raid.sh@403 -- # jq -r '.[]'
00:15:42.852   16:59:35	-- bdev/bdev_raid.sh@403 -- # raid_bdev=
00:15:42.852   16:59:35	-- bdev/bdev_raid.sh@404 -- # '[' -n '' ']'
00:15:42.852   16:59:35	-- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:15:42.852  [2024-11-19 16:59:35.679919] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:15:42.852  [2024-11-19 16:59:35.680209] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:15:42.852  [2024-11-19 16:59:35.680299] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x616000008480
00:15:42.852  [2024-11-19 16:59:35.680430] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:15:42.852  [2024-11-19 16:59:35.682880] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:15:42.852  [2024-11-19 16:59:35.683078] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:15:42.852  [2024-11-19 16:59:35.683263] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1
00:15:42.852  [2024-11-19 16:59:35.683425] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:15:42.852  pt1
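Recreating pt1 shows the auto-assembly path: the moment the passthru bdev registers, the examine hook finds the raid superblock on it (raid_bdev_examine_load_sb_cb) and claims it into a configuring raid_bdev1, one member of three discovered, with no bdev_raid_create issued at all. The observable effect, sketched:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py; sock=/var/tmp/spdk-raid.sock
  $rpc -s $sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
  # The superblock on pt1 re-registers raid_bdev1 in "configuring" state:
  $rpc -s $sock bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1").state'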
00:15:43.111   16:59:35	-- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3
00:15:43.111   16:59:35	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:15:43.111   16:59:35	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:15:43.111   16:59:35	-- bdev/bdev_raid.sh@119 -- # local raid_level=concat
00:15:43.111   16:59:35	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:15:43.111   16:59:35	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:15:43.111   16:59:35	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:15:43.111   16:59:35	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:15:43.111   16:59:35	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:15:43.111   16:59:35	-- bdev/bdev_raid.sh@125 -- # local tmp
00:15:43.111    16:59:35	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:15:43.111    16:59:35	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:43.370   16:59:35	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:15:43.370    "name": "raid_bdev1",
00:15:43.370    "uuid": "3cfd6d3a-0fef-4895-ac3a-697a01b2486b",
00:15:43.370    "strip_size_kb": 64,
00:15:43.370    "state": "configuring",
00:15:43.370    "raid_level": "concat",
00:15:43.370    "superblock": true,
00:15:43.370    "num_base_bdevs": 3,
00:15:43.370    "num_base_bdevs_discovered": 1,
00:15:43.370    "num_base_bdevs_operational": 3,
00:15:43.370    "base_bdevs_list": [
00:15:43.370      {
00:15:43.370        "name": "pt1",
00:15:43.370        "uuid": "3a7e1ebb-86a9-5c33-a8e0-aa771ea45f2c",
00:15:43.370        "is_configured": true,
00:15:43.370        "data_offset": 2048,
00:15:43.370        "data_size": 63488
00:15:43.370      },
00:15:43.370      {
00:15:43.370        "name": null,
00:15:43.370        "uuid": "900d26bb-d339-5b9c-a39d-60ec1695334f",
00:15:43.370        "is_configured": false,
00:15:43.370        "data_offset": 2048,
00:15:43.370        "data_size": 63488
00:15:43.370      },
00:15:43.370      {
00:15:43.370        "name": null,
00:15:43.370        "uuid": "3ef837a4-4eff-5498-9c1c-246e8e87679b",
00:15:43.370        "is_configured": false,
00:15:43.370        "data_offset": 2048,
00:15:43.370        "data_size": 63488
00:15:43.370      }
00:15:43.370    ]
00:15:43.370  }'
00:15:43.370   16:59:35	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:15:43.370   16:59:35	-- common/autotest_common.sh@10 -- # set +x
00:15:43.939   16:59:36	-- bdev/bdev_raid.sh@414 -- # '[' 3 -gt 2 ']'
00:15:43.939   16:59:36	-- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:15:44.199  [2024-11-19 16:59:36.868237] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:15:44.199  [2024-11-19 16:59:36.868553] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:15:44.199  [2024-11-19 16:59:36.868635] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x616000008d80
00:15:44.199  [2024-11-19 16:59:36.868763] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:15:44.199  [2024-11-19 16:59:36.869221] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:15:44.199  [2024-11-19 16:59:36.869371] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:15:44.199  [2024-11-19 16:59:36.869554] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2
00:15:44.199  [2024-11-19 16:59:36.869665] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:15:44.199  pt2
00:15:44.199   16:59:36	-- bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2
00:15:44.458  [2024-11-19 16:59:37.140335] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2
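pt2 is added and then deleted straight away; _raid_bdev_remove_base_bdev drops it from the still-configuring array, so the verification below expects num_base_bdevs_discovered back at 1 while num_base_bdevs_operational stays 3. Sketched:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py; sock=/var/tmp/spdk-raid.sock
  $rpc -s $sock bdev_passthru_delete pt2
  $rpc -s $sock bdev_raid_get_bdevs all \
    | jq -r '.[] | select(.name == "raid_bdev1").num_base_bdevs_discovered'   # expect 1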
00:15:44.458   16:59:37	-- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3
00:15:44.458   16:59:37	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:15:44.458   16:59:37	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:15:44.458   16:59:37	-- bdev/bdev_raid.sh@119 -- # local raid_level=concat
00:15:44.458   16:59:37	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:15:44.458   16:59:37	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:15:44.458   16:59:37	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:15:44.458   16:59:37	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:15:44.458   16:59:37	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:15:44.458   16:59:37	-- bdev/bdev_raid.sh@125 -- # local tmp
00:15:44.458    16:59:37	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:44.458    16:59:37	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:15:44.717   16:59:37	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:15:44.717    "name": "raid_bdev1",
00:15:44.717    "uuid": "3cfd6d3a-0fef-4895-ac3a-697a01b2486b",
00:15:44.717    "strip_size_kb": 64,
00:15:44.717    "state": "configuring",
00:15:44.717    "raid_level": "concat",
00:15:44.717    "superblock": true,
00:15:44.717    "num_base_bdevs": 3,
00:15:44.717    "num_base_bdevs_discovered": 1,
00:15:44.717    "num_base_bdevs_operational": 3,
00:15:44.717    "base_bdevs_list": [
00:15:44.717      {
00:15:44.717        "name": "pt1",
00:15:44.717        "uuid": "3a7e1ebb-86a9-5c33-a8e0-aa771ea45f2c",
00:15:44.717        "is_configured": true,
00:15:44.717        "data_offset": 2048,
00:15:44.717        "data_size": 63488
00:15:44.717      },
00:15:44.717      {
00:15:44.717        "name": null,
00:15:44.717        "uuid": "900d26bb-d339-5b9c-a39d-60ec1695334f",
00:15:44.717        "is_configured": false,
00:15:44.717        "data_offset": 2048,
00:15:44.717        "data_size": 63488
00:15:44.717      },
00:15:44.717      {
00:15:44.717        "name": null,
00:15:44.717        "uuid": "3ef837a4-4eff-5498-9c1c-246e8e87679b",
00:15:44.717        "is_configured": false,
00:15:44.717        "data_offset": 2048,
00:15:44.717        "data_size": 63488
00:15:44.717      }
00:15:44.717    ]
00:15:44.717  }'
00:15:44.717   16:59:37	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:15:44.717   16:59:37	-- common/autotest_common.sh@10 -- # set +x
00:15:45.286   16:59:37	-- bdev/bdev_raid.sh@422 -- # (( i = 1 ))
00:15:45.286   16:59:37	-- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs ))
00:15:45.286   16:59:37	-- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:15:45.286  [2024-11-19 16:59:38.120494] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:15:45.286  [2024-11-19 16:59:38.120797] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:15:45.286  [2024-11-19 16:59:38.120881] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x616000009080
00:15:45.286  [2024-11-19 16:59:38.121018] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:15:45.286  [2024-11-19 16:59:38.121493] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:15:45.286  [2024-11-19 16:59:38.121627] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:15:45.286  [2024-11-19 16:59:38.121792] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2
00:15:45.286  [2024-11-19 16:59:38.121841] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:15:45.286  pt2
00:15:45.286   16:59:38	-- bdev/bdev_raid.sh@422 -- # (( i++ ))
00:15:45.286   16:59:38	-- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs ))
00:15:45.286   16:59:38	-- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:15:45.545  [2024-11-19 16:59:38.380597] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:15:45.545  [2024-11-19 16:59:38.380916] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:15:45.545  [2024-11-19 16:59:38.380984] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x616000009380
00:15:45.545  [2024-11-19 16:59:38.381084] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:15:45.545  [2024-11-19 16:59:38.381550] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:15:45.545  [2024-11-19 16:59:38.381688] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:15:45.545  [2024-11-19 16:59:38.381878] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3
00:15:45.545  [2024-11-19 16:59:38.381986] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:15:45.545  [2024-11-19 16:59:38.382132] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008a80
00:15:45.545  [2024-11-19 16:59:38.382326] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512
00:15:45.545  [2024-11-19 16:59:38.382437] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000026d0
00:15:45.545  [2024-11-19 16:59:38.382761] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008a80
00:15:45.545  [2024-11-19 16:59:38.382917] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008a80
00:15:45.545  [2024-11-19 16:59:38.383096] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:15:45.545  pt3
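Recreating pt2 and then pt3 triggers the same examine/claim sequence for each, and once the third member is discovered the array completes configuration and goes online on its own; no further RPC is needed. Sketched:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py; sock=/var/tmp/spdk-raid.sock
  $rpc -s $sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
  $rpc -s $sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
  # Third member discovered -> raid_bdev1 flips to "online" automatically.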
00:15:45.804   16:59:38	-- bdev/bdev_raid.sh@422 -- # (( i++ ))
00:15:45.804   16:59:38	-- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs ))
00:15:45.804   16:59:38	-- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3
00:15:45.804   16:59:38	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:15:45.804   16:59:38	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:15:45.805   16:59:38	-- bdev/bdev_raid.sh@119 -- # local raid_level=concat
00:15:45.805   16:59:38	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:15:45.805   16:59:38	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:15:45.805   16:59:38	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:15:45.805   16:59:38	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:15:45.805   16:59:38	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:15:45.805   16:59:38	-- bdev/bdev_raid.sh@125 -- # local tmp
00:15:45.805    16:59:38	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:15:45.805    16:59:38	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:45.805   16:59:38	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:15:45.805    "name": "raid_bdev1",
00:15:45.805    "uuid": "3cfd6d3a-0fef-4895-ac3a-697a01b2486b",
00:15:45.805    "strip_size_kb": 64,
00:15:45.805    "state": "online",
00:15:45.805    "raid_level": "concat",
00:15:45.805    "superblock": true,
00:15:45.805    "num_base_bdevs": 3,
00:15:45.805    "num_base_bdevs_discovered": 3,
00:15:45.805    "num_base_bdevs_operational": 3,
00:15:45.805    "base_bdevs_list": [
00:15:45.805      {
00:15:45.805        "name": "pt1",
00:15:45.805        "uuid": "3a7e1ebb-86a9-5c33-a8e0-aa771ea45f2c",
00:15:45.805        "is_configured": true,
00:15:45.805        "data_offset": 2048,
00:15:45.805        "data_size": 63488
00:15:45.805      },
00:15:45.805      {
00:15:45.805        "name": "pt2",
00:15:45.805        "uuid": "900d26bb-d339-5b9c-a39d-60ec1695334f",
00:15:45.805        "is_configured": true,
00:15:45.805        "data_offset": 2048,
00:15:45.805        "data_size": 63488
00:15:45.805      },
00:15:45.805      {
00:15:45.805        "name": "pt3",
00:15:45.805        "uuid": "3ef837a4-4eff-5498-9c1c-246e8e87679b",
00:15:45.805        "is_configured": true,
00:15:45.805        "data_offset": 2048,
00:15:45.805        "data_size": 63488
00:15:45.805      }
00:15:45.805    ]
00:15:45.805  }'
00:15:45.805   16:59:38	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:15:45.805   16:59:38	-- common/autotest_common.sh@10 -- # set +x
00:15:46.372    16:59:39	-- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1
00:15:46.372    16:59:39	-- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid'
00:15:46.631  [2024-11-19 16:59:39.465037] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:15:46.922   16:59:39	-- bdev/bdev_raid.sh@430 -- # '[' 3cfd6d3a-0fef-4895-ac3a-697a01b2486b '!=' 3cfd6d3a-0fef-4895-ac3a-697a01b2486b ']'
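The closing check of raid_superblock_test compares the UUID of the reassembled array against the one captured before teardown (3cfd6d3a-...); equality proves the identity came back from the on-disk superblock rather than being regenerated. Sketch, assuming raid_bdev_uuid still holds the value saved at sh@379 above:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py; sock=/var/tmp/spdk-raid.sock
  uuid_now=$($rpc -s $sock bdev_get_bdevs -b raid_bdev1 | jq -r '.[] | .uuid')
  [ "$uuid_now" = "$raid_bdev_uuid" ]   # same UUID as before delete/reassemble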
00:15:46.922   16:59:39	-- bdev/bdev_raid.sh@434 -- # has_redundancy concat
00:15:46.922   16:59:39	-- bdev/bdev_raid.sh@195 -- # case $1 in
00:15:46.922   16:59:39	-- bdev/bdev_raid.sh@197 -- # return 1
00:15:46.922   16:59:39	-- bdev/bdev_raid.sh@511 -- # killprocess 127129
00:15:46.922   16:59:39	-- common/autotest_common.sh@936 -- # '[' -z 127129 ']'
00:15:46.922   16:59:39	-- common/autotest_common.sh@940 -- # kill -0 127129
00:15:46.922    16:59:39	-- common/autotest_common.sh@941 -- # uname
00:15:46.922   16:59:39	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:15:46.922    16:59:39	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 127129
00:15:46.922   16:59:39	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:15:46.922   16:59:39	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:15:46.922   16:59:39	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 127129'
00:15:46.922  killing process with pid 127129
00:15:46.922   16:59:39	-- common/autotest_common.sh@955 -- # kill 127129
00:15:46.922  [2024-11-19 16:59:39.528042] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:15:46.922   16:59:39	-- common/autotest_common.sh@960 -- # wait 127129
00:15:46.922  [2024-11-19 16:59:39.528297] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:15:46.922  [2024-11-19 16:59:39.528592] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:15:46.922  [2024-11-19 16:59:39.528638] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008a80 name raid_bdev1, state offline
00:15:46.922  [2024-11-19 16:59:39.564960] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:15:47.181   16:59:39	-- bdev/bdev_raid.sh@513 -- # return 0
00:15:47.181  
00:15:47.181  real	0m10.031s
00:15:47.181  user	0m17.810s
00:15:47.181  sys	0m1.612s
00:15:47.181   16:59:39	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:15:47.181   16:59:39	-- common/autotest_common.sh@10 -- # set +x
00:15:47.181  ************************************
00:15:47.181  END TEST raid_superblock_test
00:15:47.181  ************************************
00:15:47.181   16:59:39	-- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1
00:15:47.181   16:59:39	-- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid1 3 false
00:15:47.181   16:59:39	-- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']'
00:15:47.181   16:59:39	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:15:47.181   16:59:39	-- common/autotest_common.sh@10 -- # set +x
00:15:47.181  ************************************
00:15:47.181  START TEST raid_state_function_test
00:15:47.181  ************************************
00:15:47.181   16:59:39	-- common/autotest_common.sh@1114 -- # raid_state_function_test raid1 3 false
00:15:47.181   16:59:39	-- bdev/bdev_raid.sh@202 -- # local raid_level=raid1
00:15:47.181   16:59:39	-- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3
00:15:47.181   16:59:39	-- bdev/bdev_raid.sh@204 -- # local superblock=false
00:15:47.181   16:59:39	-- bdev/bdev_raid.sh@205 -- # local raid_bdev
00:15:47.181    16:59:39	-- bdev/bdev_raid.sh@206 -- # (( i = 1 ))
00:15:47.181    16:59:39	-- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:15:47.181    16:59:39	-- bdev/bdev_raid.sh@206 -- # echo BaseBdev1
00:15:47.181    16:59:39	-- bdev/bdev_raid.sh@206 -- # (( i++ ))
00:15:47.181    16:59:39	-- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:15:47.181    16:59:39	-- bdev/bdev_raid.sh@206 -- # echo BaseBdev2
00:15:47.181    16:59:39	-- bdev/bdev_raid.sh@206 -- # (( i++ ))
00:15:47.181    16:59:39	-- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:15:47.181    16:59:39	-- bdev/bdev_raid.sh@206 -- # echo BaseBdev3
00:15:47.181    16:59:39	-- bdev/bdev_raid.sh@206 -- # (( i++ ))
00:15:47.181    16:59:39	-- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:15:47.181   16:59:39	-- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3')
00:15:47.181   16:59:39	-- bdev/bdev_raid.sh@206 -- # local base_bdevs
00:15:47.181   16:59:39	-- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid
00:15:47.181   16:59:39	-- bdev/bdev_raid.sh@208 -- # local strip_size
00:15:47.181   16:59:39	-- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg
00:15:47.181   16:59:39	-- bdev/bdev_raid.sh@210 -- # local superblock_create_arg
00:15:47.181   16:59:39	-- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']'
00:15:47.181   16:59:39	-- bdev/bdev_raid.sh@216 -- # strip_size=0
00:15:47.181   16:59:39	-- bdev/bdev_raid.sh@219 -- # '[' false = true ']'
00:15:47.181   16:59:39	-- bdev/bdev_raid.sh@222 -- # superblock_create_arg=
00:15:47.181   16:59:39	-- bdev/bdev_raid.sh@226 -- # raid_pid=127434
00:15:47.181   16:59:39	-- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 127434'
00:15:47.181  Process raid pid: 127434
00:15:47.181   16:59:39	-- bdev/bdev_raid.sh@228 -- # waitforlisten 127434 /var/tmp/spdk-raid.sock
00:15:47.181   16:59:39	-- common/autotest_common.sh@829 -- # '[' -z 127434 ']'
00:15:47.181   16:59:39	-- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid
00:15:47.181   16:59:39	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock
00:15:47.181   16:59:39	-- common/autotest_common.sh@834 -- # local max_retries=100
00:15:47.181   16:59:39	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...'
00:15:47.181  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...
00:15:47.181   16:59:39	-- common/autotest_common.sh@838 -- # xtrace_disable
00:15:47.181   16:59:39	-- common/autotest_common.sh@10 -- # set +x
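raid_state_function_test launches its own bdev_svc application on a private RPC socket; -L bdev_raid enables the debug log flag that produces the *DEBUG* lines below, and the script blocks until the socket answers before issuing RPCs. A sketch of the launch; the harness uses its waitforlisten helper, approximated here by polling the standard rpc_get_methods RPC:

  /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &
  raid_pid=$!
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  until $rpc -s /var/tmp/spdk-raid.sock rpc_get_methods >/dev/null 2>&1; do sleep 0.1; done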
00:15:47.181  [2024-11-19 16:59:39.963428] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:15:47.181  [2024-11-19 16:59:39.964615] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:15:47.439  [2024-11-19 16:59:40.122330] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:15:47.439  [2024-11-19 16:59:40.171273] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:15:47.439  [2024-11-19 16:59:40.214311] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:15:48.006   16:59:40	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:15:48.006   16:59:40	-- common/autotest_common.sh@862 -- # return 0
00:15:48.006   16:59:40	-- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
00:15:48.266  [2024-11-19 16:59:41.013236] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:15:48.266  [2024-11-19 16:59:41.013585] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:15:48.266  [2024-11-19 16:59:41.013673] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:15:48.266  [2024-11-19 16:59:41.013728] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:15:48.266  [2024-11-19 16:59:41.013755] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:15:48.266  [2024-11-19 16:59:41.013822] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
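bdev_raid_create accepts base bdev names that do not exist yet: the array is registered in configuring state with zero members discovered, and each BaseBdevN is claimed as it appears. Note there is no -z here, since strip size does not apply to raid1 (the script set strip_size=0 above). Sketched:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py; sock=/var/tmp/spdk-raid.sock
  # Base bdevs may be named before they exist; they are claimed on arrival.
  $rpc -s $sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid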
00:15:48.266   16:59:41	-- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3
00:15:48.266   16:59:41	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:15:48.266   16:59:41	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:15:48.266   16:59:41	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:15:48.266   16:59:41	-- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:15:48.266   16:59:41	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:15:48.266   16:59:41	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:15:48.266   16:59:41	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:15:48.266   16:59:41	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:15:48.266   16:59:41	-- bdev/bdev_raid.sh@125 -- # local tmp
00:15:48.266    16:59:41	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:15:48.266    16:59:41	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:15:48.524   16:59:41	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:15:48.524    "name": "Existed_Raid",
00:15:48.524    "uuid": "00000000-0000-0000-0000-000000000000",
00:15:48.524    "strip_size_kb": 0,
00:15:48.524    "state": "configuring",
00:15:48.524    "raid_level": "raid1",
00:15:48.524    "superblock": false,
00:15:48.524    "num_base_bdevs": 3,
00:15:48.524    "num_base_bdevs_discovered": 0,
00:15:48.524    "num_base_bdevs_operational": 3,
00:15:48.524    "base_bdevs_list": [
00:15:48.524      {
00:15:48.524        "name": "BaseBdev1",
00:15:48.524        "uuid": "00000000-0000-0000-0000-000000000000",
00:15:48.524        "is_configured": false,
00:15:48.524        "data_offset": 0,
00:15:48.524        "data_size": 0
00:15:48.524      },
00:15:48.524      {
00:15:48.524        "name": "BaseBdev2",
00:15:48.524        "uuid": "00000000-0000-0000-0000-000000000000",
00:15:48.524        "is_configured": false,
00:15:48.524        "data_offset": 0,
00:15:48.524        "data_size": 0
00:15:48.524      },
00:15:48.524      {
00:15:48.524        "name": "BaseBdev3",
00:15:48.524        "uuid": "00000000-0000-0000-0000-000000000000",
00:15:48.524        "is_configured": false,
00:15:48.524        "data_offset": 0,
00:15:48.524        "data_size": 0
00:15:48.524      }
00:15:48.524    ]
00:15:48.524  }'
00:15:48.524   16:59:41	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:15:48.524   16:59:41	-- common/autotest_common.sh@10 -- # set +x
00:15:49.091   16:59:41	-- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid
00:15:49.350  [2024-11-19 16:59:41.965345] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:15:49.350  [2024-11-19 16:59:41.965628] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005480 name Existed_Raid, state configuring
00:15:49.350   16:59:41	-- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
00:15:49.350  [2024-11-19 16:59:42.149410] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:15:49.350  [2024-11-19 16:59:42.149681] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:15:49.350  [2024-11-19 16:59:42.149792] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:15:49.350  [2024-11-19 16:59:42.149850] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:15:49.350  [2024-11-19 16:59:42.149878] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:15:49.350  [2024-11-19 16:59:42.149925] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:15:49.350   16:59:42	-- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1
00:15:49.609  [2024-11-19 16:59:42.346973] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:15:49.610  BaseBdev1
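bdev_malloc_create 32 512 allocates a 32 MiB RAM disk with a 512-byte block size, matching the num_blocks of 65536 in the JSON below (32 MiB / 512 B = 65536); because Existed_Raid is already waiting on the name, the bdev is claimed the instant it registers. Sketched:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py; sock=/var/tmp/spdk-raid.sock
  # 32 MiB at 512-byte blocks = 65536 blocks, as bdev_get_bdevs reports below.
  $rpc -s $sock bdev_malloc_create 32 512 -b BaseBdev1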
00:15:49.610   16:59:42	-- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1
00:15:49.610   16:59:42	-- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1
00:15:49.610   16:59:42	-- common/autotest_common.sh@898 -- # local bdev_timeout=
00:15:49.610   16:59:42	-- common/autotest_common.sh@899 -- # local i
00:15:49.610   16:59:42	-- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:15:49.610   16:59:42	-- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:15:49.610   16:59:42	-- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:15:49.869   16:59:42	-- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000
00:15:50.128  [
00:15:50.128    {
00:15:50.128      "name": "BaseBdev1",
00:15:50.128      "aliases": [
00:15:50.128        "5eedb6dd-67f8-4622-98fc-14a42ad40dcd"
00:15:50.128      ],
00:15:50.128      "product_name": "Malloc disk",
00:15:50.128      "block_size": 512,
00:15:50.128      "num_blocks": 65536,
00:15:50.128      "uuid": "5eedb6dd-67f8-4622-98fc-14a42ad40dcd",
00:15:50.128      "assigned_rate_limits": {
00:15:50.128        "rw_ios_per_sec": 0,
00:15:50.128        "rw_mbytes_per_sec": 0,
00:15:50.128        "r_mbytes_per_sec": 0,
00:15:50.128        "w_mbytes_per_sec": 0
00:15:50.128      },
00:15:50.128      "claimed": true,
00:15:50.128      "claim_type": "exclusive_write",
00:15:50.128      "zoned": false,
00:15:50.128      "supported_io_types": {
00:15:50.128        "read": true,
00:15:50.128        "write": true,
00:15:50.128        "unmap": true,
00:15:50.128        "write_zeroes": true,
00:15:50.128        "flush": true,
00:15:50.128        "reset": true,
00:15:50.128        "compare": false,
00:15:50.128        "compare_and_write": false,
00:15:50.128        "abort": true,
00:15:50.128        "nvme_admin": false,
00:15:50.128        "nvme_io": false
00:15:50.128      },
00:15:50.129      "memory_domains": [
00:15:50.129        {
00:15:50.129          "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:15:50.129          "dma_device_type": 2
00:15:50.129        }
00:15:50.129      ],
00:15:50.129      "driver_specific": {}
00:15:50.129    }
00:15:50.129  ]
00:15:50.129   16:59:42	-- common/autotest_common.sh@905 -- # return 0
00:15:50.129   16:59:42	-- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3
00:15:50.129   16:59:42	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:15:50.129   16:59:42	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:15:50.129   16:59:42	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:15:50.129   16:59:42	-- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:15:50.129   16:59:42	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:15:50.129   16:59:42	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:15:50.129   16:59:42	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:15:50.129   16:59:42	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:15:50.129   16:59:42	-- bdev/bdev_raid.sh@125 -- # local tmp
00:15:50.129    16:59:42	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:15:50.129    16:59:42	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:15:50.388   16:59:42	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:15:50.388    "name": "Existed_Raid",
00:15:50.388    "uuid": "00000000-0000-0000-0000-000000000000",
00:15:50.388    "strip_size_kb": 0,
00:15:50.388    "state": "configuring",
00:15:50.388    "raid_level": "raid1",
00:15:50.388    "superblock": false,
00:15:50.388    "num_base_bdevs": 3,
00:15:50.388    "num_base_bdevs_discovered": 1,
00:15:50.388    "num_base_bdevs_operational": 3,
00:15:50.388    "base_bdevs_list": [
00:15:50.388      {
00:15:50.388        "name": "BaseBdev1",
00:15:50.388        "uuid": "5eedb6dd-67f8-4622-98fc-14a42ad40dcd",
00:15:50.388        "is_configured": true,
00:15:50.388        "data_offset": 0,
00:15:50.388        "data_size": 65536
00:15:50.388      },
00:15:50.388      {
00:15:50.388        "name": "BaseBdev2",
00:15:50.388        "uuid": "00000000-0000-0000-0000-000000000000",
00:15:50.388        "is_configured": false,
00:15:50.388        "data_offset": 0,
00:15:50.388        "data_size": 0
00:15:50.388      },
00:15:50.388      {
00:15:50.388        "name": "BaseBdev3",
00:15:50.388        "uuid": "00000000-0000-0000-0000-000000000000",
00:15:50.388        "is_configured": false,
00:15:50.388        "data_offset": 0,
00:15:50.388        "data_size": 0
00:15:50.388      }
00:15:50.388    ]
00:15:50.388  }'
00:15:50.388   16:59:42	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:15:50.388   16:59:42	-- common/autotest_common.sh@10 -- # set +x
00:15:50.956   16:59:43	-- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid
00:15:50.956  [2024-11-19 16:59:43.739337] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:15:50.956  [2024-11-19 16:59:43.739615] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005780 name Existed_Raid, state configuring
00:15:50.956   16:59:43	-- bdev/bdev_raid.sh@244 -- # '[' false = true ']'
00:15:50.956   16:59:43	-- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
00:15:51.215  [2024-11-19 16:59:43.931466] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:15:51.215  [2024-11-19 16:59:43.933841] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:15:51.215  [2024-11-19 16:59:43.934045] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:15:51.215  [2024-11-19 16:59:43.934145] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:15:51.215  [2024-11-19 16:59:43.934206] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:15:51.215   16:59:43	-- bdev/bdev_raid.sh@254 -- # (( i = 1 ))
00:15:51.215   16:59:43	-- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs ))
00:15:51.215   16:59:43	-- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3
00:15:51.215   16:59:43	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:15:51.215   16:59:43	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:15:51.215   16:59:43	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:15:51.216   16:59:43	-- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:15:51.216   16:59:43	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:15:51.216   16:59:43	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:15:51.216   16:59:43	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:15:51.216   16:59:43	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:15:51.216   16:59:43	-- bdev/bdev_raid.sh@125 -- # local tmp
00:15:51.216    16:59:43	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:15:51.216    16:59:43	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:15:51.475   16:59:44	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:15:51.475    "name": "Existed_Raid",
00:15:51.475    "uuid": "00000000-0000-0000-0000-000000000000",
00:15:51.475    "strip_size_kb": 0,
00:15:51.475    "state": "configuring",
00:15:51.475    "raid_level": "raid1",
00:15:51.475    "superblock": false,
00:15:51.475    "num_base_bdevs": 3,
00:15:51.475    "num_base_bdevs_discovered": 1,
00:15:51.475    "num_base_bdevs_operational": 3,
00:15:51.475    "base_bdevs_list": [
00:15:51.475      {
00:15:51.475        "name": "BaseBdev1",
00:15:51.475        "uuid": "5eedb6dd-67f8-4622-98fc-14a42ad40dcd",
00:15:51.475        "is_configured": true,
00:15:51.475        "data_offset": 0,
00:15:51.475        "data_size": 65536
00:15:51.475      },
00:15:51.475      {
00:15:51.475        "name": "BaseBdev2",
00:15:51.475        "uuid": "00000000-0000-0000-0000-000000000000",
00:15:51.475        "is_configured": false,
00:15:51.475        "data_offset": 0,
00:15:51.475        "data_size": 0
00:15:51.475      },
00:15:51.475      {
00:15:51.475        "name": "BaseBdev3",
00:15:51.475        "uuid": "00000000-0000-0000-0000-000000000000",
00:15:51.475        "is_configured": false,
00:15:51.475        "data_offset": 0,
00:15:51.475        "data_size": 0
00:15:51.475      }
00:15:51.475    ]
00:15:51.475  }'
00:15:51.475   16:59:44	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:15:51.475   16:59:44	-- common/autotest_common.sh@10 -- # set +x
00:15:52.051   16:59:44	-- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2
00:15:52.051  [2024-11-19 16:59:44.874325] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:15:52.051  BaseBdev2
00:15:52.051   16:59:44	-- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2
00:15:52.051   16:59:44	-- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2
00:15:52.051   16:59:44	-- common/autotest_common.sh@898 -- # local bdev_timeout=
00:15:52.051   16:59:44	-- common/autotest_common.sh@899 -- # local i
00:15:52.051   16:59:44	-- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:15:52.051   16:59:44	-- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:15:52.051   16:59:44	-- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:15:52.314   16:59:45	-- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000
00:15:52.574  [
00:15:52.574    {
00:15:52.574      "name": "BaseBdev2",
00:15:52.574      "aliases": [
00:15:52.574        "3098e6a7-2192-40e1-a4bd-326811d7370a"
00:15:52.574      ],
00:15:52.574      "product_name": "Malloc disk",
00:15:52.574      "block_size": 512,
00:15:52.574      "num_blocks": 65536,
00:15:52.574      "uuid": "3098e6a7-2192-40e1-a4bd-326811d7370a",
00:15:52.574      "assigned_rate_limits": {
00:15:52.574        "rw_ios_per_sec": 0,
00:15:52.574        "rw_mbytes_per_sec": 0,
00:15:52.574        "r_mbytes_per_sec": 0,
00:15:52.574        "w_mbytes_per_sec": 0
00:15:52.574      },
00:15:52.574      "claimed": true,
00:15:52.574      "claim_type": "exclusive_write",
00:15:52.574      "zoned": false,
00:15:52.574      "supported_io_types": {
00:15:52.574        "read": true,
00:15:52.574        "write": true,
00:15:52.574        "unmap": true,
00:15:52.574        "write_zeroes": true,
00:15:52.574        "flush": true,
00:15:52.574        "reset": true,
00:15:52.574        "compare": false,
00:15:52.574        "compare_and_write": false,
00:15:52.574        "abort": true,
00:15:52.574        "nvme_admin": false,
00:15:52.574        "nvme_io": false
00:15:52.574      },
00:15:52.574      "memory_domains": [
00:15:52.574        {
00:15:52.574          "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:15:52.574          "dma_device_type": 2
00:15:52.574        }
00:15:52.574      ],
00:15:52.574      "driver_specific": {}
00:15:52.574    }
00:15:52.574  ]
00:15:52.574   16:59:45	-- common/autotest_common.sh@905 -- # return 0
00:15:52.574   16:59:45	-- bdev/bdev_raid.sh@254 -- # (( i++ ))
00:15:52.574   16:59:45	-- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs ))
00:15:52.574   16:59:45	-- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3
00:15:52.574   16:59:45	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:15:52.574   16:59:45	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:15:52.574   16:59:45	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:15:52.574   16:59:45	-- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:15:52.574   16:59:45	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:15:52.574   16:59:45	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:15:52.574   16:59:45	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:15:52.574   16:59:45	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:15:52.574   16:59:45	-- bdev/bdev_raid.sh@125 -- # local tmp
00:15:52.574    16:59:45	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:15:52.574    16:59:45	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:15:52.833   16:59:45	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:15:52.833    "name": "Existed_Raid",
00:15:52.833    "uuid": "00000000-0000-0000-0000-000000000000",
00:15:52.833    "strip_size_kb": 0,
00:15:52.833    "state": "configuring",
00:15:52.833    "raid_level": "raid1",
00:15:52.833    "superblock": false,
00:15:52.833    "num_base_bdevs": 3,
00:15:52.833    "num_base_bdevs_discovered": 2,
00:15:52.833    "num_base_bdevs_operational": 3,
00:15:52.833    "base_bdevs_list": [
00:15:52.833      {
00:15:52.833        "name": "BaseBdev1",
00:15:52.833        "uuid": "5eedb6dd-67f8-4622-98fc-14a42ad40dcd",
00:15:52.833        "is_configured": true,
00:15:52.833        "data_offset": 0,
00:15:52.833        "data_size": 65536
00:15:52.833      },
00:15:52.833      {
00:15:52.833        "name": "BaseBdev2",
00:15:52.833        "uuid": "3098e6a7-2192-40e1-a4bd-326811d7370a",
00:15:52.833        "is_configured": true,
00:15:52.833        "data_offset": 0,
00:15:52.833        "data_size": 65536
00:15:52.833      },
00:15:52.833      {
00:15:52.833        "name": "BaseBdev3",
00:15:52.833        "uuid": "00000000-0000-0000-0000-000000000000",
00:15:52.833        "is_configured": false,
00:15:52.833        "data_offset": 0,
00:15:52.833        "data_size": 0
00:15:52.833      }
00:15:52.833    ]
00:15:52.833  }'
00:15:52.833   16:59:45	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:15:52.833   16:59:45	-- common/autotest_common.sh@10 -- # set +x
00:15:53.401   16:59:46	-- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3
00:15:53.401  [2024-11-19 16:59:46.248815] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:15:53.401  [2024-11-19 16:59:46.248893] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006080
00:15:53.401  [2024-11-19 16:59:46.248902] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512
00:15:53.401  [2024-11-19 16:59:46.249069] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002050
00:15:53.401  [2024-11-19 16:59:46.249497] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006080
00:15:53.401  [2024-11-19 16:59:46.249515] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006080
00:15:53.401  [2024-11-19 16:59:46.249770] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:15:53.401  BaseBdev3
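Creating BaseBdev3 brings the array to its full member count, so configuration continues immediately: the io device registers with blockcnt 65536 (raid1 exposes one mirror's capacity, not the sum of the members) and Existed_Raid goes online. Sketched:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py; sock=/var/tmp/spdk-raid.sock
  $rpc -s $sock bdev_malloc_create 32 512 -b BaseBdev3
  $rpc -s $sock bdev_raid_get_bdevs all \
    | jq -r '.[] | select(.name == "Existed_Raid").state'   # expect "online"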
00:15:53.660   16:59:46	-- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3
00:15:53.660   16:59:46	-- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3
00:15:53.660   16:59:46	-- common/autotest_common.sh@898 -- # local bdev_timeout=
00:15:53.660   16:59:46	-- common/autotest_common.sh@899 -- # local i
00:15:53.660   16:59:46	-- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:15:53.660   16:59:46	-- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:15:53.660   16:59:46	-- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:15:53.918   16:59:46	-- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000
00:15:54.177  [
00:15:54.177    {
00:15:54.177      "name": "BaseBdev3",
00:15:54.177      "aliases": [
00:15:54.177        "f630f9e0-da01-45a1-9962-92c24e87cd65"
00:15:54.177      ],
00:15:54.177      "product_name": "Malloc disk",
00:15:54.177      "block_size": 512,
00:15:54.177      "num_blocks": 65536,
00:15:54.177      "uuid": "f630f9e0-da01-45a1-9962-92c24e87cd65",
00:15:54.177      "assigned_rate_limits": {
00:15:54.177        "rw_ios_per_sec": 0,
00:15:54.177        "rw_mbytes_per_sec": 0,
00:15:54.177        "r_mbytes_per_sec": 0,
00:15:54.177        "w_mbytes_per_sec": 0
00:15:54.177      },
00:15:54.177      "claimed": true,
00:15:54.177      "claim_type": "exclusive_write",
00:15:54.177      "zoned": false,
00:15:54.177      "supported_io_types": {
00:15:54.177        "read": true,
00:15:54.177        "write": true,
00:15:54.177        "unmap": true,
00:15:54.177        "write_zeroes": true,
00:15:54.177        "flush": true,
00:15:54.177        "reset": true,
00:15:54.177        "compare": false,
00:15:54.177        "compare_and_write": false,
00:15:54.177        "abort": true,
00:15:54.177        "nvme_admin": false,
00:15:54.177        "nvme_io": false
00:15:54.177      },
00:15:54.177      "memory_domains": [
00:15:54.177        {
00:15:54.177          "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:15:54.177          "dma_device_type": 2
00:15:54.177        }
00:15:54.177      ],
00:15:54.177      "driver_specific": {}
00:15:54.177    }
00:15:54.177  ]
00:15:54.177   16:59:46	-- common/autotest_common.sh@905 -- # return 0
00:15:54.177   16:59:46	-- bdev/bdev_raid.sh@254 -- # (( i++ ))
00:15:54.177   16:59:46	-- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs ))
00:15:54.177   16:59:46	-- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3
00:15:54.177   16:59:46	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:15:54.177   16:59:46	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:15:54.177   16:59:46	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:15:54.177   16:59:46	-- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:15:54.177   16:59:46	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:15:54.177   16:59:46	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:15:54.177   16:59:46	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:15:54.177   16:59:46	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:15:54.177   16:59:46	-- bdev/bdev_raid.sh@125 -- # local tmp
00:15:54.177    16:59:46	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:15:54.177    16:59:46	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:15:54.177   16:59:46	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:15:54.177    "name": "Existed_Raid",
00:15:54.177    "uuid": "97e92f7f-fe3b-4248-9233-46cd04b020dc",
00:15:54.177    "strip_size_kb": 0,
00:15:54.177    "state": "online",
00:15:54.177    "raid_level": "raid1",
00:15:54.177    "superblock": false,
00:15:54.177    "num_base_bdevs": 3,
00:15:54.177    "num_base_bdevs_discovered": 3,
00:15:54.177    "num_base_bdevs_operational": 3,
00:15:54.177    "base_bdevs_list": [
00:15:54.177      {
00:15:54.177        "name": "BaseBdev1",
00:15:54.177        "uuid": "5eedb6dd-67f8-4622-98fc-14a42ad40dcd",
00:15:54.177        "is_configured": true,
00:15:54.177        "data_offset": 0,
00:15:54.177        "data_size": 65536
00:15:54.177      },
00:15:54.177      {
00:15:54.177        "name": "BaseBdev2",
00:15:54.177        "uuid": "3098e6a7-2192-40e1-a4bd-326811d7370a",
00:15:54.177        "is_configured": true,
00:15:54.177        "data_offset": 0,
00:15:54.177        "data_size": 65536
00:15:54.177      },
00:15:54.177      {
00:15:54.177        "name": "BaseBdev3",
00:15:54.177        "uuid": "f630f9e0-da01-45a1-9962-92c24e87cd65",
00:15:54.177        "is_configured": true,
00:15:54.177        "data_offset": 0,
00:15:54.177        "data_size": 65536
00:15:54.177      }
00:15:54.177    ]
00:15:54.177  }'
00:15:54.177   16:59:46	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:15:54.177   16:59:46	-- common/autotest_common.sh@10 -- # set +x
00:15:54.744   16:59:47	-- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1
00:15:55.005  [2024-11-19 16:59:47.817355] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:15:55.005   16:59:47	-- bdev/bdev_raid.sh@263 -- # local expected_state
00:15:55.005   16:59:47	-- bdev/bdev_raid.sh@264 -- # has_redundancy raid1
00:15:55.005   16:59:47	-- bdev/bdev_raid.sh@195 -- # case $1 in
00:15:55.005   16:59:47	-- bdev/bdev_raid.sh@196 -- # return 0
00:15:55.005   16:59:47	-- bdev/bdev_raid.sh@267 -- # expected_state=online
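Deleting BaseBdev1 out from under the online array is the point of the state-function test: has_redundancy returns 0 for raid1 because it mirrors, so the expected state stays online with 2 of 3 members operational; contrast the concat case at sh@434 above, where has_redundancy returned 1. Sketched:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py; sock=/var/tmp/spdk-raid.sock
  $rpc -s $sock bdev_malloc_delete BaseBdev1
  # raid1 tolerates the loss: still "online", num_base_bdevs_operational drops to 2.
  $rpc -s $sock bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid").state'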
00:15:55.005   16:59:47	-- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2
00:15:55.005   16:59:47	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:15:55.005   16:59:47	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:15:55.005   16:59:47	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:15:55.005   16:59:47	-- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:15:55.005   16:59:47	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2
00:15:55.005   16:59:47	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:15:55.005   16:59:47	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:15:55.005   16:59:47	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:15:55.005   16:59:47	-- bdev/bdev_raid.sh@125 -- # local tmp
00:15:55.005    16:59:47	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:15:55.005    16:59:47	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:15:55.271   16:59:48	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:15:55.271    "name": "Existed_Raid",
00:15:55.271    "uuid": "97e92f7f-fe3b-4248-9233-46cd04b020dc",
00:15:55.271    "strip_size_kb": 0,
00:15:55.271    "state": "online",
00:15:55.271    "raid_level": "raid1",
00:15:55.271    "superblock": false,
00:15:55.271    "num_base_bdevs": 3,
00:15:55.271    "num_base_bdevs_discovered": 2,
00:15:55.271    "num_base_bdevs_operational": 2,
00:15:55.271    "base_bdevs_list": [
00:15:55.271      {
00:15:55.271        "name": null,
00:15:55.271        "uuid": "00000000-0000-0000-0000-000000000000",
00:15:55.271        "is_configured": false,
00:15:55.271        "data_offset": 0,
00:15:55.271        "data_size": 65536
00:15:55.271      },
00:15:55.271      {
00:15:55.271        "name": "BaseBdev2",
00:15:55.271        "uuid": "3098e6a7-2192-40e1-a4bd-326811d7370a",
00:15:55.271        "is_configured": true,
00:15:55.271        "data_offset": 0,
00:15:55.271        "data_size": 65536
00:15:55.271      },
00:15:55.271      {
00:15:55.271        "name": "BaseBdev3",
00:15:55.271        "uuid": "f630f9e0-da01-45a1-9962-92c24e87cd65",
00:15:55.271        "is_configured": true,
00:15:55.271        "data_offset": 0,
00:15:55.271        "data_size": 65536
00:15:55.271      }
00:15:55.271    ]
00:15:55.271  }'
00:15:55.271   16:59:48	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:15:55.271   16:59:48	-- common/autotest_common.sh@10 -- # set +x
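Note what just happened: BaseBdev1 was deleted, yet the array stayed online with 2/2 operational members and a null placeholder (all-zero UUID) in its slot, because raid1 tolerates member loss. That decision comes from the has_redundancy call at bdev_raid.sh line 264 in the trace; a hedged sketch of such a helper (the exact case arms in bdev_raid.sh are not shown here):

    # Levels with redundancy keep expected_state=online after a removal
    has_redundancy() {
        case $1 in
            raid1) return 0 ;;   # mirrored: survives losing a member
            *)     return 1 ;;   # e.g. a raid0-style level would go offline
        esac
    }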
00:15:55.855   16:59:48	-- bdev/bdev_raid.sh@273 -- # (( i = 1 ))
00:15:55.855   16:59:48	-- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs ))
00:15:55.855    16:59:48	-- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:15:55.855    16:59:48	-- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]'
00:15:56.114   16:59:48	-- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid
00:15:56.114   16:59:48	-- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:15:56.114   16:59:48	-- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2
00:15:56.114  [2024-11-19 16:59:48.949821] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:15:56.372   16:59:48	-- bdev/bdev_raid.sh@273 -- # (( i++ ))
00:15:56.372   16:59:48	-- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs ))
00:15:56.372    16:59:48	-- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:15:56.372    16:59:48	-- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]'
00:15:56.372   16:59:49	-- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid
00:15:56.372   16:59:49	-- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:15:56.372   16:59:49	-- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3
00:15:56.629  [2024-11-19 16:59:49.330240] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:15:56.629  [2024-11-19 16:59:49.330289] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:15:56.629  [2024-11-19 16:59:49.330365] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:15:56.629  [2024-11-19 16:59:49.342554] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:15:56.629  [2024-11-19 16:59:49.342583] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006080 name Existed_Raid, state offline
00:15:56.629   16:59:49	-- bdev/bdev_raid.sh@273 -- # (( i++ ))
00:15:56.629   16:59:49	-- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs ))
00:15:56.629    16:59:49	-- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:15:56.629    16:59:49	-- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)'
00:15:56.887   16:59:49	-- bdev/bdev_raid.sh@281 -- # raid_bdev=
00:15:56.887   16:59:49	-- bdev/bdev_raid.sh@282 -- # '[' -n '' ']'
00:15:56.887   16:59:49	-- bdev/bdev_raid.sh@287 -- # killprocess 127434
00:15:56.887   16:59:49	-- common/autotest_common.sh@936 -- # '[' -z 127434 ']'
00:15:56.887   16:59:49	-- common/autotest_common.sh@940 -- # kill -0 127434
00:15:56.887    16:59:49	-- common/autotest_common.sh@941 -- # uname
00:15:56.887   16:59:49	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:15:56.887    16:59:49	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 127434
00:15:56.887   16:59:49	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:15:56.887   16:59:49	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:15:56.887  killing process with pid 127434
00:15:56.887   16:59:49	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 127434'
00:15:56.887   16:59:49	-- common/autotest_common.sh@955 -- # kill 127434
00:15:56.887  [2024-11-19 16:59:49.578577] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:15:56.887  [2024-11-19 16:59:49.578676] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:15:56.887   16:59:49	-- common/autotest_common.sh@960 -- # wait 127434
00:15:57.147   16:59:49	-- bdev/bdev_raid.sh@289 -- # return 0
00:15:57.147  
00:15:57.147  real	0m9.943s
00:15:57.147  user	0m17.733s
00:15:57.147  sys	0m1.727s
00:15:57.147   16:59:49	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:15:57.147   16:59:49	-- common/autotest_common.sh@10 -- # set +x
00:15:57.147  ************************************
00:15:57.147  END TEST raid_state_function_test
00:15:57.147  ************************************
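The teardown traced above (killprocess 127434) follows a defensive pattern: confirm the pid is non-empty and still alive, check via ps that it is the expected reactor rather than sudo, then kill and reap it. A condensed sketch of that flow with the error handling trimmed:

    # Stop the bdev_svc daemon started for this test (pid recorded in raid_pid)
    pid=127434
    kill -0 "$pid"                                     # still alive?
    [ "$(ps --no-headers -o comm= "$pid")" = sudo ] || {
        echo "killing process with pid $pid"
        kill "$pid"
    }
    wait "$pid"   # works here because the daemon is a child of the test shell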
00:15:57.147   16:59:49	-- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true
00:15:57.147   16:59:49	-- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']'
00:15:57.147   16:59:49	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:15:57.147   16:59:49	-- common/autotest_common.sh@10 -- # set +x
00:15:57.147  ************************************
00:15:57.147  START TEST raid_state_function_test_sb
00:15:57.147  ************************************
00:15:57.147   16:59:49	-- common/autotest_common.sh@1114 -- # raid_state_function_test raid1 3 true
00:15:57.147   16:59:49	-- bdev/bdev_raid.sh@202 -- # local raid_level=raid1
00:15:57.147   16:59:49	-- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3
00:15:57.147   16:59:49	-- bdev/bdev_raid.sh@204 -- # local superblock=true
00:15:57.147   16:59:49	-- bdev/bdev_raid.sh@205 -- # local raid_bdev
00:15:57.147    16:59:49	-- bdev/bdev_raid.sh@206 -- # (( i = 1 ))
00:15:57.147    16:59:49	-- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:15:57.147    16:59:49	-- bdev/bdev_raid.sh@206 -- # echo BaseBdev1
00:15:57.147    16:59:49	-- bdev/bdev_raid.sh@206 -- # (( i++ ))
00:15:57.147    16:59:49	-- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:15:57.147    16:59:49	-- bdev/bdev_raid.sh@206 -- # echo BaseBdev2
00:15:57.147    16:59:49	-- bdev/bdev_raid.sh@206 -- # (( i++ ))
00:15:57.147    16:59:49	-- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:15:57.147    16:59:49	-- bdev/bdev_raid.sh@206 -- # echo BaseBdev3
00:15:57.147    16:59:49	-- bdev/bdev_raid.sh@206 -- # (( i++ ))
00:15:57.147    16:59:49	-- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:15:57.147   16:59:49	-- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3')
00:15:57.147   16:59:49	-- bdev/bdev_raid.sh@206 -- # local base_bdevs
00:15:57.147   16:59:49	-- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid
00:15:57.147   16:59:49	-- bdev/bdev_raid.sh@208 -- # local strip_size
00:15:57.147   16:59:49	-- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg
00:15:57.147   16:59:49	-- bdev/bdev_raid.sh@210 -- # local superblock_create_arg
00:15:57.147   16:59:49	-- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']'
00:15:57.147   16:59:49	-- bdev/bdev_raid.sh@216 -- # strip_size=0
00:15:57.147   16:59:49	-- bdev/bdev_raid.sh@219 -- # '[' true = true ']'
00:15:57.147   16:59:49	-- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s
00:15:57.147   16:59:49	-- bdev/bdev_raid.sh@226 -- # raid_pid=127792
00:15:57.147   16:59:49	-- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid
00:15:57.147  Process raid pid: 127792
00:15:57.147   16:59:49	-- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 127792'
00:15:57.147   16:59:49	-- bdev/bdev_raid.sh@228 -- # waitforlisten 127792 /var/tmp/spdk-raid.sock
00:15:57.147   16:59:49	-- common/autotest_common.sh@829 -- # '[' -z 127792 ']'
00:15:57.147   16:59:49	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock
00:15:57.147   16:59:49	-- common/autotest_common.sh@834 -- # local max_retries=100
00:15:57.147  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...
00:15:57.147   16:59:49	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...'
00:15:57.147   16:59:49	-- common/autotest_common.sh@838 -- # xtrace_disable
00:15:57.147   16:59:49	-- common/autotest_common.sh@10 -- # set +x
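waitforlisten blocks until the freshly spawned bdev_svc answers on /var/tmp/spdk-raid.sock. A hedged sketch of that kind of poll loop (not SPDK's exact helper; rpc_get_methods is simply a cheap RPC to probe with):

    i=0; max_retries=100
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
            rpc_get_methods >/dev/null 2>&1; do
        (( ++i > max_retries )) && { echo 'spdk-raid.sock never came up' >&2; exit 1; }
        sleep 0.1
    done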
00:15:57.147  [2024-11-19 16:59:49.985797] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:15:57.147  [2024-11-19 16:59:49.986080] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:15:57.406  [2024-11-19 16:59:50.148713] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:15:57.406  [2024-11-19 16:59:50.203424] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:15:57.406  [2024-11-19 16:59:50.251923] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:15:57.974   16:59:50	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:15:57.974   16:59:50	-- common/autotest_common.sh@862 -- # return 0
00:15:57.974   16:59:50	-- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
00:15:58.233  [2024-11-19 16:59:50.980701] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:15:58.233  [2024-11-19 16:59:50.980795] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:15:58.233  [2024-11-19 16:59:50.980807] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:15:58.233  [2024-11-19 16:59:50.980827] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:15:58.233  [2024-11-19 16:59:50.980835] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:15:58.233  [2024-11-19 16:59:50.980880] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:15:58.233   16:59:50	-- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3
00:15:58.233   16:59:50	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:15:58.233   16:59:50	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:15:58.233   16:59:50	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:15:58.233   16:59:50	-- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:15:58.233   16:59:50	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:15:58.233   16:59:50	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:15:58.233   16:59:50	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:15:58.233   16:59:50	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:15:58.233   16:59:50	-- bdev/bdev_raid.sh@125 -- # local tmp
00:15:58.233    16:59:50	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:15:58.233    16:59:50	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:15:58.492   16:59:51	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:15:58.492    "name": "Existed_Raid",
00:15:58.492    "uuid": "20814c0b-8ff1-48cb-bd74-1988be9df84d",
00:15:58.492    "strip_size_kb": 0,
00:15:58.492    "state": "configuring",
00:15:58.492    "raid_level": "raid1",
00:15:58.492    "superblock": true,
00:15:58.492    "num_base_bdevs": 3,
00:15:58.492    "num_base_bdevs_discovered": 0,
00:15:58.492    "num_base_bdevs_operational": 3,
00:15:58.492    "base_bdevs_list": [
00:15:58.492      {
00:15:58.492        "name": "BaseBdev1",
00:15:58.492        "uuid": "00000000-0000-0000-0000-000000000000",
00:15:58.492        "is_configured": false,
00:15:58.492        "data_offset": 0,
00:15:58.492        "data_size": 0
00:15:58.492      },
00:15:58.492      {
00:15:58.492        "name": "BaseBdev2",
00:15:58.492        "uuid": "00000000-0000-0000-0000-000000000000",
00:15:58.492        "is_configured": false,
00:15:58.492        "data_offset": 0,
00:15:58.492        "data_size": 0
00:15:58.492      },
00:15:58.492      {
00:15:58.492        "name": "BaseBdev3",
00:15:58.492        "uuid": "00000000-0000-0000-0000-000000000000",
00:15:58.492        "is_configured": false,
00:15:58.492        "data_offset": 0,
00:15:58.492        "data_size": 0
00:15:58.492      }
00:15:58.492    ]
00:15:58.492  }'
00:15:58.492   16:59:51	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:15:58.492   16:59:51	-- common/autotest_common.sh@10 -- # set +x
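In the configuring dump above every slot still shows the all-zero placeholder UUID and is_configured: false, since the base bdevs named at create time do not exist yet. The same jq pipeline can count how many slots are actually backed by a discovered bdev:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        bdev_raid_get_bdevs all |
      jq '.[] | select(.name == "Existed_Raid")
          | [.base_bdevs_list[] | select(.is_configured)] | length'

which would print 0 here, matching num_base_bdevs_discovered.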
00:15:59.058   16:59:51	-- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid
00:15:59.316  [2024-11-19 16:59:52.000781] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:15:59.316  [2024-11-19 16:59:52.000833] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005480 name Existed_Raid, state configuring
00:15:59.316   16:59:52	-- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
00:15:59.574  [2024-11-19 16:59:52.296891] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:15:59.574  [2024-11-19 16:59:52.296980] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:15:59.574  [2024-11-19 16:59:52.296991] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:15:59.574  [2024-11-19 16:59:52.297013] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:15:59.574  [2024-11-19 16:59:52.297020] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:15:59.574  [2024-11-19 16:59:52.297046] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:15:59.574   16:59:52	-- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1
00:15:59.833  [2024-11-19 16:59:52.502750] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:15:59.833  BaseBdev1
00:15:59.833   16:59:52	-- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1
00:15:59.833   16:59:52	-- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1
00:15:59.833   16:59:52	-- common/autotest_common.sh@898 -- # local bdev_timeout=
00:15:59.833   16:59:52	-- common/autotest_common.sh@899 -- # local i
00:15:59.833   16:59:52	-- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:15:59.833   16:59:52	-- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:15:59.833   16:59:52	-- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:16:00.091   16:59:52	-- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000
00:16:00.091  [
00:16:00.091    {
00:16:00.091      "name": "BaseBdev1",
00:16:00.091      "aliases": [
00:16:00.091        "b035f71a-8c53-40ff-a4a4-1b3154e0d29f"
00:16:00.091      ],
00:16:00.091      "product_name": "Malloc disk",
00:16:00.091      "block_size": 512,
00:16:00.091      "num_blocks": 65536,
00:16:00.091      "uuid": "b035f71a-8c53-40ff-a4a4-1b3154e0d29f",
00:16:00.091      "assigned_rate_limits": {
00:16:00.091        "rw_ios_per_sec": 0,
00:16:00.091        "rw_mbytes_per_sec": 0,
00:16:00.091        "r_mbytes_per_sec": 0,
00:16:00.091        "w_mbytes_per_sec": 0
00:16:00.091      },
00:16:00.091      "claimed": true,
00:16:00.091      "claim_type": "exclusive_write",
00:16:00.091      "zoned": false,
00:16:00.091      "supported_io_types": {
00:16:00.091        "read": true,
00:16:00.091        "write": true,
00:16:00.091        "unmap": true,
00:16:00.091        "write_zeroes": true,
00:16:00.091        "flush": true,
00:16:00.091        "reset": true,
00:16:00.091        "compare": false,
00:16:00.091        "compare_and_write": false,
00:16:00.091        "abort": true,
00:16:00.091        "nvme_admin": false,
00:16:00.091        "nvme_io": false
00:16:00.091      },
00:16:00.091      "memory_domains": [
00:16:00.091        {
00:16:00.091          "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:16:00.091          "dma_device_type": 2
00:16:00.091        }
00:16:00.091      ],
00:16:00.091      "driver_specific": {}
00:16:00.091    }
00:16:00.091  ]
00:16:00.350   16:59:52	-- common/autotest_common.sh@905 -- # return 0
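The waitforbdev pattern traced above is two RPCs: flush any pending examine callbacks, then look the bdev up with a timeout (bdev_timeout defaulted to 2000 ms because none was passed). Roughly:

    waitforbdev() {
        local bdev_name=$1 bdev_timeout=${2:-2000}
        local -a rpc=(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock)
        "${rpc[@]}" bdev_wait_for_examine
        "${rpc[@]}" bdev_get_bdevs -b "$bdev_name" -t "$bdev_timeout"
    }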
00:16:00.350   16:59:52	-- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3
00:16:00.350   16:59:52	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:16:00.350   16:59:52	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:16:00.350   16:59:52	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:16:00.350   16:59:52	-- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:16:00.350   16:59:52	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:16:00.350   16:59:52	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:16:00.350   16:59:52	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:16:00.350   16:59:52	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:16:00.350   16:59:52	-- bdev/bdev_raid.sh@125 -- # local tmp
00:16:00.350    16:59:52	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:16:00.350    16:59:52	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:16:00.350   16:59:53	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:16:00.350    "name": "Existed_Raid",
00:16:00.350    "uuid": "e6c855a9-d993-403c-924e-96b2bd5a58cb",
00:16:00.350    "strip_size_kb": 0,
00:16:00.350    "state": "configuring",
00:16:00.350    "raid_level": "raid1",
00:16:00.350    "superblock": true,
00:16:00.350    "num_base_bdevs": 3,
00:16:00.350    "num_base_bdevs_discovered": 1,
00:16:00.350    "num_base_bdevs_operational": 3,
00:16:00.350    "base_bdevs_list": [
00:16:00.350      {
00:16:00.350        "name": "BaseBdev1",
00:16:00.350        "uuid": "b035f71a-8c53-40ff-a4a4-1b3154e0d29f",
00:16:00.350        "is_configured": true,
00:16:00.350        "data_offset": 2048,
00:16:00.350        "data_size": 63488
00:16:00.350      },
00:16:00.350      {
00:16:00.350        "name": "BaseBdev2",
00:16:00.350        "uuid": "00000000-0000-0000-0000-000000000000",
00:16:00.350        "is_configured": false,
00:16:00.350        "data_offset": 0,
00:16:00.350        "data_size": 0
00:16:00.350      },
00:16:00.350      {
00:16:00.350        "name": "BaseBdev3",
00:16:00.350        "uuid": "00000000-0000-0000-0000-000000000000",
00:16:00.350        "is_configured": false,
00:16:00.350        "data_offset": 0,
00:16:00.350        "data_size": 0
00:16:00.350      }
00:16:00.350    ]
00:16:00.350  }'
00:16:00.350   16:59:53	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:16:00.350   16:59:53	-- common/autotest_common.sh@10 -- # set +x
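Worth reading off the dumps at this point: this run created the array with -s, so each member reserves room for an on-disk superblock. The malloc bdevs are 65536 blocks of 512 bytes, and BaseBdev1 now shows data_offset 2048 with data_size 63488, i.e. 65536 - 2048 = 63488 blocks left for data after the reserved region, whereas the earlier non-superblock run reported data_offset 0 and data_size 65536.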
00:16:00.918   16:59:53	-- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid
00:16:01.177  [2024-11-19 16:59:53.867173] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:16:01.177  [2024-11-19 16:59:53.867248] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005780 name Existed_Raid, state configuring
00:16:01.177   16:59:53	-- bdev/bdev_raid.sh@244 -- # '[' true = true ']'
00:16:01.177   16:59:53	-- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1
00:16:01.435   16:59:54	-- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1
00:16:01.693  BaseBdev1
00:16:01.693   16:59:54	-- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1
00:16:01.693   16:59:54	-- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1
00:16:01.693   16:59:54	-- common/autotest_common.sh@898 -- # local bdev_timeout=
00:16:01.693   16:59:54	-- common/autotest_common.sh@899 -- # local i
00:16:01.693   16:59:54	-- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:16:01.693   16:59:54	-- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:16:01.693   16:59:54	-- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:16:01.951   16:59:54	-- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000
00:16:01.951  [
00:16:01.951    {
00:16:01.951      "name": "BaseBdev1",
00:16:01.951      "aliases": [
00:16:01.951        "7657689b-b73c-44d4-9efd-6e197d6aa46d"
00:16:01.951      ],
00:16:01.951      "product_name": "Malloc disk",
00:16:01.951      "block_size": 512,
00:16:01.951      "num_blocks": 65536,
00:16:01.951      "uuid": "7657689b-b73c-44d4-9efd-6e197d6aa46d",
00:16:01.951      "assigned_rate_limits": {
00:16:01.951        "rw_ios_per_sec": 0,
00:16:01.951        "rw_mbytes_per_sec": 0,
00:16:01.951        "r_mbytes_per_sec": 0,
00:16:01.951        "w_mbytes_per_sec": 0
00:16:01.951      },
00:16:01.951      "claimed": false,
00:16:01.951      "zoned": false,
00:16:01.951      "supported_io_types": {
00:16:01.951        "read": true,
00:16:01.951        "write": true,
00:16:01.951        "unmap": true,
00:16:01.951        "write_zeroes": true,
00:16:01.951        "flush": true,
00:16:01.951        "reset": true,
00:16:01.951        "compare": false,
00:16:01.951        "compare_and_write": false,
00:16:01.951        "abort": true,
00:16:01.951        "nvme_admin": false,
00:16:01.951        "nvme_io": false
00:16:01.951      },
00:16:01.951      "memory_domains": [
00:16:01.951        {
00:16:01.951          "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:16:01.951          "dma_device_type": 2
00:16:01.951        }
00:16:01.951      ],
00:16:01.951      "driver_specific": {}
00:16:01.951    }
00:16:01.951  ]
00:16:01.951   16:59:54	-- common/autotest_common.sh@905 -- # return 0
00:16:01.951   16:59:54	-- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
00:16:02.210  [2024-11-19 16:59:54.965163] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:16:02.210  [2024-11-19 16:59:54.967369] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:16:02.210  [2024-11-19 16:59:54.967444] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:16:02.210  [2024-11-19 16:59:54.967455] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:16:02.210  [2024-11-19 16:59:54.967480] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:16:02.210   16:59:54	-- bdev/bdev_raid.sh@254 -- # (( i = 1 ))
00:16:02.210   16:59:54	-- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs ))
00:16:02.210   16:59:54	-- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3
00:16:02.210   16:59:54	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:16:02.210   16:59:54	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:16:02.210   16:59:54	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:16:02.210   16:59:54	-- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:16:02.210   16:59:54	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:16:02.210   16:59:54	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:16:02.210   16:59:54	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:16:02.210   16:59:54	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:16:02.210   16:59:54	-- bdev/bdev_raid.sh@125 -- # local tmp
00:16:02.210    16:59:54	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:16:02.210    16:59:54	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:16:02.470   16:59:55	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:16:02.470    "name": "Existed_Raid",
00:16:02.470    "uuid": "21003a33-e01b-4c58-91e9-09831dde4d73",
00:16:02.470    "strip_size_kb": 0,
00:16:02.470    "state": "configuring",
00:16:02.470    "raid_level": "raid1",
00:16:02.470    "superblock": true,
00:16:02.470    "num_base_bdevs": 3,
00:16:02.470    "num_base_bdevs_discovered": 1,
00:16:02.470    "num_base_bdevs_operational": 3,
00:16:02.470    "base_bdevs_list": [
00:16:02.470      {
00:16:02.470        "name": "BaseBdev1",
00:16:02.470        "uuid": "7657689b-b73c-44d4-9efd-6e197d6aa46d",
00:16:02.470        "is_configured": true,
00:16:02.470        "data_offset": 2048,
00:16:02.470        "data_size": 63488
00:16:02.470      },
00:16:02.470      {
00:16:02.470        "name": "BaseBdev2",
00:16:02.470        "uuid": "00000000-0000-0000-0000-000000000000",
00:16:02.470        "is_configured": false,
00:16:02.470        "data_offset": 0,
00:16:02.470        "data_size": 0
00:16:02.470      },
00:16:02.470      {
00:16:02.470        "name": "BaseBdev3",
00:16:02.470        "uuid": "00000000-0000-0000-0000-000000000000",
00:16:02.470        "is_configured": false,
00:16:02.470        "data_offset": 0,
00:16:02.470        "data_size": 0
00:16:02.470      }
00:16:02.470    ]
00:16:02.470  }'
00:16:02.470   16:59:55	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:16:02.470   16:59:55	-- common/autotest_common.sh@10 -- # set +x
00:16:03.038   16:59:55	-- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2
00:16:03.296  [2024-11-19 16:59:55.994189] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:16:03.296  BaseBdev2
00:16:03.296   16:59:56	-- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2
00:16:03.296   16:59:56	-- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2
00:16:03.296   16:59:56	-- common/autotest_common.sh@898 -- # local bdev_timeout=
00:16:03.296   16:59:56	-- common/autotest_common.sh@899 -- # local i
00:16:03.296   16:59:56	-- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:16:03.296   16:59:56	-- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:16:03.296   16:59:56	-- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:16:03.554   16:59:56	-- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000
00:16:03.554  [
00:16:03.554    {
00:16:03.554      "name": "BaseBdev2",
00:16:03.554      "aliases": [
00:16:03.554        "5a050b90-d6f8-4c8d-819e-6b317cc6d6d3"
00:16:03.554      ],
00:16:03.554      "product_name": "Malloc disk",
00:16:03.554      "block_size": 512,
00:16:03.554      "num_blocks": 65536,
00:16:03.554      "uuid": "5a050b90-d6f8-4c8d-819e-6b317cc6d6d3",
00:16:03.554      "assigned_rate_limits": {
00:16:03.554        "rw_ios_per_sec": 0,
00:16:03.554        "rw_mbytes_per_sec": 0,
00:16:03.554        "r_mbytes_per_sec": 0,
00:16:03.554        "w_mbytes_per_sec": 0
00:16:03.554      },
00:16:03.554      "claimed": true,
00:16:03.554      "claim_type": "exclusive_write",
00:16:03.554      "zoned": false,
00:16:03.554      "supported_io_types": {
00:16:03.554        "read": true,
00:16:03.554        "write": true,
00:16:03.554        "unmap": true,
00:16:03.554        "write_zeroes": true,
00:16:03.554        "flush": true,
00:16:03.554        "reset": true,
00:16:03.554        "compare": false,
00:16:03.554        "compare_and_write": false,
00:16:03.554        "abort": true,
00:16:03.554        "nvme_admin": false,
00:16:03.554        "nvme_io": false
00:16:03.554      },
00:16:03.554      "memory_domains": [
00:16:03.554        {
00:16:03.554          "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:16:03.554          "dma_device_type": 2
00:16:03.554        }
00:16:03.554      ],
00:16:03.555      "driver_specific": {}
00:16:03.555    }
00:16:03.555  ]
00:16:03.555   16:59:56	-- common/autotest_common.sh@905 -- # return 0
00:16:03.555   16:59:56	-- bdev/bdev_raid.sh@254 -- # (( i++ ))
00:16:03.555   16:59:56	-- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs ))
00:16:03.555   16:59:56	-- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3
00:16:03.555   16:59:56	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:16:03.555   16:59:56	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:16:03.555   16:59:56	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:16:03.555   16:59:56	-- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:16:03.555   16:59:56	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:16:03.555   16:59:56	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:16:03.555   16:59:56	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:16:03.555   16:59:56	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:16:03.555   16:59:56	-- bdev/bdev_raid.sh@125 -- # local tmp
00:16:03.555    16:59:56	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:16:03.813    16:59:56	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:16:03.813   16:59:56	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:16:03.813    "name": "Existed_Raid",
00:16:03.813    "uuid": "21003a33-e01b-4c58-91e9-09831dde4d73",
00:16:03.813    "strip_size_kb": 0,
00:16:03.813    "state": "configuring",
00:16:03.813    "raid_level": "raid1",
00:16:03.813    "superblock": true,
00:16:03.813    "num_base_bdevs": 3,
00:16:03.813    "num_base_bdevs_discovered": 2,
00:16:03.813    "num_base_bdevs_operational": 3,
00:16:03.813    "base_bdevs_list": [
00:16:03.813      {
00:16:03.813        "name": "BaseBdev1",
00:16:03.813        "uuid": "7657689b-b73c-44d4-9efd-6e197d6aa46d",
00:16:03.813        "is_configured": true,
00:16:03.813        "data_offset": 2048,
00:16:03.813        "data_size": 63488
00:16:03.813      },
00:16:03.813      {
00:16:03.813        "name": "BaseBdev2",
00:16:03.813        "uuid": "5a050b90-d6f8-4c8d-819e-6b317cc6d6d3",
00:16:03.813        "is_configured": true,
00:16:03.813        "data_offset": 2048,
00:16:03.813        "data_size": 63488
00:16:03.813      },
00:16:03.813      {
00:16:03.813        "name": "BaseBdev3",
00:16:03.813        "uuid": "00000000-0000-0000-0000-000000000000",
00:16:03.813        "is_configured": false,
00:16:03.813        "data_offset": 0,
00:16:03.813        "data_size": 0
00:16:03.813      }
00:16:03.813    ]
00:16:03.813  }'
00:16:03.813   16:59:56	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:16:03.813   16:59:56	-- common/autotest_common.sh@10 -- # set +x
00:16:04.381   16:59:57	-- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3
00:16:04.640  [2024-11-19 16:59:57.393755] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:16:04.640  [2024-11-19 16:59:57.393985] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006680
00:16:04.640  [2024-11-19 16:59:57.393999] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:16:04.640  [2024-11-19 16:59:57.394138] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002120
00:16:04.640  [2024-11-19 16:59:57.394533] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006680
00:16:04.640  [2024-11-19 16:59:57.394553] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006680
00:16:04.640  BaseBdev3
00:16:04.640  [2024-11-19 16:59:57.394714] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:16:04.640   16:59:57	-- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3
00:16:04.640   16:59:57	-- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3
00:16:04.640   16:59:57	-- common/autotest_common.sh@898 -- # local bdev_timeout=
00:16:04.640   16:59:57	-- common/autotest_common.sh@899 -- # local i
00:16:04.640   16:59:57	-- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:16:04.640   16:59:57	-- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:16:04.640   16:59:57	-- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:16:04.898   16:59:57	-- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000
00:16:05.157  [
00:16:05.157    {
00:16:05.157      "name": "BaseBdev3",
00:16:05.157      "aliases": [
00:16:05.157        "543dc7c9-f366-4737-8129-ed6ad7a3aef6"
00:16:05.157      ],
00:16:05.157      "product_name": "Malloc disk",
00:16:05.157      "block_size": 512,
00:16:05.157      "num_blocks": 65536,
00:16:05.157      "uuid": "543dc7c9-f366-4737-8129-ed6ad7a3aef6",
00:16:05.157      "assigned_rate_limits": {
00:16:05.157        "rw_ios_per_sec": 0,
00:16:05.157        "rw_mbytes_per_sec": 0,
00:16:05.157        "r_mbytes_per_sec": 0,
00:16:05.157        "w_mbytes_per_sec": 0
00:16:05.157      },
00:16:05.157      "claimed": true,
00:16:05.157      "claim_type": "exclusive_write",
00:16:05.157      "zoned": false,
00:16:05.157      "supported_io_types": {
00:16:05.157        "read": true,
00:16:05.157        "write": true,
00:16:05.157        "unmap": true,
00:16:05.157        "write_zeroes": true,
00:16:05.157        "flush": true,
00:16:05.157        "reset": true,
00:16:05.157        "compare": false,
00:16:05.157        "compare_and_write": false,
00:16:05.157        "abort": true,
00:16:05.157        "nvme_admin": false,
00:16:05.157        "nvme_io": false
00:16:05.157      },
00:16:05.157      "memory_domains": [
00:16:05.157        {
00:16:05.157          "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:16:05.157          "dma_device_type": 2
00:16:05.157        }
00:16:05.157      ],
00:16:05.157      "driver_specific": {}
00:16:05.157    }
00:16:05.157  ]
00:16:05.157   16:59:57	-- common/autotest_common.sh@905 -- # return 0
00:16:05.157   16:59:57	-- bdev/bdev_raid.sh@254 -- # (( i++ ))
00:16:05.157   16:59:57	-- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs ))
00:16:05.158   16:59:57	-- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3
00:16:05.158   16:59:57	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:16:05.158   16:59:57	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:16:05.158   16:59:57	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:16:05.158   16:59:57	-- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:16:05.158   16:59:57	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:16:05.158   16:59:57	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:16:05.158   16:59:57	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:16:05.158   16:59:57	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:16:05.158   16:59:57	-- bdev/bdev_raid.sh@125 -- # local tmp
00:16:05.158    16:59:57	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:16:05.158    16:59:57	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:16:05.416   16:59:58	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:16:05.416    "name": "Existed_Raid",
00:16:05.416    "uuid": "21003a33-e01b-4c58-91e9-09831dde4d73",
00:16:05.416    "strip_size_kb": 0,
00:16:05.416    "state": "online",
00:16:05.416    "raid_level": "raid1",
00:16:05.416    "superblock": true,
00:16:05.417    "num_base_bdevs": 3,
00:16:05.417    "num_base_bdevs_discovered": 3,
00:16:05.417    "num_base_bdevs_operational": 3,
00:16:05.417    "base_bdevs_list": [
00:16:05.417      {
00:16:05.417        "name": "BaseBdev1",
00:16:05.417        "uuid": "7657689b-b73c-44d4-9efd-6e197d6aa46d",
00:16:05.417        "is_configured": true,
00:16:05.417        "data_offset": 2048,
00:16:05.417        "data_size": 63488
00:16:05.417      },
00:16:05.417      {
00:16:05.417        "name": "BaseBdev2",
00:16:05.417        "uuid": "5a050b90-d6f8-4c8d-819e-6b317cc6d6d3",
00:16:05.417        "is_configured": true,
00:16:05.417        "data_offset": 2048,
00:16:05.417        "data_size": 63488
00:16:05.417      },
00:16:05.417      {
00:16:05.417        "name": "BaseBdev3",
00:16:05.417        "uuid": "543dc7c9-f366-4737-8129-ed6ad7a3aef6",
00:16:05.417        "is_configured": true,
00:16:05.417        "data_offset": 2048,
00:16:05.417        "data_size": 63488
00:16:05.417      }
00:16:05.417    ]
00:16:05.417  }'
00:16:05.417   16:59:58	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:16:05.417   16:59:58	-- common/autotest_common.sh@10 -- # set +x
00:16:05.983   16:59:58	-- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1
00:16:06.242  [2024-11-19 16:59:58.926307] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:16:06.242   16:59:58	-- bdev/bdev_raid.sh@263 -- # local expected_state
00:16:06.242   16:59:58	-- bdev/bdev_raid.sh@264 -- # has_redundancy raid1
00:16:06.242   16:59:58	-- bdev/bdev_raid.sh@195 -- # case $1 in
00:16:06.242   16:59:58	-- bdev/bdev_raid.sh@196 -- # return 0
00:16:06.242   16:59:58	-- bdev/bdev_raid.sh@267 -- # expected_state=online
00:16:06.242   16:59:58	-- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2
00:16:06.242   16:59:58	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:16:06.242   16:59:58	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:16:06.242   16:59:58	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:16:06.242   16:59:58	-- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:16:06.242   16:59:58	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2
00:16:06.242   16:59:58	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:16:06.242   16:59:58	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:16:06.242   16:59:58	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:16:06.242   16:59:58	-- bdev/bdev_raid.sh@125 -- # local tmp
00:16:06.242    16:59:58	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:16:06.242    16:59:58	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:16:06.501   16:59:59	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:16:06.501    "name": "Existed_Raid",
00:16:06.501    "uuid": "21003a33-e01b-4c58-91e9-09831dde4d73",
00:16:06.501    "strip_size_kb": 0,
00:16:06.501    "state": "online",
00:16:06.501    "raid_level": "raid1",
00:16:06.501    "superblock": true,
00:16:06.501    "num_base_bdevs": 3,
00:16:06.501    "num_base_bdevs_discovered": 2,
00:16:06.501    "num_base_bdevs_operational": 2,
00:16:06.501    "base_bdevs_list": [
00:16:06.501      {
00:16:06.501        "name": null,
00:16:06.501        "uuid": "00000000-0000-0000-0000-000000000000",
00:16:06.501        "is_configured": false,
00:16:06.501        "data_offset": 2048,
00:16:06.501        "data_size": 63488
00:16:06.501      },
00:16:06.501      {
00:16:06.501        "name": "BaseBdev2",
00:16:06.501        "uuid": "5a050b90-d6f8-4c8d-819e-6b317cc6d6d3",
00:16:06.501        "is_configured": true,
00:16:06.501        "data_offset": 2048,
00:16:06.501        "data_size": 63488
00:16:06.501      },
00:16:06.501      {
00:16:06.501        "name": "BaseBdev3",
00:16:06.501        "uuid": "543dc7c9-f366-4737-8129-ed6ad7a3aef6",
00:16:06.501        "is_configured": true,
00:16:06.501        "data_offset": 2048,
00:16:06.501        "data_size": 63488
00:16:06.501      }
00:16:06.501    ]
00:16:06.501  }'
00:16:06.501   16:59:59	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:16:06.501   16:59:59	-- common/autotest_common.sh@10 -- # set +x
00:16:07.067   16:59:59	-- bdev/bdev_raid.sh@273 -- # (( i = 1 ))
00:16:07.067   16:59:59	-- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs ))
00:16:07.067    16:59:59	-- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:16:07.067    16:59:59	-- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]'
00:16:07.326   17:00:00	-- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid
00:16:07.326   17:00:00	-- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:16:07.326   17:00:00	-- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2
00:16:07.584  [2024-11-19 17:00:00.261688] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:16:07.584   17:00:00	-- bdev/bdev_raid.sh@273 -- # (( i++ ))
00:16:07.584   17:00:00	-- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs ))
00:16:07.584    17:00:00	-- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:16:07.584    17:00:00	-- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]'
00:16:07.843   17:00:00	-- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid
00:16:07.843   17:00:00	-- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:16:07.843   17:00:00	-- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3
00:16:08.102  [2024-11-19 17:00:00.754541] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:16:08.102  [2024-11-19 17:00:00.754605] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:16:08.102  [2024-11-19 17:00:00.754665] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:16:08.102  [2024-11-19 17:00:00.767137] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:16:08.103  [2024-11-19 17:00:00.767179] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state offline
00:16:08.103   17:00:00	-- bdev/bdev_raid.sh@273 -- # (( i++ ))
00:16:08.103   17:00:00	-- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs ))
00:16:08.103    17:00:00	-- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:16:08.103    17:00:00	-- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)'
00:16:08.360   17:00:00	-- bdev/bdev_raid.sh@281 -- # raid_bdev=
00:16:08.360   17:00:00	-- bdev/bdev_raid.sh@282 -- # '[' -n '' ']'
00:16:08.360   17:00:00	-- bdev/bdev_raid.sh@287 -- # killprocess 127792
00:16:08.361   17:00:00	-- common/autotest_common.sh@936 -- # '[' -z 127792 ']'
00:16:08.361   17:00:00	-- common/autotest_common.sh@940 -- # kill -0 127792
00:16:08.361    17:00:00	-- common/autotest_common.sh@941 -- # uname
00:16:08.361   17:00:00	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:16:08.361    17:00:01	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 127792
00:16:08.361  killing process with pid 127792
00:16:08.361   17:00:01	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:16:08.361   17:00:01	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:16:08.361   17:00:01	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 127792'
00:16:08.361   17:00:01	-- common/autotest_common.sh@955 -- # kill 127792
00:16:08.361   17:00:01	-- common/autotest_common.sh@960 -- # wait 127792
00:16:08.361  [2024-11-19 17:00:01.029605] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:16:08.361  [2024-11-19 17:00:01.029739] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:16:08.619  ************************************
00:16:08.619  END TEST raid_state_function_test_sb
00:16:08.619  ************************************
00:16:08.619   17:00:01	-- bdev/bdev_raid.sh@289 -- # return 0
00:16:08.619  
00:16:08.619  real	0m11.399s
00:16:08.619  user	0m20.345s
00:16:08.619  sys	0m1.950s
00:16:08.619   17:00:01	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:16:08.619   17:00:01	-- common/autotest_common.sh@10 -- # set +x
00:16:08.619   17:00:01	-- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid1 3
00:16:08.619   17:00:01	-- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']'
00:16:08.619   17:00:01	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:16:08.619   17:00:01	-- common/autotest_common.sh@10 -- # set +x
00:16:08.619  ************************************
00:16:08.619  START TEST raid_superblock_test
00:16:08.619  ************************************
00:16:08.619   17:00:01	-- common/autotest_common.sh@1114 -- # raid_superblock_test raid1 3
00:16:08.619   17:00:01	-- bdev/bdev_raid.sh@338 -- # local raid_level=raid1
00:16:08.619   17:00:01	-- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=3
00:16:08.619   17:00:01	-- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=()
00:16:08.619   17:00:01	-- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc
00:16:08.619   17:00:01	-- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=()
00:16:08.619   17:00:01	-- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt
00:16:08.619   17:00:01	-- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=()
00:16:08.619   17:00:01	-- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid
00:16:08.619   17:00:01	-- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1
00:16:08.619   17:00:01	-- bdev/bdev_raid.sh@344 -- # local strip_size
00:16:08.619   17:00:01	-- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg
00:16:08.619   17:00:01	-- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid
00:16:08.619   17:00:01	-- bdev/bdev_raid.sh@347 -- # local raid_bdev
00:16:08.619   17:00:01	-- bdev/bdev_raid.sh@349 -- # '[' raid1 '!=' raid1 ']'
00:16:08.619   17:00:01	-- bdev/bdev_raid.sh@353 -- # strip_size=0
00:16:08.619   17:00:01	-- bdev/bdev_raid.sh@357 -- # raid_pid=128165
00:16:08.619   17:00:01	-- bdev/bdev_raid.sh@358 -- # waitforlisten 128165 /var/tmp/spdk-raid.sock
00:16:08.619   17:00:01	-- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid
00:16:08.619   17:00:01	-- common/autotest_common.sh@829 -- # '[' -z 128165 ']'
00:16:08.619   17:00:01	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock
00:16:08.619   17:00:01	-- common/autotest_common.sh@834 -- # local max_retries=100
00:16:08.619  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...
00:16:08.619   17:00:01	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...'
00:16:08.619   17:00:01	-- common/autotest_common.sh@838 -- # xtrace_disable
00:16:08.619   17:00:01	-- common/autotest_common.sh@10 -- # set +x
00:16:08.619  [2024-11-19 17:00:01.447134] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:16:08.619  [2024-11-19 17:00:01.447349] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid128165 ]
00:16:08.878  [2024-11-19 17:00:01.592363] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:16:08.878  [2024-11-19 17:00:01.648706] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:16:08.878  [2024-11-19 17:00:01.693721] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:16:09.814   17:00:02	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:16:09.814   17:00:02	-- common/autotest_common.sh@862 -- # return 0
00:16:09.814   17:00:02	-- bdev/bdev_raid.sh@361 -- # (( i = 1 ))
00:16:09.814   17:00:02	-- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs ))
00:16:09.814   17:00:02	-- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1
00:16:09.814   17:00:02	-- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1
00:16:09.814   17:00:02	-- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001
00:16:09.814   17:00:02	-- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc)
00:16:09.814   17:00:02	-- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt)
00:16:09.814   17:00:02	-- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:16:09.814   17:00:02	-- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1
00:16:09.814  malloc1
00:16:09.814   17:00:02	-- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:16:10.073  [2024-11-19 17:00:02.707790] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:16:10.073  [2024-11-19 17:00:02.707926] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:16:10.073  [2024-11-19 17:00:02.707976] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000005a80
00:16:10.073  [2024-11-19 17:00:02.708048] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:16:10.073  [2024-11-19 17:00:02.710852] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:16:10.073  [2024-11-19 17:00:02.710964] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:16:10.073  pt1
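raid_superblock_test builds each member as a malloc bdev wrapped in a passthru bdev with a fixed, test-chosen UUID (00000000-0000-0000-0000-000000000001 for pt1), so member identity stays deterministic across the test. The per-member sequence just traced for pt1, repeated below for pt2 and pt3:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        bdev_malloc_create 32 512 -b malloc1
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001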
00:16:10.073   17:00:02	-- bdev/bdev_raid.sh@361 -- # (( i++ ))
00:16:10.073   17:00:02	-- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs ))
00:16:10.073   17:00:02	-- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2
00:16:10.073   17:00:02	-- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2
00:16:10.073   17:00:02	-- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002
00:16:10.073   17:00:02	-- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc)
00:16:10.073   17:00:02	-- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt)
00:16:10.073   17:00:02	-- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:16:10.073   17:00:02	-- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2
00:16:10.332  malloc2
00:16:10.332   17:00:02	-- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:16:10.590  [2024-11-19 17:00:03.197479] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:16:10.590  [2024-11-19 17:00:03.197578] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:16:10.590  [2024-11-19 17:00:03.197622] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680
00:16:10.590  [2024-11-19 17:00:03.197668] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:16:10.590  [2024-11-19 17:00:03.200349] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:16:10.590  [2024-11-19 17:00:03.200441] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:16:10.590  pt2
00:16:10.590   17:00:03	-- bdev/bdev_raid.sh@361 -- # (( i++ ))
00:16:10.590   17:00:03	-- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs ))
00:16:10.590   17:00:03	-- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3
00:16:10.590   17:00:03	-- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3
00:16:10.590   17:00:03	-- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003
00:16:10.591   17:00:03	-- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc)
00:16:10.591   17:00:03	-- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt)
00:16:10.591   17:00:03	-- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:16:10.591   17:00:03	-- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3
00:16:10.591  malloc3
00:16:10.591   17:00:03	-- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:16:10.849  [2024-11-19 17:00:03.670595] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:16:10.849  [2024-11-19 17:00:03.670699] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:16:10.849  [2024-11-19 17:00:03.670741] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
00:16:10.849  [2024-11-19 17:00:03.670783] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:16:10.849  [2024-11-19 17:00:03.673484] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:16:10.849  [2024-11-19 17:00:03.673549] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:16:10.849  pt3
00:16:10.849   17:00:03	-- bdev/bdev_raid.sh@361 -- # (( i++ ))
00:16:10.849   17:00:03	-- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs ))
00:16:10.849   17:00:03	-- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2 pt3' -n raid_bdev1 -s
00:16:11.108  [2024-11-19 17:00:03.866765] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:16:11.108  [2024-11-19 17:00:03.869216] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:16:11.108  [2024-11-19 17:00:03.869286] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:16:11.108  [2024-11-19 17:00:03.869517] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007880
00:16:11.108  [2024-11-19 17:00:03.869531] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:16:11.108  [2024-11-19 17:00:03.869724] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000022c0
00:16:11.108  [2024-11-19 17:00:03.870186] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007880
00:16:11.108  [2024-11-19 17:00:03.870200] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000007880
00:16:11.108  [2024-11-19 17:00:03.870370] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:16:11.108   17:00:03	-- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3
00:16:11.108   17:00:03	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:16:11.108   17:00:03	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:16:11.108   17:00:03	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:16:11.108   17:00:03	-- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:16:11.108   17:00:03	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:16:11.108   17:00:03	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:16:11.108   17:00:03	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:16:11.108   17:00:03	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:16:11.108   17:00:03	-- bdev/bdev_raid.sh@125 -- # local tmp
00:16:11.108    17:00:03	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:16:11.108    17:00:03	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:16:11.367   17:00:04	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:16:11.367    "name": "raid_bdev1",
00:16:11.367    "uuid": "362a787b-0266-49ca-8def-f5e3fb4c816a",
00:16:11.367    "strip_size_kb": 0,
00:16:11.367    "state": "online",
00:16:11.367    "raid_level": "raid1",
00:16:11.367    "superblock": true,
00:16:11.367    "num_base_bdevs": 3,
00:16:11.367    "num_base_bdevs_discovered": 3,
00:16:11.367    "num_base_bdevs_operational": 3,
00:16:11.367    "base_bdevs_list": [
00:16:11.367      {
00:16:11.367        "name": "pt1",
00:16:11.367        "uuid": "e22e36c4-4c4a-58ef-a8a7-822fd2865876",
00:16:11.367        "is_configured": true,
00:16:11.367        "data_offset": 2048,
00:16:11.367        "data_size": 63488
00:16:11.367      },
00:16:11.367      {
00:16:11.367        "name": "pt2",
00:16:11.367        "uuid": "1b8c3ef5-88e4-5f26-9762-811ebe5a819f",
00:16:11.367        "is_configured": true,
00:16:11.367        "data_offset": 2048,
00:16:11.367        "data_size": 63488
00:16:11.367      },
00:16:11.367      {
00:16:11.367        "name": "pt3",
00:16:11.367        "uuid": "94c51c29-a6fe-5759-b064-36a46d64f4f5",
00:16:11.367        "is_configured": true,
00:16:11.367        "data_offset": 2048,
00:16:11.367        "data_size": 63488
00:16:11.367      }
00:16:11.367    ]
00:16:11.367  }'
00:16:11.367   17:00:04	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:16:11.367   17:00:04	-- common/autotest_common.sh@10 -- # set +x
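verify_raid_bdev_state works by pulling the full bdev_raid_get_bdevs listing, selecting the named raid with the jq filter traced above, and comparing fields such as .state, .raid_level and .num_base_bdevs_discovered against the expected values. A condensed sketch of that check for this invocation (online, 3 of 3); the assertion style is illustrative, not the test's exact code:

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  info=$($rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
  [ "$(jq -r '.state' <<<"$info")" = online ] || exit 1
  [ "$(jq -r '.num_base_bdevs_discovered' <<<"$info")" -eq 3 ] || exit 1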
00:16:12.304    17:00:04	-- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1
00:16:12.304    17:00:04	-- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid'
00:16:12.304  [2024-11-19 17:00:05.039096] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:16:12.304   17:00:05	-- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=362a787b-0266-49ca-8def-f5e3fb4c816a
00:16:12.304   17:00:05	-- bdev/bdev_raid.sh@380 -- # '[' -z 362a787b-0266-49ca-8def-f5e3fb4c816a ']'
00:16:12.304   17:00:05	-- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1
00:16:12.563  [2024-11-19 17:00:05.286921] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:16:12.563  [2024-11-19 17:00:05.286965] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:16:12.563  [2024-11-19 17:00:05.287063] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:16:12.563  [2024-11-19 17:00:05.287149] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:16:12.563  [2024-11-19 17:00:05.287159] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007880 name raid_bdev1, state offline
00:16:12.563    17:00:05	-- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:16:12.563    17:00:05	-- bdev/bdev_raid.sh@386 -- # jq -r '.[]'
00:16:12.822   17:00:05	-- bdev/bdev_raid.sh@386 -- # raid_bdev=
00:16:12.822   17:00:05	-- bdev/bdev_raid.sh@387 -- # '[' -n '' ']'
00:16:12.822   17:00:05	-- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}"
00:16:12.822   17:00:05	-- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1
00:16:13.080   17:00:05	-- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}"
00:16:13.080   17:00:05	-- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2
00:16:13.338   17:00:05	-- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}"
00:16:13.338   17:00:05	-- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3
00:16:13.338    17:00:06	-- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs
00:16:13.338    17:00:06	-- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any'
00:16:13.596   17:00:06	-- bdev/bdev_raid.sh@395 -- # '[' false == true ']'
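Teardown runs in the reverse order of construction: delete the raid bdev (the log above shows the online to offline transition and the destruct path), then delete each passthru, and finally confirm that no bdev with product_name "passthru" remains. The same sequence, compressed into a sketch with the RPCs from the trace:

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  $rpc bdev_raid_delete raid_bdev1
  for pt in pt1 pt2 pt3; do $rpc bdev_passthru_delete "$pt"; done
  # Expect no passthru bdevs left (jq -e exits nonzero if any remain):
  $rpc bdev_get_bdevs | jq -e '[.[] | select(.product_name == "passthru")] | any | not'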
00:16:13.596   17:00:06	-- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1
00:16:13.596   17:00:06	-- common/autotest_common.sh@650 -- # local es=0
00:16:13.596   17:00:06	-- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1
00:16:13.596   17:00:06	-- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:16:13.596   17:00:06	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:16:13.596    17:00:06	-- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:16:13.596   17:00:06	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:16:13.596    17:00:06	-- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:16:13.596   17:00:06	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:16:13.596   17:00:06	-- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:16:13.596   17:00:06	-- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]]
00:16:13.596   17:00:06	-- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1
00:16:13.855  [2024-11-19 17:00:06.623215] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed
00:16:13.855  [2024-11-19 17:00:06.625470] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
00:16:13.855  [2024-11-19 17:00:06.625527] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed
00:16:13.855  [2024-11-19 17:00:06.625577] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1
00:16:13.855  [2024-11-19 17:00:06.625653] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2
00:16:13.855  [2024-11-19 17:00:06.625682] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3
00:16:13.855  [2024-11-19 17:00:06.625728] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:16:13.855  [2024-11-19 17:00:06.625739] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007e80 name raid_bdev1, state configuring
00:16:13.855  request:
00:16:13.855  {
00:16:13.855    "name": "raid_bdev1",
00:16:13.855    "raid_level": "raid1",
00:16:13.855    "base_bdevs": [
00:16:13.855      "malloc1",
00:16:13.855      "malloc2",
00:16:13.855      "malloc3"
00:16:13.855    ],
00:16:13.855    "superblock": false,
00:16:13.855    "method": "bdev_raid_create",
00:16:13.855    "req_id": 1
00:16:13.855  }
00:16:13.855  Got JSON-RPC error response
00:16:13.855  response:
00:16:13.855  {
00:16:13.855    "code": -17,
00:16:13.855    "message": "Failed to create RAID bdev raid_bdev1: File exists"
00:16:13.855  }
00:16:13.855   17:00:06	-- common/autotest_common.sh@653 -- # es=1
00:16:13.855   17:00:06	-- common/autotest_common.sh@661 -- # (( es > 128 ))
00:16:13.855   17:00:06	-- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:16:13.855   17:00:06	-- common/autotest_common.sh@677 -- # (( !es == 0 ))
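This failure is the point of the NOT wrapper: raid_bdev1 was created with -s, so each malloc still carries a raid superblock underneath the (now deleted) passthru layer, and creating a new raid directly on malloc1..malloc3 is refused with -17 "File exists". NOT, the autotest_common.sh helper whose expansion fills the trace above, runs the command and inverts its exit status, so the test passes only when the RPC fails. A standalone equivalent, using plain shell negation instead of the harness helper:

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  # Must fail: the malloc bdevs' superblocks already describe raid_bdev1.
  ! $rpc bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1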
00:16:13.855    17:00:06	-- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:16:13.855    17:00:06	-- bdev/bdev_raid.sh@403 -- # jq -r '.[]'
00:16:14.113   17:00:06	-- bdev/bdev_raid.sh@403 -- # raid_bdev=
00:16:14.113   17:00:06	-- bdev/bdev_raid.sh@404 -- # '[' -n '' ']'
00:16:14.113   17:00:06	-- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:16:14.372  [2024-11-19 17:00:07.039197] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:16:14.372  [2024-11-19 17:00:07.039288] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:16:14.372  [2024-11-19 17:00:07.039326] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480
00:16:14.372  [2024-11-19 17:00:07.039350] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:16:14.372  [2024-11-19 17:00:07.041962] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:16:14.372  [2024-11-19 17:00:07.042016] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:16:14.372  [2024-11-19 17:00:07.042118] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1
00:16:14.372  [2024-11-19 17:00:07.042179] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:16:14.372  pt1
00:16:14.372   17:00:07	-- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3
00:16:14.372   17:00:07	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:16:14.372   17:00:07	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:16:14.372   17:00:07	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:16:14.372   17:00:07	-- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:16:14.372   17:00:07	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:16:14.372   17:00:07	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:16:14.372   17:00:07	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:16:14.372   17:00:07	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:16:14.372   17:00:07	-- bdev/bdev_raid.sh@125 -- # local tmp
00:16:14.372    17:00:07	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:16:14.372    17:00:07	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:16:14.630   17:00:07	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:16:14.630    "name": "raid_bdev1",
00:16:14.630    "uuid": "362a787b-0266-49ca-8def-f5e3fb4c816a",
00:16:14.630    "strip_size_kb": 0,
00:16:14.630    "state": "configuring",
00:16:14.630    "raid_level": "raid1",
00:16:14.630    "superblock": true,
00:16:14.630    "num_base_bdevs": 3,
00:16:14.630    "num_base_bdevs_discovered": 1,
00:16:14.630    "num_base_bdevs_operational": 3,
00:16:14.630    "base_bdevs_list": [
00:16:14.630      {
00:16:14.630        "name": "pt1",
00:16:14.630        "uuid": "e22e36c4-4c4a-58ef-a8a7-822fd2865876",
00:16:14.630        "is_configured": true,
00:16:14.630        "data_offset": 2048,
00:16:14.630        "data_size": 63488
00:16:14.630      },
00:16:14.630      {
00:16:14.630        "name": null,
00:16:14.630        "uuid": "1b8c3ef5-88e4-5f26-9762-811ebe5a819f",
00:16:14.630        "is_configured": false,
00:16:14.630        "data_offset": 2048,
00:16:14.630        "data_size": 63488
00:16:14.630      },
00:16:14.630      {
00:16:14.630        "name": null,
00:16:14.630        "uuid": "94c51c29-a6fe-5759-b064-36a46d64f4f5",
00:16:14.630        "is_configured": false,
00:16:14.630        "data_offset": 2048,
00:16:14.630        "data_size": 63488
00:16:14.630      }
00:16:14.630    ]
00:16:14.630  }'
00:16:14.630   17:00:07	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:16:14.630   17:00:07	-- common/autotest_common.sh@10 -- # set +x
00:16:15.198   17:00:07	-- bdev/bdev_raid.sh@414 -- # '[' 3 -gt 2 ']'
00:16:15.198   17:00:07	-- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:16:15.457  [2024-11-19 17:00:08.067419] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:16:15.457  [2024-11-19 17:00:08.067555] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:16:15.457  [2024-11-19 17:00:08.067605] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80
00:16:15.457  [2024-11-19 17:00:08.067644] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:16:15.457  [2024-11-19 17:00:08.068077] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:16:15.457  [2024-11-19 17:00:08.068156] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:16:15.457  [2024-11-19 17:00:08.068260] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2
00:16:15.457  [2024-11-19 17:00:08.068283] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:16:15.457  pt2
00:16:15.457   17:00:08	-- bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2
00:16:15.715  [2024-11-19 17:00:08.339518] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2
00:16:15.715   17:00:08	-- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3
00:16:15.715   17:00:08	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:16:15.715   17:00:08	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:16:15.715   17:00:08	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:16:15.715   17:00:08	-- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:16:15.715   17:00:08	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:16:15.715   17:00:08	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:16:15.715   17:00:08	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:16:15.715   17:00:08	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:16:15.715   17:00:08	-- bdev/bdev_raid.sh@125 -- # local tmp
00:16:15.715    17:00:08	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:16:15.715    17:00:08	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:16:15.715   17:00:08	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:16:15.715    "name": "raid_bdev1",
00:16:15.715    "uuid": "362a787b-0266-49ca-8def-f5e3fb4c816a",
00:16:15.715    "strip_size_kb": 0,
00:16:15.715    "state": "configuring",
00:16:15.715    "raid_level": "raid1",
00:16:15.715    "superblock": true,
00:16:15.715    "num_base_bdevs": 3,
00:16:15.715    "num_base_bdevs_discovered": 1,
00:16:15.715    "num_base_bdevs_operational": 3,
00:16:15.715    "base_bdevs_list": [
00:16:15.715      {
00:16:15.715        "name": "pt1",
00:16:15.715        "uuid": "e22e36c4-4c4a-58ef-a8a7-822fd2865876",
00:16:15.715        "is_configured": true,
00:16:15.715        "data_offset": 2048,
00:16:15.715        "data_size": 63488
00:16:15.715      },
00:16:15.715      {
00:16:15.715        "name": null,
00:16:15.715        "uuid": "1b8c3ef5-88e4-5f26-9762-811ebe5a819f",
00:16:15.715        "is_configured": false,
00:16:15.715        "data_offset": 2048,
00:16:15.715        "data_size": 63488
00:16:15.715      },
00:16:15.715      {
00:16:15.715        "name": null,
00:16:15.715        "uuid": "94c51c29-a6fe-5759-b064-36a46d64f4f5",
00:16:15.716        "is_configured": false,
00:16:15.716        "data_offset": 2048,
00:16:15.716        "data_size": 63488
00:16:15.716      }
00:16:15.716    ]
00:16:15.716  }'
00:16:15.716   17:00:08	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:16:15.716   17:00:08	-- common/autotest_common.sh@10 -- # set +x
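The step just traced (script lines @416-418) re-creates pt2 and then deletes it straight away: registering pt2 triggers superblock examine, the configuring raid claims it (the "bdev pt2 is claimed" debug line), and the immediate bdev_passthru_delete exercises _raid_bdev_remove_base_bdev on an array that is not yet online. The verification above confirms the raid survives this and stays "configuring" with only pt1 discovered. The pair of RPCs, as a sketch:

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  $rpc bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
  $rpc bdev_passthru_delete pt2    # raid_bdev1 must remain "configuring", 1 of 3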
00:16:16.310   17:00:09	-- bdev/bdev_raid.sh@422 -- # (( i = 1 ))
00:16:16.310   17:00:09	-- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs ))
00:16:16.310   17:00:09	-- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:16:16.568  [2024-11-19 17:00:09.311424] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:16:16.568  [2024-11-19 17:00:09.311534] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:16:16.568  [2024-11-19 17:00:09.311568] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080
00:16:16.568  [2024-11-19 17:00:09.311595] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:16:16.568  [2024-11-19 17:00:09.312075] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:16:16.568  [2024-11-19 17:00:09.312111] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:16:16.568  [2024-11-19 17:00:09.312204] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2
00:16:16.568  [2024-11-19 17:00:09.312225] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:16:16.568  pt2
00:16:16.568   17:00:09	-- bdev/bdev_raid.sh@422 -- # (( i++ ))
00:16:16.568   17:00:09	-- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs ))
00:16:16.568   17:00:09	-- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:16:16.827  [2024-11-19 17:00:09.499500] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:16:16.827  [2024-11-19 17:00:09.499601] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:16:16.827  [2024-11-19 17:00:09.499638] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380
00:16:16.827  [2024-11-19 17:00:09.499665] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:16:16.827  [2024-11-19 17:00:09.500092] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:16:16.827  [2024-11-19 17:00:09.500126] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:16:16.827  [2024-11-19 17:00:09.500215] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3
00:16:16.827  [2024-11-19 17:00:09.500235] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:16:16.827  [2024-11-19 17:00:09.500362] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008a80
00:16:16.827  [2024-11-19 17:00:09.500372] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:16:16.827  [2024-11-19 17:00:09.500442] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000026d0
00:16:16.827  [2024-11-19 17:00:09.500726] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008a80
00:16:16.827  [2024-11-19 17:00:09.500737] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008a80
00:16:16.827  [2024-11-19 17:00:09.500830] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:16:16.827  pt3
00:16:16.827   17:00:09	-- bdev/bdev_raid.sh@422 -- # (( i++ ))
00:16:16.827   17:00:09	-- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs ))
00:16:16.827   17:00:09	-- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3
00:16:16.827   17:00:09	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:16:16.827   17:00:09	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:16:16.827   17:00:09	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:16:16.827   17:00:09	-- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:16:16.827   17:00:09	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:16:16.827   17:00:09	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:16:16.827   17:00:09	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:16:16.827   17:00:09	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:16:16.827   17:00:09	-- bdev/bdev_raid.sh@125 -- # local tmp
00:16:16.827    17:00:09	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:16:16.827    17:00:09	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:16:17.086   17:00:09	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:16:17.086    "name": "raid_bdev1",
00:16:17.086    "uuid": "362a787b-0266-49ca-8def-f5e3fb4c816a",
00:16:17.086    "strip_size_kb": 0,
00:16:17.086    "state": "online",
00:16:17.086    "raid_level": "raid1",
00:16:17.086    "superblock": true,
00:16:17.086    "num_base_bdevs": 3,
00:16:17.086    "num_base_bdevs_discovered": 3,
00:16:17.086    "num_base_bdevs_operational": 3,
00:16:17.086    "base_bdevs_list": [
00:16:17.086      {
00:16:17.086        "name": "pt1",
00:16:17.087        "uuid": "e22e36c4-4c4a-58ef-a8a7-822fd2865876",
00:16:17.087        "is_configured": true,
00:16:17.087        "data_offset": 2048,
00:16:17.087        "data_size": 63488
00:16:17.087      },
00:16:17.087      {
00:16:17.087        "name": "pt2",
00:16:17.087        "uuid": "1b8c3ef5-88e4-5f26-9762-811ebe5a819f",
00:16:17.087        "is_configured": true,
00:16:17.087        "data_offset": 2048,
00:16:17.087        "data_size": 63488
00:16:17.087      },
00:16:17.087      {
00:16:17.087        "name": "pt3",
00:16:17.087        "uuid": "94c51c29-a6fe-5759-b064-36a46d64f4f5",
00:16:17.087        "is_configured": true,
00:16:17.087        "data_offset": 2048,
00:16:17.087        "data_size": 63488
00:16:17.087      }
00:16:17.087    ]
00:16:17.087  }'
00:16:17.087   17:00:09	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:16:17.087   17:00:09	-- common/autotest_common.sh@10 -- # set +x
00:16:17.654    17:00:10	-- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1
00:16:17.654    17:00:10	-- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid'
00:16:17.912  [2024-11-19 17:00:10.627875] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:16:17.912   17:00:10	-- bdev/bdev_raid.sh@430 -- # '[' 362a787b-0266-49ca-8def-f5e3fb4c816a '!=' 362a787b-0266-49ca-8def-f5e3fb4c816a ']'
00:16:17.912   17:00:10	-- bdev/bdev_raid.sh@434 -- # has_redundancy raid1
00:16:17.912   17:00:10	-- bdev/bdev_raid.sh@195 -- # case $1 in
00:16:17.912   17:00:10	-- bdev/bdev_raid.sh@196 -- # return 0
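has_redundancy gates the removal test that follows: for raid1 it returns 0 (the case/return pair above), meaning the level tolerates losing a member, so the script may delete pt1 out from under the online array and still expect it to stay online. After the removal, the state dump below shows raid1 running degraded with 2 of 3 members and the freed slot reported as a null entry with the all-zero UUID. The degraded check, sketched with the same RPCs:

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  $rpc bdev_passthru_delete pt1    # hot-remove one raid1 member
  $rpc bdev_raid_get_bdevs all | jq -r \
      '.[] | select(.name == "raid_bdev1") | "\(.state) \(.num_base_bdevs_discovered)"'
  # expected: "online 2"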
00:16:17.912   17:00:10	-- bdev/bdev_raid.sh@436 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1
00:16:18.170  [2024-11-19 17:00:10.859732] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt1
00:16:18.170   17:00:10	-- bdev/bdev_raid.sh@439 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:16:18.170   17:00:10	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:16:18.170   17:00:10	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:16:18.170   17:00:10	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:16:18.170   17:00:10	-- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:16:18.170   17:00:10	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2
00:16:18.171   17:00:10	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:16:18.171   17:00:10	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:16:18.171   17:00:10	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:16:18.171   17:00:10	-- bdev/bdev_raid.sh@125 -- # local tmp
00:16:18.171    17:00:10	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:16:18.171    17:00:10	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:16:18.429   17:00:11	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:16:18.429    "name": "raid_bdev1",
00:16:18.429    "uuid": "362a787b-0266-49ca-8def-f5e3fb4c816a",
00:16:18.429    "strip_size_kb": 0,
00:16:18.429    "state": "online",
00:16:18.429    "raid_level": "raid1",
00:16:18.429    "superblock": true,
00:16:18.429    "num_base_bdevs": 3,
00:16:18.429    "num_base_bdevs_discovered": 2,
00:16:18.429    "num_base_bdevs_operational": 2,
00:16:18.429    "base_bdevs_list": [
00:16:18.429      {
00:16:18.429        "name": null,
00:16:18.429        "uuid": "00000000-0000-0000-0000-000000000000",
00:16:18.429        "is_configured": false,
00:16:18.429        "data_offset": 2048,
00:16:18.429        "data_size": 63488
00:16:18.429      },
00:16:18.429      {
00:16:18.429        "name": "pt2",
00:16:18.429        "uuid": "1b8c3ef5-88e4-5f26-9762-811ebe5a819f",
00:16:18.429        "is_configured": true,
00:16:18.429        "data_offset": 2048,
00:16:18.429        "data_size": 63488
00:16:18.429      },
00:16:18.429      {
00:16:18.429        "name": "pt3",
00:16:18.429        "uuid": "94c51c29-a6fe-5759-b064-36a46d64f4f5",
00:16:18.429        "is_configured": true,
00:16:18.429        "data_offset": 2048,
00:16:18.429        "data_size": 63488
00:16:18.429      }
00:16:18.429    ]
00:16:18.429  }'
00:16:18.429   17:00:11	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:16:18.429   17:00:11	-- common/autotest_common.sh@10 -- # set +x
00:16:18.996   17:00:11	-- bdev/bdev_raid.sh@442 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1
00:16:19.254  [2024-11-19 17:00:12.055957] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:16:19.254  [2024-11-19 17:00:12.055999] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:16:19.254  [2024-11-19 17:00:12.056069] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:16:19.254  [2024-11-19 17:00:12.056136] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:16:19.254  [2024-11-19 17:00:12.056146] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008a80 name raid_bdev1, state offline
00:16:19.254    17:00:12	-- bdev/bdev_raid.sh@443 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:16:19.254    17:00:12	-- bdev/bdev_raid.sh@443 -- # jq -r '.[]'
00:16:19.513   17:00:12	-- bdev/bdev_raid.sh@443 -- # raid_bdev=
00:16:19.513   17:00:12	-- bdev/bdev_raid.sh@444 -- # '[' -n '' ']'
00:16:19.513   17:00:12	-- bdev/bdev_raid.sh@449 -- # (( i = 1 ))
00:16:19.513   17:00:12	-- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs ))
00:16:19.513   17:00:12	-- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2
00:16:19.771   17:00:12	-- bdev/bdev_raid.sh@449 -- # (( i++ ))
00:16:19.771   17:00:12	-- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs ))
00:16:19.771   17:00:12	-- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3
00:16:20.030   17:00:12	-- bdev/bdev_raid.sh@449 -- # (( i++ ))
00:16:20.030   17:00:12	-- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs ))
00:16:20.030   17:00:12	-- bdev/bdev_raid.sh@454 -- # (( i = 1 ))
00:16:20.030   17:00:12	-- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 ))
00:16:20.030   17:00:12	-- bdev/bdev_raid.sh@455 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:16:20.288  [2024-11-19 17:00:13.020164] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:16:20.288  [2024-11-19 17:00:13.020255] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:16:20.288  [2024-11-19 17:00:13.020294] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680
00:16:20.288  [2024-11-19 17:00:13.020316] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:16:20.288  [2024-11-19 17:00:13.023027] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:16:20.288  [2024-11-19 17:00:13.023087] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:16:20.288  [2024-11-19 17:00:13.023186] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2
00:16:20.288  [2024-11-19 17:00:13.023230] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:16:20.288  pt2
00:16:20.288   17:00:13	-- bdev/bdev_raid.sh@458 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2
00:16:20.288   17:00:13	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:16:20.288   17:00:13	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:16:20.288   17:00:13	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:16:20.288   17:00:13	-- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:16:20.288   17:00:13	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2
00:16:20.288   17:00:13	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:16:20.288   17:00:13	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:16:20.289   17:00:13	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:16:20.289   17:00:13	-- bdev/bdev_raid.sh@125 -- # local tmp
00:16:20.289    17:00:13	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:16:20.289    17:00:13	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:16:20.547   17:00:13	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:16:20.547    "name": "raid_bdev1",
00:16:20.547    "uuid": "362a787b-0266-49ca-8def-f5e3fb4c816a",
00:16:20.547    "strip_size_kb": 0,
00:16:20.547    "state": "configuring",
00:16:20.547    "raid_level": "raid1",
00:16:20.547    "superblock": true,
00:16:20.547    "num_base_bdevs": 3,
00:16:20.547    "num_base_bdevs_discovered": 1,
00:16:20.547    "num_base_bdevs_operational": 2,
00:16:20.547    "base_bdevs_list": [
00:16:20.547      {
00:16:20.547        "name": null,
00:16:20.547        "uuid": "00000000-0000-0000-0000-000000000000",
00:16:20.547        "is_configured": false,
00:16:20.547        "data_offset": 2048,
00:16:20.547        "data_size": 63488
00:16:20.547      },
00:16:20.547      {
00:16:20.547        "name": "pt2",
00:16:20.547        "uuid": "1b8c3ef5-88e4-5f26-9762-811ebe5a819f",
00:16:20.547        "is_configured": true,
00:16:20.547        "data_offset": 2048,
00:16:20.547        "data_size": 63488
00:16:20.547      },
00:16:20.547      {
00:16:20.547        "name": null,
00:16:20.547        "uuid": "94c51c29-a6fe-5759-b064-36a46d64f4f5",
00:16:20.547        "is_configured": false,
00:16:20.547        "data_offset": 2048,
00:16:20.547        "data_size": 63488
00:16:20.547      }
00:16:20.547    ]
00:16:20.547  }'
00:16:20.547   17:00:13	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:16:20.547   17:00:13	-- common/autotest_common.sh@10 -- # set +x
00:16:21.116   17:00:13	-- bdev/bdev_raid.sh@454 -- # (( i++ ))
00:16:21.116   17:00:13	-- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 ))
00:16:21.116   17:00:13	-- bdev/bdev_raid.sh@462 -- # i=2
00:16:21.116   17:00:13	-- bdev/bdev_raid.sh@463 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:16:21.375  [2024-11-19 17:00:14.048383] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:16:21.375  [2024-11-19 17:00:14.048476] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:16:21.375  [2024-11-19 17:00:14.048533] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80
00:16:21.375  [2024-11-19 17:00:14.048556] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:16:21.375  [2024-11-19 17:00:14.048985] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:16:21.375  [2024-11-19 17:00:14.049030] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:16:21.375  [2024-11-19 17:00:14.049118] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3
00:16:21.375  [2024-11-19 17:00:14.049138] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:16:21.375  [2024-11-19 17:00:14.049230] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009c80
00:16:21.375  [2024-11-19 17:00:14.049238] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:16:21.375  [2024-11-19 17:00:14.049299] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002940
00:16:21.375  [2024-11-19 17:00:14.049570] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009c80
00:16:21.375  [2024-11-19 17:00:14.049589] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009c80
00:16:21.375  [2024-11-19 17:00:14.049681] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:16:21.375  pt3
00:16:21.375   17:00:14	-- bdev/bdev_raid.sh@466 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:16:21.375   17:00:14	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:16:21.375   17:00:14	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:16:21.375   17:00:14	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:16:21.375   17:00:14	-- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:16:21.375   17:00:14	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2
00:16:21.375   17:00:14	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:16:21.375   17:00:14	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:16:21.375   17:00:14	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:16:21.375   17:00:14	-- bdev/bdev_raid.sh@125 -- # local tmp
00:16:21.375    17:00:14	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:16:21.375    17:00:14	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:16:21.635   17:00:14	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:16:21.635    "name": "raid_bdev1",
00:16:21.635    "uuid": "362a787b-0266-49ca-8def-f5e3fb4c816a",
00:16:21.635    "strip_size_kb": 0,
00:16:21.635    "state": "online",
00:16:21.635    "raid_level": "raid1",
00:16:21.635    "superblock": true,
00:16:21.635    "num_base_bdevs": 3,
00:16:21.635    "num_base_bdevs_discovered": 2,
00:16:21.635    "num_base_bdevs_operational": 2,
00:16:21.635    "base_bdevs_list": [
00:16:21.635      {
00:16:21.635        "name": null,
00:16:21.635        "uuid": "00000000-0000-0000-0000-000000000000",
00:16:21.635        "is_configured": false,
00:16:21.635        "data_offset": 2048,
00:16:21.635        "data_size": 63488
00:16:21.635      },
00:16:21.635      {
00:16:21.635        "name": "pt2",
00:16:21.635        "uuid": "1b8c3ef5-88e4-5f26-9762-811ebe5a819f",
00:16:21.635        "is_configured": true,
00:16:21.635        "data_offset": 2048,
00:16:21.635        "data_size": 63488
00:16:21.635      },
00:16:21.635      {
00:16:21.635        "name": "pt3",
00:16:21.635        "uuid": "94c51c29-a6fe-5759-b064-36a46d64f4f5",
00:16:21.635        "is_configured": true,
00:16:21.635        "data_offset": 2048,
00:16:21.635        "data_size": 63488
00:16:21.635      }
00:16:21.635    ]
00:16:21.635  }'
00:16:21.635   17:00:14	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:16:21.635   17:00:14	-- common/autotest_common.sh@10 -- # set +x
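Nothing in this passage ever calls bdev_raid_create: after the full teardown, re-registering the surviving members is enough. Each bdev_passthru_create triggers examine, the on-disk superblock is found, the member is claimed, and once the second of the two remaining members appears (pt1's slot was apparently dropped from the metadata when it was removed from the online array, hence operational 2), raid_bdev1 transitions from configuring to online on its own. A sketch of the re-assembly:

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  # Re-register the surviving members; examine auto-assembles raid_bdev1.
  $rpc bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
  $rpc bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
  $rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1") | .state'
  # expected: "online" (2 of 3 members, degraded but assembled)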
00:16:22.203   17:00:14	-- bdev/bdev_raid.sh@468 -- # '[' 3 -gt 2 ']'
00:16:22.203   17:00:14	-- bdev/bdev_raid.sh@470 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1
00:16:22.462  [2024-11-19 17:00:15.224620] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:16:22.463  [2024-11-19 17:00:15.224663] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:16:22.463  [2024-11-19 17:00:15.224731] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:16:22.463  [2024-11-19 17:00:15.224794] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:16:22.463  [2024-11-19 17:00:15.224804] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009c80 name raid_bdev1, state offline
00:16:22.463    17:00:15	-- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:16:22.463    17:00:15	-- bdev/bdev_raid.sh@471 -- # jq -r '.[]'
00:16:22.734   17:00:15	-- bdev/bdev_raid.sh@471 -- # raid_bdev=
00:16:22.735   17:00:15	-- bdev/bdev_raid.sh@472 -- # '[' -n '' ']'
00:16:22.735   17:00:15	-- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:16:23.002  [2024-11-19 17:00:15.748675] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:16:23.002  [2024-11-19 17:00:15.748762] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:16:23.002  [2024-11-19 17:00:15.748802] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280
00:16:23.002  [2024-11-19 17:00:15.748823] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:16:23.002  [2024-11-19 17:00:15.751345] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:16:23.002  [2024-11-19 17:00:15.751404] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:16:23.002  [2024-11-19 17:00:15.751505] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1
00:16:23.002  [2024-11-19 17:00:15.751541] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:16:23.002  pt1
00:16:23.002   17:00:15	-- bdev/bdev_raid.sh@481 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3
00:16:23.002   17:00:15	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:16:23.002   17:00:15	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:16:23.002   17:00:15	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:16:23.002   17:00:15	-- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:16:23.002   17:00:15	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:16:23.002   17:00:15	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:16:23.002   17:00:15	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:16:23.002   17:00:15	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:16:23.002   17:00:15	-- bdev/bdev_raid.sh@125 -- # local tmp
00:16:23.002    17:00:15	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:16:23.002    17:00:15	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:16:23.316   17:00:16	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:16:23.316    "name": "raid_bdev1",
00:16:23.316    "uuid": "362a787b-0266-49ca-8def-f5e3fb4c816a",
00:16:23.316    "strip_size_kb": 0,
00:16:23.316    "state": "configuring",
00:16:23.316    "raid_level": "raid1",
00:16:23.316    "superblock": true,
00:16:23.316    "num_base_bdevs": 3,
00:16:23.316    "num_base_bdevs_discovered": 1,
00:16:23.316    "num_base_bdevs_operational": 3,
00:16:23.316    "base_bdevs_list": [
00:16:23.316      {
00:16:23.316        "name": "pt1",
00:16:23.316        "uuid": "e22e36c4-4c4a-58ef-a8a7-822fd2865876",
00:16:23.316        "is_configured": true,
00:16:23.316        "data_offset": 2048,
00:16:23.316        "data_size": 63488
00:16:23.316      },
00:16:23.316      {
00:16:23.316        "name": null,
00:16:23.316        "uuid": "1b8c3ef5-88e4-5f26-9762-811ebe5a819f",
00:16:23.316        "is_configured": false,
00:16:23.316        "data_offset": 2048,
00:16:23.316        "data_size": 63488
00:16:23.316      },
00:16:23.316      {
00:16:23.316        "name": null,
00:16:23.316        "uuid": "94c51c29-a6fe-5759-b064-36a46d64f4f5",
00:16:23.316        "is_configured": false,
00:16:23.316        "data_offset": 2048,
00:16:23.316        "data_size": 63488
00:16:23.316      }
00:16:23.316    ]
00:16:23.316  }'
00:16:23.317   17:00:16	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:16:23.317   17:00:16	-- common/autotest_common.sh@10 -- # set +x
00:16:23.885   17:00:16	-- bdev/bdev_raid.sh@484 -- # (( i = 1 ))
00:16:23.885   17:00:16	-- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs ))
00:16:23.885   17:00:16	-- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2
00:16:24.144   17:00:16	-- bdev/bdev_raid.sh@484 -- # (( i++ ))
00:16:24.144   17:00:16	-- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs ))
00:16:24.144   17:00:16	-- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3
00:16:24.403   17:00:17	-- bdev/bdev_raid.sh@484 -- # (( i++ ))
00:16:24.403   17:00:17	-- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs ))
00:16:24.403   17:00:17	-- bdev/bdev_raid.sh@489 -- # i=2
00:16:24.403   17:00:17	-- bdev/bdev_raid.sh@490 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:16:24.662  [2024-11-19 17:00:17.349013] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:16:24.662  [2024-11-19 17:00:17.349102] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:16:24.662  [2024-11-19 17:00:17.349133] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80
00:16:24.662  [2024-11-19 17:00:17.349159] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:16:24.662  [2024-11-19 17:00:17.349561] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:16:24.662  [2024-11-19 17:00:17.349593] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:16:24.662  [2024-11-19 17:00:17.349677] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3
00:16:24.662  [2024-11-19 17:00:17.349706] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt3 (4) greater than existing raid bdev raid_bdev1 (2)
00:16:24.662  [2024-11-19 17:00:17.349714] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:16:24.662  [2024-11-19 17:00:17.349743] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a880 name raid_bdev1, state configuring
00:16:24.662  [2024-11-19 17:00:17.349799] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:16:24.662  pt3
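The seq_number lines are the key to this passage. pt1 was re-registered first (@478) and started a configuring raid from its superblock, which predates pt1's removal and still lists three operational members (seq 2). pt3's superblock was written later, after the array had shrunk to two members (seq 4). When pt3 appears, bdev_raid sees the newer sequence number, deletes the stale configuring raid built from pt1's view, and re-assembles around pt3's metadata; the verification below accordingly expects "configuring", 1 discovered, 2 operational, with both other slots null. (This reading of the sequence-number rule follows the debug lines above rather than the source.) A quick probe of the winning view:

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  $rpc bdev_raid_get_bdevs all | jq -r \
      '.[] | select(.name == "raid_bdev1") | .num_base_bdevs_operational'
  # expected: 2 (pt1's stale three-member view has been discarded)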
00:16:24.662   17:00:17	-- bdev/bdev_raid.sh@494 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2
00:16:24.662   17:00:17	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:16:24.662   17:00:17	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:16:24.662   17:00:17	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:16:24.662   17:00:17	-- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:16:24.662   17:00:17	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2
00:16:24.662   17:00:17	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:16:24.662   17:00:17	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:16:24.662   17:00:17	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:16:24.662   17:00:17	-- bdev/bdev_raid.sh@125 -- # local tmp
00:16:24.662    17:00:17	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:16:24.662    17:00:17	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:16:24.921   17:00:17	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:16:24.921    "name": "raid_bdev1",
00:16:24.921    "uuid": "362a787b-0266-49ca-8def-f5e3fb4c816a",
00:16:24.921    "strip_size_kb": 0,
00:16:24.921    "state": "configuring",
00:16:24.921    "raid_level": "raid1",
00:16:24.921    "superblock": true,
00:16:24.921    "num_base_bdevs": 3,
00:16:24.922    "num_base_bdevs_discovered": 1,
00:16:24.922    "num_base_bdevs_operational": 2,
00:16:24.922    "base_bdevs_list": [
00:16:24.922      {
00:16:24.922        "name": null,
00:16:24.922        "uuid": "00000000-0000-0000-0000-000000000000",
00:16:24.922        "is_configured": false,
00:16:24.922        "data_offset": 2048,
00:16:24.922        "data_size": 63488
00:16:24.922      },
00:16:24.922      {
00:16:24.922        "name": null,
00:16:24.922        "uuid": "1b8c3ef5-88e4-5f26-9762-811ebe5a819f",
00:16:24.922        "is_configured": false,
00:16:24.922        "data_offset": 2048,
00:16:24.922        "data_size": 63488
00:16:24.922      },
00:16:24.922      {
00:16:24.922        "name": "pt3",
00:16:24.922        "uuid": "94c51c29-a6fe-5759-b064-36a46d64f4f5",
00:16:24.922        "is_configured": true,
00:16:24.922        "data_offset": 2048,
00:16:24.922        "data_size": 63488
00:16:24.922      }
00:16:24.922    ]
00:16:24.922  }'
00:16:24.922   17:00:17	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:16:24.922   17:00:17	-- common/autotest_common.sh@10 -- # set +x
00:16:25.490   17:00:18	-- bdev/bdev_raid.sh@497 -- # (( i = 1 ))
00:16:25.490   17:00:18	-- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 ))
00:16:25.490   17:00:18	-- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:16:25.749  [2024-11-19 17:00:18.501305] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:16:25.749  [2024-11-19 17:00:18.501397] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:16:25.749  [2024-11-19 17:00:18.501429] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180
00:16:25.749  [2024-11-19 17:00:18.501455] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:16:25.749  [2024-11-19 17:00:18.501865] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:16:25.749  [2024-11-19 17:00:18.501898] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:16:25.749  [2024-11-19 17:00:18.501973] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2
00:16:25.749  [2024-11-19 17:00:18.502003] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:16:25.749  [2024-11-19 17:00:18.502094] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000ae80
00:16:25.749  [2024-11-19 17:00:18.502103] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:16:25.749  [2024-11-19 17:00:18.502161] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002c80
00:16:25.749  [2024-11-19 17:00:18.502433] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000ae80
00:16:25.749  [2024-11-19 17:00:18.502452] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000ae80
00:16:25.749  [2024-11-19 17:00:18.502542] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:16:25.749  pt2
00:16:25.749   17:00:18	-- bdev/bdev_raid.sh@497 -- # (( i++ ))
00:16:25.749   17:00:18	-- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 ))
00:16:25.749   17:00:18	-- bdev/bdev_raid.sh@502 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:16:25.749   17:00:18	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:16:25.749   17:00:18	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:16:25.749   17:00:18	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:16:25.749   17:00:18	-- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:16:25.749   17:00:18	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2
00:16:25.749   17:00:18	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:16:25.749   17:00:18	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:16:25.749   17:00:18	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:16:25.749   17:00:18	-- bdev/bdev_raid.sh@125 -- # local tmp
00:16:25.749    17:00:18	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:16:25.749    17:00:18	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:16:26.009   17:00:18	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:16:26.009    "name": "raid_bdev1",
00:16:26.009    "uuid": "362a787b-0266-49ca-8def-f5e3fb4c816a",
00:16:26.009    "strip_size_kb": 0,
00:16:26.009    "state": "online",
00:16:26.009    "raid_level": "raid1",
00:16:26.009    "superblock": true,
00:16:26.009    "num_base_bdevs": 3,
00:16:26.009    "num_base_bdevs_discovered": 2,
00:16:26.009    "num_base_bdevs_operational": 2,
00:16:26.009    "base_bdevs_list": [
00:16:26.009      {
00:16:26.009        "name": null,
00:16:26.009        "uuid": "00000000-0000-0000-0000-000000000000",
00:16:26.009        "is_configured": false,
00:16:26.009        "data_offset": 2048,
00:16:26.009        "data_size": 63488
00:16:26.009      },
00:16:26.009      {
00:16:26.009        "name": "pt2",
00:16:26.009        "uuid": "1b8c3ef5-88e4-5f26-9762-811ebe5a819f",
00:16:26.009        "is_configured": true,
00:16:26.009        "data_offset": 2048,
00:16:26.009        "data_size": 63488
00:16:26.009      },
00:16:26.009      {
00:16:26.009        "name": "pt3",
00:16:26.009        "uuid": "94c51c29-a6fe-5759-b064-36a46d64f4f5",
00:16:26.009        "is_configured": true,
00:16:26.009        "data_offset": 2048,
00:16:26.009        "data_size": 63488
00:16:26.009      }
00:16:26.009    ]
00:16:26.009  }'
00:16:26.009   17:00:18	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:16:26.009   17:00:18	-- common/autotest_common.sh@10 -- # set +x
00:16:26.576    17:00:19	-- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1
00:16:26.576    17:00:19	-- bdev/bdev_raid.sh@506 -- # jq -r '.[] | .uuid'
00:16:26.835  [2024-11-19 17:00:19.499544] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:16:26.835   17:00:19	-- bdev/bdev_raid.sh@506 -- # '[' 362a787b-0266-49ca-8def-f5e3fb4c816a '!=' 362a787b-0266-49ca-8def-f5e3fb4c816a ']'
00:16:26.835   17:00:19	-- bdev/bdev_raid.sh@511 -- # killprocess 128165
00:16:26.835   17:00:19	-- common/autotest_common.sh@936 -- # '[' -z 128165 ']'
00:16:26.835   17:00:19	-- common/autotest_common.sh@940 -- # kill -0 128165
00:16:26.835    17:00:19	-- common/autotest_common.sh@941 -- # uname
00:16:26.835   17:00:19	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:16:26.835    17:00:19	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 128165
00:16:26.835  killing process with pid 128165
00:16:26.835   17:00:19	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:16:26.835   17:00:19	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:16:26.835   17:00:19	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 128165'
00:16:26.835   17:00:19	-- common/autotest_common.sh@955 -- # kill 128165
00:16:26.835   17:00:19	-- common/autotest_common.sh@960 -- # wait 128165
00:16:26.835  [2024-11-19 17:00:19.545958] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:16:26.835  [2024-11-19 17:00:19.546034] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:16:26.835  [2024-11-19 17:00:19.546096] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:16:26.835  [2024-11-19 17:00:19.546105] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ae80 name raid_bdev1, state offline
00:16:26.835  [2024-11-19 17:00:19.582048] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:16:27.095   17:00:19	-- bdev/bdev_raid.sh@513 -- # return 0
00:16:27.095  
00:16:27.095  real	0m18.443s
00:16:27.095  user	0m33.926s
00:16:27.095  sys	0m2.933s
00:16:27.095  ************************************
00:16:27.095  END TEST raid_superblock_test
00:16:27.095  ************************************
00:16:27.095   17:00:19	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:16:27.095   17:00:19	-- common/autotest_common.sh@10 -- # set +x
00:16:27.095   17:00:19	-- bdev/bdev_raid.sh@725 -- # for n in {2..4}
00:16:27.095   17:00:19	-- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1
00:16:27.095   17:00:19	-- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false
00:16:27.095   17:00:19	-- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']'
00:16:27.095   17:00:19	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:16:27.095   17:00:19	-- common/autotest_common.sh@10 -- # set +x
00:16:27.095  ************************************
00:16:27.095  START TEST raid_state_function_test
00:16:27.095  ************************************
00:16:27.095   17:00:19	-- common/autotest_common.sh@1114 -- # raid_state_function_test raid0 4 false
00:16:27.095   17:00:19	-- bdev/bdev_raid.sh@202 -- # local raid_level=raid0
00:16:27.095   17:00:19	-- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4
00:16:27.095   17:00:19	-- bdev/bdev_raid.sh@204 -- # local superblock=false
00:16:27.095   17:00:19	-- bdev/bdev_raid.sh@205 -- # local raid_bdev
00:16:27.095    17:00:19	-- bdev/bdev_raid.sh@206 -- # (( i = 1 ))
00:16:27.095    17:00:19	-- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:16:27.095    17:00:19	-- bdev/bdev_raid.sh@206 -- # echo BaseBdev1
00:16:27.095    17:00:19	-- bdev/bdev_raid.sh@206 -- # (( i++ ))
00:16:27.095    17:00:19	-- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:16:27.095    17:00:19	-- bdev/bdev_raid.sh@206 -- # echo BaseBdev2
00:16:27.095    17:00:19	-- bdev/bdev_raid.sh@206 -- # (( i++ ))
00:16:27.095    17:00:19	-- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:16:27.095    17:00:19	-- bdev/bdev_raid.sh@206 -- # echo BaseBdev3
00:16:27.095    17:00:19	-- bdev/bdev_raid.sh@206 -- # (( i++ ))
00:16:27.095    17:00:19	-- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:16:27.095    17:00:19	-- bdev/bdev_raid.sh@206 -- # echo BaseBdev4
00:16:27.095    17:00:19	-- bdev/bdev_raid.sh@206 -- # (( i++ ))
00:16:27.095    17:00:19	-- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:16:27.095   17:00:19	-- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4')
00:16:27.095   17:00:19	-- bdev/bdev_raid.sh@206 -- # local base_bdevs
00:16:27.095   17:00:19	-- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid
00:16:27.095   17:00:19	-- bdev/bdev_raid.sh@208 -- # local strip_size
00:16:27.095   17:00:19	-- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg
00:16:27.095   17:00:19	-- bdev/bdev_raid.sh@210 -- # local superblock_create_arg
00:16:27.095   17:00:19	-- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']'
00:16:27.095   17:00:19	-- bdev/bdev_raid.sh@213 -- # strip_size=64
00:16:27.095   17:00:19	-- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64'
00:16:27.095   17:00:19	-- bdev/bdev_raid.sh@219 -- # '[' false = true ']'
00:16:27.095   17:00:19	-- bdev/bdev_raid.sh@222 -- # superblock_create_arg=
00:16:27.095   17:00:19	-- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid
00:16:27.095   17:00:19	-- bdev/bdev_raid.sh@226 -- # raid_pid=128767
00:16:27.095   17:00:19	-- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 128767'
00:16:27.095  Process raid pid: 128767
00:16:27.095   17:00:19	-- bdev/bdev_raid.sh@228 -- # waitforlisten 128767 /var/tmp/spdk-raid.sock
00:16:27.095   17:00:19	-- common/autotest_common.sh@829 -- # '[' -z 128767 ']'
00:16:27.095   17:00:19	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock
00:16:27.095   17:00:19	-- common/autotest_common.sh@834 -- # local max_retries=100
00:16:27.095   17:00:19	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...'
00:16:27.095  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...
00:16:27.095   17:00:19	-- common/autotest_common.sh@838 -- # xtrace_disable
00:16:27.095   17:00:19	-- common/autotest_common.sh@10 -- # set +x
00:16:27.354  [2024-11-19 17:00:19.987400] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:16:27.354  [2024-11-19 17:00:19.987729] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:16:27.354  [2024-11-19 17:00:20.150996] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:16:27.354  [2024-11-19 17:00:20.204404] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:16:27.614  [2024-11-19 17:00:20.251961] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:16:28.182   17:00:20	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:16:28.182   17:00:20	-- common/autotest_common.sh@862 -- # return 0
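waitforlisten has just returned 0: bdev_svc (pid 128767) is up and answering RPCs on /var/tmp/spdk-raid.sock. A minimal sketch of that helper's polling loop, simplified relative to the real one in common/autotest_common.sh (its retry bookkeeping and error reporting are omitted):

    # assumed sketch, not the verbatim helper: wait until the pid is alive
    # and its RPC socket responds, or give up after ~100 tries
    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk-raid.sock} i
        for ((i = 0; i < 100; i++)); do
            kill -0 "$pid" 2> /dev/null || return 1   # app died before listening
            if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_addr" -t 1 \
                    rpc_get_methods &> /dev/null; then
                return 0                              # socket is accepting RPCs
            fi
            sleep 0.5
        done
        return 1
    }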
00:16:28.182   17:00:20	-- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
00:16:28.441  [2024-11-19 17:00:21.168208] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:16:28.441  [2024-11-19 17:00:21.168310] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:16:28.441  [2024-11-19 17:00:21.168322] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:16:28.441  [2024-11-19 17:00:21.168341] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:16:28.441  [2024-11-19 17:00:21.168349] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:16:28.441  [2024-11-19 17:00:21.168392] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:16:28.441  [2024-11-19 17:00:21.168400] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:16:28.441  [2024-11-19 17:00:21.168426] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
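Each NOTICE/DEBUG pair above is bdev_raid_create probing for a base bdev that does not exist yet; the create still succeeds, and the raid bdev is simply held in the configuring state until all four members appear. Issued by hand, the same RPC would be:

    # create a 4-member raid0 with 64 KiB strips; missing base bdevs are
    # legal and leave the array in state "configuring"
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        bdev_raid_create -z 64 -r raid0 \
        -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid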
00:16:28.441   17:00:21	-- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4
00:16:28.441   17:00:21	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:16:28.441   17:00:21	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:16:28.441   17:00:21	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid0
00:16:28.441   17:00:21	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:16:28.441   17:00:21	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:16:28.441   17:00:21	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:16:28.441   17:00:21	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:16:28.441   17:00:21	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:16:28.441   17:00:21	-- bdev/bdev_raid.sh@125 -- # local tmp
00:16:28.441    17:00:21	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:16:28.441    17:00:21	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:16:28.700   17:00:21	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:16:28.700    "name": "Existed_Raid",
00:16:28.700    "uuid": "00000000-0000-0000-0000-000000000000",
00:16:28.700    "strip_size_kb": 64,
00:16:28.700    "state": "configuring",
00:16:28.700    "raid_level": "raid0",
00:16:28.700    "superblock": false,
00:16:28.700    "num_base_bdevs": 4,
00:16:28.700    "num_base_bdevs_discovered": 0,
00:16:28.700    "num_base_bdevs_operational": 4,
00:16:28.700    "base_bdevs_list": [
00:16:28.700      {
00:16:28.700        "name": "BaseBdev1",
00:16:28.700        "uuid": "00000000-0000-0000-0000-000000000000",
00:16:28.700        "is_configured": false,
00:16:28.700        "data_offset": 0,
00:16:28.700        "data_size": 0
00:16:28.700      },
00:16:28.700      {
00:16:28.700        "name": "BaseBdev2",
00:16:28.700        "uuid": "00000000-0000-0000-0000-000000000000",
00:16:28.700        "is_configured": false,
00:16:28.700        "data_offset": 0,
00:16:28.700        "data_size": 0
00:16:28.700      },
00:16:28.700      {
00:16:28.700        "name": "BaseBdev3",
00:16:28.700        "uuid": "00000000-0000-0000-0000-000000000000",
00:16:28.700        "is_configured": false,
00:16:28.700        "data_offset": 0,
00:16:28.700        "data_size": 0
00:16:28.700      },
00:16:28.700      {
00:16:28.700        "name": "BaseBdev4",
00:16:28.700        "uuid": "00000000-0000-0000-0000-000000000000",
00:16:28.700        "is_configured": false,
00:16:28.700        "data_offset": 0,
00:16:28.700        "data_size": 0
00:16:28.700      }
00:16:28.700    ]
00:16:28.700  }'
00:16:28.700   17:00:21	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:16:28.700   17:00:21	-- common/autotest_common.sh@10 -- # set +x
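verify_raid_bdev_state works on the JSON captured above: it selects the named array from bdev_raid_get_bdevs and compares each field against the expected values. A condensed sketch of that check (the in-tree helper also walks base_bdevs_list and counts configured members):

    # simplified field comparison against the expected state
    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    info=$($rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")')
    [ "$(jq -r .state <<< "$info")" = configuring ] &&
        [ "$(jq -r .raid_level <<< "$info")" = raid0 ] &&
        [ "$(jq -r .strip_size_kb <<< "$info")" = 64 ] &&
        [ "$(jq -r .num_base_bdevs_operational <<< "$info")" = 4 ]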
00:16:29.268   17:00:22	-- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid
00:16:29.528  [2024-11-19 17:00:22.256280] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:16:29.528  [2024-11-19 17:00:22.256569] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005480 name Existed_Raid, state configuring
00:16:29.528   17:00:22	-- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
00:16:29.788  [2024-11-19 17:00:22.516363] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:16:29.788  [2024-11-19 17:00:22.516681] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:16:29.788  [2024-11-19 17:00:22.516778] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:16:29.788  [2024-11-19 17:00:22.516838] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:16:29.788  [2024-11-19 17:00:22.516867] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:16:29.788  [2024-11-19 17:00:22.516950] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:16:29.788  [2024-11-19 17:00:22.516995] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:16:29.788  [2024-11-19 17:00:22.517040] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:16:29.788   17:00:22	-- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1
00:16:30.047  [2024-11-19 17:00:22.713694] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:16:30.047  BaseBdev1
00:16:30.047   17:00:22	-- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1
00:16:30.047   17:00:22	-- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1
00:16:30.047   17:00:22	-- common/autotest_common.sh@898 -- # local bdev_timeout=
00:16:30.047   17:00:22	-- common/autotest_common.sh@899 -- # local i
00:16:30.047   17:00:22	-- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:16:30.047   17:00:22	-- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:16:30.047   17:00:22	-- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:16:30.306   17:00:22	-- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000
00:16:30.306  [
00:16:30.306    {
00:16:30.306      "name": "BaseBdev1",
00:16:30.306      "aliases": [
00:16:30.306        "fbb9e12e-9ef6-4e78-a47d-fb000212bdac"
00:16:30.306      ],
00:16:30.306      "product_name": "Malloc disk",
00:16:30.306      "block_size": 512,
00:16:30.306      "num_blocks": 65536,
00:16:30.306      "uuid": "fbb9e12e-9ef6-4e78-a47d-fb000212bdac",
00:16:30.306      "assigned_rate_limits": {
00:16:30.306        "rw_ios_per_sec": 0,
00:16:30.306        "rw_mbytes_per_sec": 0,
00:16:30.306        "r_mbytes_per_sec": 0,
00:16:30.306        "w_mbytes_per_sec": 0
00:16:30.306      },
00:16:30.306      "claimed": true,
00:16:30.306      "claim_type": "exclusive_write",
00:16:30.306      "zoned": false,
00:16:30.306      "supported_io_types": {
00:16:30.306        "read": true,
00:16:30.306        "write": true,
00:16:30.306        "unmap": true,
00:16:30.307        "write_zeroes": true,
00:16:30.307        "flush": true,
00:16:30.307        "reset": true,
00:16:30.307        "compare": false,
00:16:30.307        "compare_and_write": false,
00:16:30.307        "abort": true,
00:16:30.307        "nvme_admin": false,
00:16:30.307        "nvme_io": false
00:16:30.307      },
00:16:30.307      "memory_domains": [
00:16:30.307        {
00:16:30.307          "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:16:30.307          "dma_device_type": 2
00:16:30.307        }
00:16:30.307      ],
00:16:30.307      "driver_specific": {}
00:16:30.307    }
00:16:30.307  ]
00:16:30.307   17:00:23	-- common/autotest_common.sh@905 -- # return 0
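That return 0 ends waitforbdev for BaseBdev1. Reduced to its two RPCs, the pattern is: bdev_wait_for_examine flushes any pending examine callbacks, then bdev_get_bdevs with a timeout blocks until the named bdev is registered (or fails after 2 s). Sketch:

    # assumed condensation of the waitforbdev trace above
    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    $rpc bdev_wait_for_examine
    $rpc bdev_get_bdevs -b BaseBdev1 -t 2000 > /dev/null   # non-zero exit if absent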
00:16:30.307   17:00:23	-- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4
00:16:30.307   17:00:23	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:16:30.307   17:00:23	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:16:30.307   17:00:23	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid0
00:16:30.307   17:00:23	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:16:30.307   17:00:23	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:16:30.307   17:00:23	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:16:30.307   17:00:23	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:16:30.307   17:00:23	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:16:30.307   17:00:23	-- bdev/bdev_raid.sh@125 -- # local tmp
00:16:30.307    17:00:23	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:16:30.307    17:00:23	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:16:30.566   17:00:23	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:16:30.566    "name": "Existed_Raid",
00:16:30.566    "uuid": "00000000-0000-0000-0000-000000000000",
00:16:30.566    "strip_size_kb": 64,
00:16:30.566    "state": "configuring",
00:16:30.566    "raid_level": "raid0",
00:16:30.566    "superblock": false,
00:16:30.566    "num_base_bdevs": 4,
00:16:30.566    "num_base_bdevs_discovered": 1,
00:16:30.566    "num_base_bdevs_operational": 4,
00:16:30.566    "base_bdevs_list": [
00:16:30.566      {
00:16:30.566        "name": "BaseBdev1",
00:16:30.566        "uuid": "fbb9e12e-9ef6-4e78-a47d-fb000212bdac",
00:16:30.566        "is_configured": true,
00:16:30.566        "data_offset": 0,
00:16:30.566        "data_size": 65536
00:16:30.566      },
00:16:30.566      {
00:16:30.566        "name": "BaseBdev2",
00:16:30.566        "uuid": "00000000-0000-0000-0000-000000000000",
00:16:30.566        "is_configured": false,
00:16:30.566        "data_offset": 0,
00:16:30.566        "data_size": 0
00:16:30.566      },
00:16:30.566      {
00:16:30.566        "name": "BaseBdev3",
00:16:30.566        "uuid": "00000000-0000-0000-0000-000000000000",
00:16:30.566        "is_configured": false,
00:16:30.566        "data_offset": 0,
00:16:30.566        "data_size": 0
00:16:30.566      },
00:16:30.566      {
00:16:30.566        "name": "BaseBdev4",
00:16:30.566        "uuid": "00000000-0000-0000-0000-000000000000",
00:16:30.566        "is_configured": false,
00:16:30.566        "data_offset": 0,
00:16:30.566        "data_size": 0
00:16:30.566      }
00:16:30.566    ]
00:16:30.566  }'
00:16:30.566   17:00:23	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:16:30.566   17:00:23	-- common/autotest_common.sh@10 -- # set +x
00:16:31.134   17:00:23	-- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid
00:16:31.403  [2024-11-19 17:00:24.090001] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:16:31.403  [2024-11-19 17:00:24.090290] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005780 name Existed_Raid, state configuring
00:16:31.403   17:00:24	-- bdev/bdev_raid.sh@244 -- # '[' false = true ']'
00:16:31.403   17:00:24	-- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
00:16:31.664  [2024-11-19 17:00:24.350162] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:16:31.665  [2024-11-19 17:00:24.352715] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:16:31.665  [2024-11-19 17:00:24.352934] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:16:31.665  [2024-11-19 17:00:24.353047] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:16:31.665  [2024-11-19 17:00:24.353109] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:16:31.665  [2024-11-19 17:00:24.353139] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:16:31.665  [2024-11-19 17:00:24.353178] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
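BaseBdev1 survived the raid delete above, so the create at @253 claims it immediately while the other three members are still missing. The @254 loop entered next assembles them one at a time, checking that the array stays in configuring until the last member lands. Roughly:

    # assumed condensation of the @254..@259 loop traced below
    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    for ((i = 1; i < num_base_bdevs; i++)); do
        verify_raid_bdev_state Existed_Raid configuring raid0 64 4
        $rpc bdev_malloc_create 32 512 -b "BaseBdev$((i + 1))"
        waitforbdev "BaseBdev$((i + 1))"
    done
    verify_raid_bdev_state Existed_Raid online raid0 64 4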
00:16:31.665   17:00:24	-- bdev/bdev_raid.sh@254 -- # (( i = 1 ))
00:16:31.665   17:00:24	-- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs ))
00:16:31.665   17:00:24	-- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4
00:16:31.665   17:00:24	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:16:31.665   17:00:24	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:16:31.665   17:00:24	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid0
00:16:31.665   17:00:24	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:16:31.665   17:00:24	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:16:31.665   17:00:24	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:16:31.665   17:00:24	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:16:31.665   17:00:24	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:16:31.665   17:00:24	-- bdev/bdev_raid.sh@125 -- # local tmp
00:16:31.665    17:00:24	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:16:31.665    17:00:24	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:16:31.924   17:00:24	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:16:31.924    "name": "Existed_Raid",
00:16:31.924    "uuid": "00000000-0000-0000-0000-000000000000",
00:16:31.924    "strip_size_kb": 64,
00:16:31.924    "state": "configuring",
00:16:31.924    "raid_level": "raid0",
00:16:31.924    "superblock": false,
00:16:31.924    "num_base_bdevs": 4,
00:16:31.924    "num_base_bdevs_discovered": 1,
00:16:31.924    "num_base_bdevs_operational": 4,
00:16:31.924    "base_bdevs_list": [
00:16:31.924      {
00:16:31.924        "name": "BaseBdev1",
00:16:31.924        "uuid": "fbb9e12e-9ef6-4e78-a47d-fb000212bdac",
00:16:31.924        "is_configured": true,
00:16:31.924        "data_offset": 0,
00:16:31.924        "data_size": 65536
00:16:31.924      },
00:16:31.924      {
00:16:31.924        "name": "BaseBdev2",
00:16:31.924        "uuid": "00000000-0000-0000-0000-000000000000",
00:16:31.924        "is_configured": false,
00:16:31.924        "data_offset": 0,
00:16:31.924        "data_size": 0
00:16:31.924      },
00:16:31.924      {
00:16:31.924        "name": "BaseBdev3",
00:16:31.924        "uuid": "00000000-0000-0000-0000-000000000000",
00:16:31.924        "is_configured": false,
00:16:31.924        "data_offset": 0,
00:16:31.924        "data_size": 0
00:16:31.924      },
00:16:31.924      {
00:16:31.924        "name": "BaseBdev4",
00:16:31.924        "uuid": "00000000-0000-0000-0000-000000000000",
00:16:31.924        "is_configured": false,
00:16:31.924        "data_offset": 0,
00:16:31.924        "data_size": 0
00:16:31.924      }
00:16:31.924    ]
00:16:31.924  }'
00:16:31.924   17:00:24	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:16:31.924   17:00:24	-- common/autotest_common.sh@10 -- # set +x
00:16:32.493   17:00:25	-- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2
00:16:32.493  [2024-11-19 17:00:25.345952] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:16:32.493  BaseBdev2
00:16:32.752   17:00:25	-- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2
00:16:32.752   17:00:25	-- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2
00:16:32.752   17:00:25	-- common/autotest_common.sh@898 -- # local bdev_timeout=
00:16:32.752   17:00:25	-- common/autotest_common.sh@899 -- # local i
00:16:32.752   17:00:25	-- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:16:32.752   17:00:25	-- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:16:32.752   17:00:25	-- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:16:33.012   17:00:25	-- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000
00:16:33.012  [
00:16:33.012    {
00:16:33.012      "name": "BaseBdev2",
00:16:33.012      "aliases": [
00:16:33.012        "a6a76a9d-7a48-4ac5-ad94-4a751e04635d"
00:16:33.012      ],
00:16:33.012      "product_name": "Malloc disk",
00:16:33.012      "block_size": 512,
00:16:33.012      "num_blocks": 65536,
00:16:33.012      "uuid": "a6a76a9d-7a48-4ac5-ad94-4a751e04635d",
00:16:33.012      "assigned_rate_limits": {
00:16:33.012        "rw_ios_per_sec": 0,
00:16:33.012        "rw_mbytes_per_sec": 0,
00:16:33.012        "r_mbytes_per_sec": 0,
00:16:33.012        "w_mbytes_per_sec": 0
00:16:33.012      },
00:16:33.012      "claimed": true,
00:16:33.012      "claim_type": "exclusive_write",
00:16:33.012      "zoned": false,
00:16:33.012      "supported_io_types": {
00:16:33.012        "read": true,
00:16:33.012        "write": true,
00:16:33.012        "unmap": true,
00:16:33.012        "write_zeroes": true,
00:16:33.012        "flush": true,
00:16:33.012        "reset": true,
00:16:33.012        "compare": false,
00:16:33.012        "compare_and_write": false,
00:16:33.012        "abort": true,
00:16:33.012        "nvme_admin": false,
00:16:33.012        "nvme_io": false
00:16:33.012      },
00:16:33.012      "memory_domains": [
00:16:33.012        {
00:16:33.012          "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:16:33.012          "dma_device_type": 2
00:16:33.012        }
00:16:33.012      ],
00:16:33.012      "driver_specific": {}
00:16:33.012    }
00:16:33.012  ]
00:16:33.012   17:00:25	-- common/autotest_common.sh@905 -- # return 0
00:16:33.012   17:00:25	-- bdev/bdev_raid.sh@254 -- # (( i++ ))
00:16:33.012   17:00:25	-- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs ))
00:16:33.012   17:00:25	-- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4
00:16:33.012   17:00:25	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:16:33.012   17:00:25	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:16:33.012   17:00:25	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid0
00:16:33.012   17:00:25	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:16:33.012   17:00:25	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:16:33.012   17:00:25	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:16:33.012   17:00:25	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:16:33.012   17:00:25	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:16:33.012   17:00:25	-- bdev/bdev_raid.sh@125 -- # local tmp
00:16:33.012    17:00:25	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:16:33.012    17:00:25	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:16:33.271   17:00:26	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:16:33.271    "name": "Existed_Raid",
00:16:33.271    "uuid": "00000000-0000-0000-0000-000000000000",
00:16:33.271    "strip_size_kb": 64,
00:16:33.271    "state": "configuring",
00:16:33.271    "raid_level": "raid0",
00:16:33.271    "superblock": false,
00:16:33.271    "num_base_bdevs": 4,
00:16:33.271    "num_base_bdevs_discovered": 2,
00:16:33.271    "num_base_bdevs_operational": 4,
00:16:33.271    "base_bdevs_list": [
00:16:33.271      {
00:16:33.271        "name": "BaseBdev1",
00:16:33.272        "uuid": "fbb9e12e-9ef6-4e78-a47d-fb000212bdac",
00:16:33.272        "is_configured": true,
00:16:33.272        "data_offset": 0,
00:16:33.272        "data_size": 65536
00:16:33.272      },
00:16:33.272      {
00:16:33.272        "name": "BaseBdev2",
00:16:33.272        "uuid": "a6a76a9d-7a48-4ac5-ad94-4a751e04635d",
00:16:33.272        "is_configured": true,
00:16:33.272        "data_offset": 0,
00:16:33.272        "data_size": 65536
00:16:33.272      },
00:16:33.272      {
00:16:33.272        "name": "BaseBdev3",
00:16:33.272        "uuid": "00000000-0000-0000-0000-000000000000",
00:16:33.272        "is_configured": false,
00:16:33.272        "data_offset": 0,
00:16:33.272        "data_size": 0
00:16:33.272      },
00:16:33.272      {
00:16:33.272        "name": "BaseBdev4",
00:16:33.272        "uuid": "00000000-0000-0000-0000-000000000000",
00:16:33.272        "is_configured": false,
00:16:33.272        "data_offset": 0,
00:16:33.272        "data_size": 0
00:16:33.272      }
00:16:33.272    ]
00:16:33.272  }'
00:16:33.272   17:00:26	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:16:33.272   17:00:26	-- common/autotest_common.sh@10 -- # set +x
00:16:33.840   17:00:26	-- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3
00:16:34.099  [2024-11-19 17:00:26.769463] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:16:34.099  BaseBdev3
00:16:34.099   17:00:26	-- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3
00:16:34.099   17:00:26	-- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3
00:16:34.099   17:00:26	-- common/autotest_common.sh@898 -- # local bdev_timeout=
00:16:34.099   17:00:26	-- common/autotest_common.sh@899 -- # local i
00:16:34.099   17:00:26	-- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:16:34.099   17:00:26	-- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:16:34.099   17:00:26	-- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:16:34.359   17:00:27	-- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000
00:16:34.618  [
00:16:34.618    {
00:16:34.618      "name": "BaseBdev3",
00:16:34.618      "aliases": [
00:16:34.618        "c372af57-6ef1-4f67-bf1c-7235308a774a"
00:16:34.618      ],
00:16:34.618      "product_name": "Malloc disk",
00:16:34.618      "block_size": 512,
00:16:34.618      "num_blocks": 65536,
00:16:34.618      "uuid": "c372af57-6ef1-4f67-bf1c-7235308a774a",
00:16:34.618      "assigned_rate_limits": {
00:16:34.618        "rw_ios_per_sec": 0,
00:16:34.618        "rw_mbytes_per_sec": 0,
00:16:34.618        "r_mbytes_per_sec": 0,
00:16:34.618        "w_mbytes_per_sec": 0
00:16:34.618      },
00:16:34.618      "claimed": true,
00:16:34.618      "claim_type": "exclusive_write",
00:16:34.618      "zoned": false,
00:16:34.618      "supported_io_types": {
00:16:34.618        "read": true,
00:16:34.618        "write": true,
00:16:34.618        "unmap": true,
00:16:34.618        "write_zeroes": true,
00:16:34.618        "flush": true,
00:16:34.618        "reset": true,
00:16:34.618        "compare": false,
00:16:34.618        "compare_and_write": false,
00:16:34.618        "abort": true,
00:16:34.618        "nvme_admin": false,
00:16:34.618        "nvme_io": false
00:16:34.618      },
00:16:34.618      "memory_domains": [
00:16:34.618        {
00:16:34.618          "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:16:34.618          "dma_device_type": 2
00:16:34.618        }
00:16:34.618      ],
00:16:34.618      "driver_specific": {}
00:16:34.618    }
00:16:34.618  ]
00:16:34.618   17:00:27	-- common/autotest_common.sh@905 -- # return 0
00:16:34.618   17:00:27	-- bdev/bdev_raid.sh@254 -- # (( i++ ))
00:16:34.618   17:00:27	-- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs ))
00:16:34.618   17:00:27	-- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4
00:16:34.619   17:00:27	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:16:34.619   17:00:27	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:16:34.619   17:00:27	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid0
00:16:34.619   17:00:27	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:16:34.619   17:00:27	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:16:34.619   17:00:27	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:16:34.619   17:00:27	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:16:34.619   17:00:27	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:16:34.619   17:00:27	-- bdev/bdev_raid.sh@125 -- # local tmp
00:16:34.619    17:00:27	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:16:34.619    17:00:27	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:16:34.878   17:00:27	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:16:34.878    "name": "Existed_Raid",
00:16:34.878    "uuid": "00000000-0000-0000-0000-000000000000",
00:16:34.878    "strip_size_kb": 64,
00:16:34.878    "state": "configuring",
00:16:34.878    "raid_level": "raid0",
00:16:34.878    "superblock": false,
00:16:34.878    "num_base_bdevs": 4,
00:16:34.878    "num_base_bdevs_discovered": 3,
00:16:34.878    "num_base_bdevs_operational": 4,
00:16:34.878    "base_bdevs_list": [
00:16:34.878      {
00:16:34.878        "name": "BaseBdev1",
00:16:34.878        "uuid": "fbb9e12e-9ef6-4e78-a47d-fb000212bdac",
00:16:34.878        "is_configured": true,
00:16:34.878        "data_offset": 0,
00:16:34.878        "data_size": 65536
00:16:34.878      },
00:16:34.878      {
00:16:34.878        "name": "BaseBdev2",
00:16:34.878        "uuid": "a6a76a9d-7a48-4ac5-ad94-4a751e04635d",
00:16:34.878        "is_configured": true,
00:16:34.878        "data_offset": 0,
00:16:34.878        "data_size": 65536
00:16:34.878      },
00:16:34.878      {
00:16:34.878        "name": "BaseBdev3",
00:16:34.878        "uuid": "c372af57-6ef1-4f67-bf1c-7235308a774a",
00:16:34.878        "is_configured": true,
00:16:34.878        "data_offset": 0,
00:16:34.878        "data_size": 65536
00:16:34.878      },
00:16:34.878      {
00:16:34.878        "name": "BaseBdev4",
00:16:34.878        "uuid": "00000000-0000-0000-0000-000000000000",
00:16:34.878        "is_configured": false,
00:16:34.878        "data_offset": 0,
00:16:34.878        "data_size": 0
00:16:34.878      }
00:16:34.878    ]
00:16:34.878  }'
00:16:34.878   17:00:27	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:16:34.878   17:00:27	-- common/autotest_common.sh@10 -- # set +x
00:16:35.446   17:00:27	-- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4
00:16:35.446  [2024-11-19 17:00:28.261129] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed
00:16:35.446  [2024-11-19 17:00:28.261427] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006080
00:16:35.446  [2024-11-19 17:00:28.261480] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512
00:16:35.446  [2024-11-19 17:00:28.261735] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002120
00:16:35.446  [2024-11-19 17:00:28.262267] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006080
00:16:35.446  [2024-11-19 17:00:28.262399] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006080
00:16:35.446  [2024-11-19 17:00:28.262769] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:16:35.446  BaseBdev4
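Claiming BaseBdev4 completes the set, so the array transitions straight from configuring to online: an io device is registered (0x616000006080 above) and the raid bdev is announced with blockcnt 262144. That figure is the four 65536-block malloc bdevs striped together with no redundancy overhead:

    # raid0 capacity check for this log (512-byte blocks, per the
    # bdev_malloc_create 32 512 calls above)
    echo $(( 4 * 65536 ))                    # 262144 blocks
    echo $(( 4 * 65536 * 512 / 1024**2 ))    # 128 MiB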
00:16:35.446   17:00:28	-- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4
00:16:35.446   17:00:28	-- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4
00:16:35.446   17:00:28	-- common/autotest_common.sh@898 -- # local bdev_timeout=
00:16:35.446   17:00:28	-- common/autotest_common.sh@899 -- # local i
00:16:35.446   17:00:28	-- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:16:35.446   17:00:28	-- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:16:35.446   17:00:28	-- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:16:35.705   17:00:28	-- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000
00:16:35.963  [
00:16:35.963    {
00:16:35.963      "name": "BaseBdev4",
00:16:35.963      "aliases": [
00:16:35.964        "7691cfc4-3462-4c8b-9a46-6ab8284c8388"
00:16:35.964      ],
00:16:35.964      "product_name": "Malloc disk",
00:16:35.964      "block_size": 512,
00:16:35.964      "num_blocks": 65536,
00:16:35.964      "uuid": "7691cfc4-3462-4c8b-9a46-6ab8284c8388",
00:16:35.964      "assigned_rate_limits": {
00:16:35.964        "rw_ios_per_sec": 0,
00:16:35.964        "rw_mbytes_per_sec": 0,
00:16:35.964        "r_mbytes_per_sec": 0,
00:16:35.964        "w_mbytes_per_sec": 0
00:16:35.964      },
00:16:35.964      "claimed": true,
00:16:35.964      "claim_type": "exclusive_write",
00:16:35.964      "zoned": false,
00:16:35.964      "supported_io_types": {
00:16:35.964        "read": true,
00:16:35.964        "write": true,
00:16:35.964        "unmap": true,
00:16:35.964        "write_zeroes": true,
00:16:35.964        "flush": true,
00:16:35.964        "reset": true,
00:16:35.964        "compare": false,
00:16:35.964        "compare_and_write": false,
00:16:35.964        "abort": true,
00:16:35.964        "nvme_admin": false,
00:16:35.964        "nvme_io": false
00:16:35.964      },
00:16:35.964      "memory_domains": [
00:16:35.964        {
00:16:35.964          "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:16:35.964          "dma_device_type": 2
00:16:35.964        }
00:16:35.964      ],
00:16:35.964      "driver_specific": {}
00:16:35.964    }
00:16:35.964  ]
00:16:35.964   17:00:28	-- common/autotest_common.sh@905 -- # return 0
00:16:35.964   17:00:28	-- bdev/bdev_raid.sh@254 -- # (( i++ ))
00:16:35.964   17:00:28	-- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs ))
00:16:35.964   17:00:28	-- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4
00:16:35.964   17:00:28	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:16:35.964   17:00:28	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:16:35.964   17:00:28	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid0
00:16:35.964   17:00:28	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:16:35.964   17:00:28	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:16:35.964   17:00:28	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:16:35.964   17:00:28	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:16:35.964   17:00:28	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:16:35.964   17:00:28	-- bdev/bdev_raid.sh@125 -- # local tmp
00:16:35.964    17:00:28	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:16:35.964    17:00:28	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:16:36.222   17:00:29	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:16:36.222    "name": "Existed_Raid",
00:16:36.222    "uuid": "6bea8053-6f49-4305-bea0-0ccb55bfe8c5",
00:16:36.222    "strip_size_kb": 64,
00:16:36.222    "state": "online",
00:16:36.222    "raid_level": "raid0",
00:16:36.222    "superblock": false,
00:16:36.222    "num_base_bdevs": 4,
00:16:36.222    "num_base_bdevs_discovered": 4,
00:16:36.222    "num_base_bdevs_operational": 4,
00:16:36.222    "base_bdevs_list": [
00:16:36.222      {
00:16:36.222        "name": "BaseBdev1",
00:16:36.222        "uuid": "fbb9e12e-9ef6-4e78-a47d-fb000212bdac",
00:16:36.222        "is_configured": true,
00:16:36.222        "data_offset": 0,
00:16:36.222        "data_size": 65536
00:16:36.222      },
00:16:36.222      {
00:16:36.222        "name": "BaseBdev2",
00:16:36.222        "uuid": "a6a76a9d-7a48-4ac5-ad94-4a751e04635d",
00:16:36.222        "is_configured": true,
00:16:36.222        "data_offset": 0,
00:16:36.222        "data_size": 65536
00:16:36.222      },
00:16:36.222      {
00:16:36.222        "name": "BaseBdev3",
00:16:36.222        "uuid": "c372af57-6ef1-4f67-bf1c-7235308a774a",
00:16:36.222        "is_configured": true,
00:16:36.222        "data_offset": 0,
00:16:36.222        "data_size": 65536
00:16:36.222      },
00:16:36.222      {
00:16:36.222        "name": "BaseBdev4",
00:16:36.222        "uuid": "7691cfc4-3462-4c8b-9a46-6ab8284c8388",
00:16:36.222        "is_configured": true,
00:16:36.222        "data_offset": 0,
00:16:36.222        "data_size": 65536
00:16:36.222      }
00:16:36.222    ]
00:16:36.222  }'
00:16:36.222   17:00:29	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:16:36.222   17:00:29	-- common/autotest_common.sh@10 -- # set +x
00:16:36.805   17:00:29	-- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1
00:16:37.064  [2024-11-19 17:00:29.725593] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:16:37.064  [2024-11-19 17:00:29.725795] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:16:37.064  [2024-11-19 17:00:29.725987] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:16:37.064   17:00:29	-- bdev/bdev_raid.sh@263 -- # local expected_state
00:16:37.064   17:00:29	-- bdev/bdev_raid.sh@264 -- # has_redundancy raid0
00:16:37.064   17:00:29	-- bdev/bdev_raid.sh@195 -- # case $1 in
00:16:37.064   17:00:29	-- bdev/bdev_raid.sh@197 -- # return 1
00:16:37.064   17:00:29	-- bdev/bdev_raid.sh@265 -- # expected_state=offline
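has_redundancy returned 1 for raid0, so deleting BaseBdev1 is expected to take the whole array offline rather than merely degrade it. The helper is essentially a case statement over the raid level; a sketch consistent with the @195/@197 trace above (the exact set of levels in the real helper may differ):

    # raid levels that tolerate losing a member keep the array online;
    # raid0 and concat do not, hence expected_state=offline here
    has_redundancy() {
        case $1 in
            raid1) return 0 ;;
            *) return 1 ;;
        esac
    }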
00:16:37.064   17:00:29	-- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3
00:16:37.064   17:00:29	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:16:37.064   17:00:29	-- bdev/bdev_raid.sh@118 -- # local expected_state=offline
00:16:37.064   17:00:29	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid0
00:16:37.064   17:00:29	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:16:37.064   17:00:29	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:16:37.064   17:00:29	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:16:37.064   17:00:29	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:16:37.064   17:00:29	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:16:37.064   17:00:29	-- bdev/bdev_raid.sh@125 -- # local tmp
00:16:37.064    17:00:29	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:16:37.064    17:00:29	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:16:37.323   17:00:29	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:16:37.323    "name": "Existed_Raid",
00:16:37.323    "uuid": "6bea8053-6f49-4305-bea0-0ccb55bfe8c5",
00:16:37.323    "strip_size_kb": 64,
00:16:37.323    "state": "offline",
00:16:37.323    "raid_level": "raid0",
00:16:37.323    "superblock": false,
00:16:37.323    "num_base_bdevs": 4,
00:16:37.323    "num_base_bdevs_discovered": 3,
00:16:37.323    "num_base_bdevs_operational": 3,
00:16:37.323    "base_bdevs_list": [
00:16:37.323      {
00:16:37.323        "name": null,
00:16:37.323        "uuid": "00000000-0000-0000-0000-000000000000",
00:16:37.323        "is_configured": false,
00:16:37.323        "data_offset": 0,
00:16:37.323        "data_size": 65536
00:16:37.323      },
00:16:37.323      {
00:16:37.323        "name": "BaseBdev2",
00:16:37.323        "uuid": "a6a76a9d-7a48-4ac5-ad94-4a751e04635d",
00:16:37.323        "is_configured": true,
00:16:37.323        "data_offset": 0,
00:16:37.323        "data_size": 65536
00:16:37.323      },
00:16:37.323      {
00:16:37.323        "name": "BaseBdev3",
00:16:37.323        "uuid": "c372af57-6ef1-4f67-bf1c-7235308a774a",
00:16:37.323        "is_configured": true,
00:16:37.323        "data_offset": 0,
00:16:37.323        "data_size": 65536
00:16:37.323      },
00:16:37.323      {
00:16:37.323        "name": "BaseBdev4",
00:16:37.323        "uuid": "7691cfc4-3462-4c8b-9a46-6ab8284c8388",
00:16:37.323        "is_configured": true,
00:16:37.323        "data_offset": 0,
00:16:37.323        "data_size": 65536
00:16:37.323      }
00:16:37.323    ]
00:16:37.323  }'
00:16:37.323   17:00:29	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:16:37.323   17:00:29	-- common/autotest_common.sh@10 -- # set +x
00:16:37.892   17:00:30	-- bdev/bdev_raid.sh@273 -- # (( i = 1 ))
00:16:37.892   17:00:30	-- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs ))
00:16:37.892    17:00:30	-- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:16:37.892    17:00:30	-- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]'
00:16:38.151   17:00:30	-- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid
00:16:38.151   17:00:30	-- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:16:38.151   17:00:30	-- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2
00:16:38.151  [2024-11-19 17:00:30.951119] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:16:38.151   17:00:30	-- bdev/bdev_raid.sh@273 -- # (( i++ ))
00:16:38.151   17:00:30	-- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs ))
00:16:38.151    17:00:30	-- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]'
00:16:38.151    17:00:30	-- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:16:38.409   17:00:31	-- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid
00:16:38.409   17:00:31	-- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:16:38.409   17:00:31	-- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3
00:16:38.668  [2024-11-19 17:00:31.472059] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:16:38.668   17:00:31	-- bdev/bdev_raid.sh@273 -- # (( i++ ))
00:16:38.668   17:00:31	-- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs ))
00:16:38.668    17:00:31	-- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:16:38.668    17:00:31	-- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]'
00:16:38.927   17:00:31	-- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid
00:16:38.927   17:00:31	-- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:16:38.927   17:00:31	-- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4
00:16:39.186  [2024-11-19 17:00:31.864776] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4
00:16:39.186  [2024-11-19 17:00:31.865087] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006080 name Existed_Raid, state offline
00:16:39.186   17:00:31	-- bdev/bdev_raid.sh@273 -- # (( i++ ))
00:16:39.186   17:00:31	-- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs ))
00:16:39.186    17:00:31	-- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:16:39.186    17:00:31	-- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)'
00:16:39.445   17:00:32	-- bdev/bdev_raid.sh@281 -- # raid_bdev=
00:16:39.445   17:00:32	-- bdev/bdev_raid.sh@282 -- # '[' -n '' ']'
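The loop that just finished checked, before each removal, that the raid bdev still answered to its name, then deleted one more member; once the last member is gone, the select(.) filter at @281 maps the empty result to nothing, raid_bdev ends up empty, and the '[' -n '' ']' test above confirms the array no longer exists. Condensed:

    # sketch of the @273..@282 teardown: a raid0 with no members must vanish
    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    for bdev in BaseBdev2 BaseBdev3 BaseBdev4; do
        raid_bdev=$($rpc bdev_raid_get_bdevs all | jq -r '.[0]["name"]')
        [ "$raid_bdev" = Existed_Raid ]        # array must still exist here
        $rpc bdev_malloc_delete "$bdev"
    done
    raid_bdev=$($rpc bdev_raid_get_bdevs all | jq -r '.[0]["name"] | select(.)')
    [ -z "$raid_bdev" ]                        # gone once the last member is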
00:16:39.445   17:00:32	-- bdev/bdev_raid.sh@287 -- # killprocess 128767
00:16:39.445   17:00:32	-- common/autotest_common.sh@936 -- # '[' -z 128767 ']'
00:16:39.445   17:00:32	-- common/autotest_common.sh@940 -- # kill -0 128767
00:16:39.445    17:00:32	-- common/autotest_common.sh@941 -- # uname
00:16:39.445   17:00:32	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:16:39.445    17:00:32	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 128767
00:16:39.445   17:00:32	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:16:39.445   17:00:32	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:16:39.445   17:00:32	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 128767'
00:16:39.445  killing process with pid 128767
00:16:39.445   17:00:32	-- common/autotest_common.sh@955 -- # kill 128767
00:16:39.445   17:00:32	-- common/autotest_common.sh@960 -- # wait 128767
00:16:39.445  [2024-11-19 17:00:32.111776] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:16:39.445  [2024-11-19 17:00:32.111923] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:16:39.704   17:00:32	-- bdev/bdev_raid.sh@289 -- # return 0
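killprocess guards the kill with several sanity checks before signalling: the pid must be set, the OS must be Linux, and ps must report the expected process name (reactor_0, SPDK's primary reactor thread) rather than something like sudo. A condensed sketch of what the @936..@960 trace performs:

    # assumed condensation of killprocess: never signal a pid we cannot
    # positively identify as the app this test started
    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1
        [ "$(ps --no-headers -o comm= "$pid")" != sudo ] || return 1
        kill "$pid"
        wait "$pid" || true      # reap it; a killed app may exit non-zero
    }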
00:16:39.704  
00:16:39.704  real	0m12.597s
00:16:39.704  user	0m22.422s
00:16:39.704  sys	0m2.254s
00:16:39.704   17:00:32	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:16:39.705   17:00:32	-- common/autotest_common.sh@10 -- # set +x
00:16:39.705  ************************************
00:16:39.705  END TEST raid_state_function_test
00:16:39.705  ************************************
00:16:39.964   17:00:32	-- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 4 true
00:16:39.964   17:00:32	-- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']'
00:16:39.964   17:00:32	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:16:39.964   17:00:32	-- common/autotest_common.sh@10 -- # set +x
00:16:39.964  ************************************
00:16:39.964  START TEST raid_state_function_test_sb
00:16:39.964  ************************************
00:16:39.964   17:00:32	-- common/autotest_common.sh@1114 -- # raid_state_function_test raid0 4 true
00:16:39.964   17:00:32	-- bdev/bdev_raid.sh@202 -- # local raid_level=raid0
00:16:39.964   17:00:32	-- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4
00:16:39.964   17:00:32	-- bdev/bdev_raid.sh@204 -- # local superblock=true
00:16:39.964   17:00:32	-- bdev/bdev_raid.sh@205 -- # local raid_bdev
00:16:39.964    17:00:32	-- bdev/bdev_raid.sh@206 -- # (( i = 1 ))
00:16:39.964    17:00:32	-- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:16:39.964    17:00:32	-- bdev/bdev_raid.sh@206 -- # echo BaseBdev1
00:16:39.964    17:00:32	-- bdev/bdev_raid.sh@206 -- # (( i++ ))
00:16:39.964    17:00:32	-- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:16:39.964    17:00:32	-- bdev/bdev_raid.sh@206 -- # echo BaseBdev2
00:16:39.964    17:00:32	-- bdev/bdev_raid.sh@206 -- # (( i++ ))
00:16:39.964    17:00:32	-- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:16:39.964    17:00:32	-- bdev/bdev_raid.sh@206 -- # echo BaseBdev3
00:16:39.964    17:00:32	-- bdev/bdev_raid.sh@206 -- # (( i++ ))
00:16:39.964    17:00:32	-- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:16:39.964    17:00:32	-- bdev/bdev_raid.sh@206 -- # echo BaseBdev4
00:16:39.964    17:00:32	-- bdev/bdev_raid.sh@206 -- # (( i++ ))
00:16:39.964    17:00:32	-- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:16:39.964   17:00:32	-- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4')
00:16:39.964   17:00:32	-- bdev/bdev_raid.sh@206 -- # local base_bdevs
00:16:39.964   17:00:32	-- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid
00:16:39.964   17:00:32	-- bdev/bdev_raid.sh@208 -- # local strip_size
00:16:39.964   17:00:32	-- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg
00:16:39.964   17:00:32	-- bdev/bdev_raid.sh@210 -- # local superblock_create_arg
00:16:39.964   17:00:32	-- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']'
00:16:39.964   17:00:32	-- bdev/bdev_raid.sh@213 -- # strip_size=64
00:16:39.964   17:00:32	-- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64'
00:16:39.964   17:00:32	-- bdev/bdev_raid.sh@219 -- # '[' true = true ']'
00:16:39.964   17:00:32	-- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s
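The only difference from the previous run is superblock=true, so superblock_create_arg becomes -s and every bdev_raid_create in this test carries it. With -s the raid module writes on-disk metadata into each base bdev, which is what allows an array to be re-examined and reassembled later:

    # superblock variant of the create call used throughout this test
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        bdev_raid_create -z 64 -s -r raid0 \
        -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid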
00:16:39.964   17:00:32	-- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid
00:16:39.964   17:00:32	-- bdev/bdev_raid.sh@226 -- # raid_pid=129190
00:16:39.964   17:00:32	-- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 129190'
00:16:39.964  Process raid pid: 129190
00:16:39.964   17:00:32	-- bdev/bdev_raid.sh@228 -- # waitforlisten 129190 /var/tmp/spdk-raid.sock
00:16:39.964   17:00:32	-- common/autotest_common.sh@829 -- # '[' -z 129190 ']'
00:16:39.964   17:00:32	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock
00:16:39.964   17:00:32	-- common/autotest_common.sh@834 -- # local max_retries=100
00:16:39.964   17:00:32	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...'
00:16:39.964  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...
00:16:39.964   17:00:32	-- common/autotest_common.sh@838 -- # xtrace_disable
00:16:39.964   17:00:32	-- common/autotest_common.sh@10 -- # set +x
00:16:39.964  [2024-11-19 17:00:32.670489] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:16:39.964  [2024-11-19 17:00:32.671057] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:16:40.223  [2024-11-19 17:00:32.832103] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:16:40.223  [2024-11-19 17:00:32.886194] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:16:40.223  [2024-11-19 17:00:32.934140] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:16:40.823   17:00:33	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:16:40.823   17:00:33	-- common/autotest_common.sh@862 -- # return 0
00:16:40.823   17:00:33	-- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
00:16:41.095  [2024-11-19 17:00:33.698524] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:16:41.095  [2024-11-19 17:00:33.698864] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:16:41.095  [2024-11-19 17:00:33.698959] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:16:41.095  [2024-11-19 17:00:33.699070] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:16:41.095  [2024-11-19 17:00:33.699144] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:16:41.095  [2024-11-19 17:00:33.699220] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:16:41.095  [2024-11-19 17:00:33.699317] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:16:41.095  [2024-11-19 17:00:33.699373] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:16:41.095   17:00:33	-- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4
00:16:41.095   17:00:33	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:16:41.095   17:00:33	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:16:41.095   17:00:33	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid0
00:16:41.095   17:00:33	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:16:41.095   17:00:33	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:16:41.095   17:00:33	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:16:41.095   17:00:33	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:16:41.095   17:00:33	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:16:41.095   17:00:33	-- bdev/bdev_raid.sh@125 -- # local tmp
00:16:41.095    17:00:33	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:16:41.095    17:00:33	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:16:41.354   17:00:33	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:16:41.354    "name": "Existed_Raid",
00:16:41.354    "uuid": "6528f845-1899-4e38-9b09-992297c68330",
00:16:41.354    "strip_size_kb": 64,
00:16:41.354    "state": "configuring",
00:16:41.354    "raid_level": "raid0",
00:16:41.354    "superblock": true,
00:16:41.354    "num_base_bdevs": 4,
00:16:41.354    "num_base_bdevs_discovered": 0,
00:16:41.354    "num_base_bdevs_operational": 4,
00:16:41.354    "base_bdevs_list": [
00:16:41.354      {
00:16:41.354        "name": "BaseBdev1",
00:16:41.354        "uuid": "00000000-0000-0000-0000-000000000000",
00:16:41.354        "is_configured": false,
00:16:41.354        "data_offset": 0,
00:16:41.354        "data_size": 0
00:16:41.354      },
00:16:41.354      {
00:16:41.354        "name": "BaseBdev2",
00:16:41.354        "uuid": "00000000-0000-0000-0000-000000000000",
00:16:41.354        "is_configured": false,
00:16:41.354        "data_offset": 0,
00:16:41.354        "data_size": 0
00:16:41.354      },
00:16:41.354      {
00:16:41.354        "name": "BaseBdev3",
00:16:41.354        "uuid": "00000000-0000-0000-0000-000000000000",
00:16:41.354        "is_configured": false,
00:16:41.354        "data_offset": 0,
00:16:41.354        "data_size": 0
00:16:41.354      },
00:16:41.354      {
00:16:41.354        "name": "BaseBdev4",
00:16:41.354        "uuid": "00000000-0000-0000-0000-000000000000",
00:16:41.354        "is_configured": false,
00:16:41.354        "data_offset": 0,
00:16:41.354        "data_size": 0
00:16:41.354      }
00:16:41.354    ]
00:16:41.354  }'
00:16:41.354   17:00:33	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:16:41.354   17:00:33	-- common/autotest_common.sh@10 -- # set +x
00:16:41.923   17:00:34	-- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid
00:16:41.923  [2024-11-19 17:00:34.654523] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:16:41.923  [2024-11-19 17:00:34.654747] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005480 name Existed_Raid, state configuring
00:16:41.923   17:00:34	-- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
00:16:42.182  [2024-11-19 17:00:34.910673] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:16:42.182  [2024-11-19 17:00:34.910984] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:16:42.182  [2024-11-19 17:00:34.911109] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:16:42.182  [2024-11-19 17:00:34.911171] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:16:42.182  [2024-11-19 17:00:34.911247] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:16:42.182  [2024-11-19 17:00:34.911294] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:16:42.182  [2024-11-19 17:00:34.911321] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:16:42.182  [2024-11-19 17:00:34.911411] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:16:42.182   17:00:34	-- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1
00:16:42.441  [2024-11-19 17:00:35.152183] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:16:42.441  BaseBdev1
00:16:42.441   17:00:35	-- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1
00:16:42.441   17:00:35	-- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1
00:16:42.441   17:00:35	-- common/autotest_common.sh@898 -- # local bdev_timeout=
00:16:42.441   17:00:35	-- common/autotest_common.sh@899 -- # local i
00:16:42.441   17:00:35	-- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:16:42.441   17:00:35	-- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:16:42.442   17:00:35	-- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:16:42.701   17:00:35	-- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000
00:16:42.701  [
00:16:42.701    {
00:16:42.701      "name": "BaseBdev1",
00:16:42.701      "aliases": [
00:16:42.701        "e1539da0-dc02-4913-9a76-c8e9b75b551a"
00:16:42.701      ],
00:16:42.701      "product_name": "Malloc disk",
00:16:42.701      "block_size": 512,
00:16:42.701      "num_blocks": 65536,
00:16:42.701      "uuid": "e1539da0-dc02-4913-9a76-c8e9b75b551a",
00:16:42.701      "assigned_rate_limits": {
00:16:42.701        "rw_ios_per_sec": 0,
00:16:42.701        "rw_mbytes_per_sec": 0,
00:16:42.701        "r_mbytes_per_sec": 0,
00:16:42.701        "w_mbytes_per_sec": 0
00:16:42.701      },
00:16:42.701      "claimed": true,
00:16:42.701      "claim_type": "exclusive_write",
00:16:42.701      "zoned": false,
00:16:42.701      "supported_io_types": {
00:16:42.701        "read": true,
00:16:42.701        "write": true,
00:16:42.701        "unmap": true,
00:16:42.701        "write_zeroes": true,
00:16:42.701        "flush": true,
00:16:42.701        "reset": true,
00:16:42.701        "compare": false,
00:16:42.701        "compare_and_write": false,
00:16:42.701        "abort": true,
00:16:42.701        "nvme_admin": false,
00:16:42.701        "nvme_io": false
00:16:42.701      },
00:16:42.701      "memory_domains": [
00:16:42.701        {
00:16:42.701          "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:16:42.701          "dma_device_type": 2
00:16:42.701        }
00:16:42.701      ],
00:16:42.701      "driver_specific": {}
00:16:42.701    }
00:16:42.701  ]
00:16:42.960   17:00:35	-- common/autotest_common.sh@905 -- # return 0
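The xtrace from autotest_common.sh@897-905 above is the waitforbdev helper: it defaults the timeout to 2000 ms, waits for bdev examine to finish, then polls bdev_get_bdevs with that timeout. A condensed sketch of the same pattern (here and below, rpc.py abbreviates the full /home/vagrant/spdk_repo/spdk/scripts/rpc.py path; judging by the `local i` at @899, the in-tree helper also wraps this in a retry loop):

    waitforbdev() {
        local bdev_name=$1
        local bdev_timeout=${2:-2000}   # ms; the default seen at @900 above
        rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
        rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b "$bdev_name" -t "$bdev_timeout"
    }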
00:16:42.960   17:00:35	-- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4
00:16:42.960   17:00:35	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:16:42.960   17:00:35	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:16:42.960   17:00:35	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid0
00:16:42.960   17:00:35	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:16:42.960   17:00:35	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:16:42.960   17:00:35	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:16:42.960   17:00:35	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:16:42.960   17:00:35	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:16:42.960   17:00:35	-- bdev/bdev_raid.sh@125 -- # local tmp
00:16:42.960    17:00:35	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:16:42.960    17:00:35	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:16:42.960   17:00:35	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:16:42.960    "name": "Existed_Raid",
00:16:42.960    "uuid": "f8bb08e0-6937-41d4-933d-6b98b5e4a251",
00:16:42.960    "strip_size_kb": 64,
00:16:42.960    "state": "configuring",
00:16:42.960    "raid_level": "raid0",
00:16:42.960    "superblock": true,
00:16:42.960    "num_base_bdevs": 4,
00:16:42.960    "num_base_bdevs_discovered": 1,
00:16:42.960    "num_base_bdevs_operational": 4,
00:16:42.960    "base_bdevs_list": [
00:16:42.960      {
00:16:42.960        "name": "BaseBdev1",
00:16:42.960        "uuid": "e1539da0-dc02-4913-9a76-c8e9b75b551a",
00:16:42.960        "is_configured": true,
00:16:42.960        "data_offset": 2048,
00:16:42.960        "data_size": 63488
00:16:42.960      },
00:16:42.960      {
00:16:42.960        "name": "BaseBdev2",
00:16:42.960        "uuid": "00000000-0000-0000-0000-000000000000",
00:16:42.960        "is_configured": false,
00:16:42.960        "data_offset": 0,
00:16:42.960        "data_size": 0
00:16:42.960      },
00:16:42.960      {
00:16:42.960        "name": "BaseBdev3",
00:16:42.960        "uuid": "00000000-0000-0000-0000-000000000000",
00:16:42.960        "is_configured": false,
00:16:42.960        "data_offset": 0,
00:16:42.960        "data_size": 0
00:16:42.960      },
00:16:42.960      {
00:16:42.960        "name": "BaseBdev4",
00:16:42.960        "uuid": "00000000-0000-0000-0000-000000000000",
00:16:42.960        "is_configured": false,
00:16:42.960        "data_offset": 0,
00:16:42.960        "data_size": 0
00:16:42.960      }
00:16:42.960    ]
00:16:42.960  }'
00:16:42.960   17:00:35	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:16:42.960   17:00:35	-- common/autotest_common.sh@10 -- # set +x
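verify_raid_bdev_state (bdev_raid.sh@117-129) fetches the raid bdev's JSON with bdev_raid_get_bdevs, filters it with jq, and compares the fields against the expected values passed in, here configuring/raid0/64/4. The core of that check, sketched with the field names from the dump above (the real helper validates more fields than this):

    tmp=$(rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
          | jq -r '.[] | select(.name == "Existed_Raid")')
    [[ $(jq -r .state <<<"$tmp") == configuring ]]
    [[ $(jq -r .num_base_bdevs_discovered <<<"$tmp") -eq 1 ]]   # only BaseBdev1 so far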
00:16:43.528   17:00:36	-- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid
00:16:43.787  [2024-11-19 17:00:36.512488] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:16:43.787  [2024-11-19 17:00:36.512810] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005780 name Existed_Raid, state configuring
00:16:43.787   17:00:36	-- bdev/bdev_raid.sh@244 -- # '[' true = true ']'
00:16:43.787   17:00:36	-- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1
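With one member attached, the test tears everything down and rebuilds it, exercising cleanup of a raid bdev that never left the configuring state. The cycle in sketch form:

    rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid     # frees the configuring raid
    rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1
    # 32 MiB with 512-byte blocks -> the 65536-block malloc disk seen in the dumps
    rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1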
00:16:44.046   17:00:36	-- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1
00:16:44.304  BaseBdev1
00:16:44.304   17:00:36	-- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1
00:16:44.304   17:00:36	-- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1
00:16:44.304   17:00:36	-- common/autotest_common.sh@898 -- # local bdev_timeout=
00:16:44.304   17:00:36	-- common/autotest_common.sh@899 -- # local i
00:16:44.304   17:00:36	-- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:16:44.304   17:00:36	-- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:16:44.304   17:00:36	-- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:16:44.562   17:00:37	-- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000
00:16:44.562  [
00:16:44.562    {
00:16:44.562      "name": "BaseBdev1",
00:16:44.562      "aliases": [
00:16:44.562        "1fb349b8-ed44-40b7-9bb2-cd46aa0603ed"
00:16:44.562      ],
00:16:44.562      "product_name": "Malloc disk",
00:16:44.562      "block_size": 512,
00:16:44.562      "num_blocks": 65536,
00:16:44.562      "uuid": "1fb349b8-ed44-40b7-9bb2-cd46aa0603ed",
00:16:44.562      "assigned_rate_limits": {
00:16:44.562        "rw_ios_per_sec": 0,
00:16:44.562        "rw_mbytes_per_sec": 0,
00:16:44.562        "r_mbytes_per_sec": 0,
00:16:44.562        "w_mbytes_per_sec": 0
00:16:44.562      },
00:16:44.562      "claimed": false,
00:16:44.562      "zoned": false,
00:16:44.562      "supported_io_types": {
00:16:44.562        "read": true,
00:16:44.562        "write": true,
00:16:44.562        "unmap": true,
00:16:44.562        "write_zeroes": true,
00:16:44.562        "flush": true,
00:16:44.563        "reset": true,
00:16:44.563        "compare": false,
00:16:44.563        "compare_and_write": false,
00:16:44.563        "abort": true,
00:16:44.563        "nvme_admin": false,
00:16:44.563        "nvme_io": false
00:16:44.563      },
00:16:44.563      "memory_domains": [
00:16:44.563        {
00:16:44.563          "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:16:44.563          "dma_device_type": 2
00:16:44.563        }
00:16:44.563      ],
00:16:44.563      "driver_specific": {}
00:16:44.563    }
00:16:44.563  ]
00:16:44.563   17:00:37	-- common/autotest_common.sh@905 -- # return 0
00:16:44.563   17:00:37	-- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
00:16:44.821  [2024-11-19 17:00:37.518174] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:16:44.821  [2024-11-19 17:00:37.520756] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:16:44.821  [2024-11-19 17:00:37.521007] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:16:44.821  [2024-11-19 17:00:37.521160] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:16:44.821  [2024-11-19 17:00:37.521229] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:16:44.821  [2024-11-19 17:00:37.521263] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:16:44.821  [2024-11-19 17:00:37.521306] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:16:44.821   17:00:37	-- bdev/bdev_raid.sh@254 -- # (( i = 1 ))
00:16:44.821   17:00:37	-- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs ))
00:16:44.821   17:00:37	-- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4
00:16:44.822   17:00:37	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:16:44.822   17:00:37	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:16:44.822   17:00:37	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid0
00:16:44.822   17:00:37	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:16:44.822   17:00:37	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:16:44.822   17:00:37	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:16:44.822   17:00:37	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:16:44.822   17:00:37	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:16:44.822   17:00:37	-- bdev/bdev_raid.sh@125 -- # local tmp
00:16:44.822    17:00:37	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:16:44.822    17:00:37	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:16:45.081   17:00:37	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:16:45.081    "name": "Existed_Raid",
00:16:45.081    "uuid": "9007b1ea-8037-45bf-befc-65188ffae52d",
00:16:45.081    "strip_size_kb": 64,
00:16:45.081    "state": "configuring",
00:16:45.081    "raid_level": "raid0",
00:16:45.081    "superblock": true,
00:16:45.081    "num_base_bdevs": 4,
00:16:45.081    "num_base_bdevs_discovered": 1,
00:16:45.081    "num_base_bdevs_operational": 4,
00:16:45.081    "base_bdevs_list": [
00:16:45.081      {
00:16:45.081        "name": "BaseBdev1",
00:16:45.081        "uuid": "1fb349b8-ed44-40b7-9bb2-cd46aa0603ed",
00:16:45.081        "is_configured": true,
00:16:45.081        "data_offset": 2048,
00:16:45.081        "data_size": 63488
00:16:45.081      },
00:16:45.081      {
00:16:45.081        "name": "BaseBdev2",
00:16:45.081        "uuid": "00000000-0000-0000-0000-000000000000",
00:16:45.081        "is_configured": false,
00:16:45.081        "data_offset": 0,
00:16:45.081        "data_size": 0
00:16:45.081      },
00:16:45.081      {
00:16:45.081        "name": "BaseBdev3",
00:16:45.081        "uuid": "00000000-0000-0000-0000-000000000000",
00:16:45.081        "is_configured": false,
00:16:45.081        "data_offset": 0,
00:16:45.081        "data_size": 0
00:16:45.081      },
00:16:45.081      {
00:16:45.081        "name": "BaseBdev4",
00:16:45.081        "uuid": "00000000-0000-0000-0000-000000000000",
00:16:45.081        "is_configured": false,
00:16:45.081        "data_offset": 0,
00:16:45.081        "data_size": 0
00:16:45.081      }
00:16:45.081    ]
00:16:45.081  }'
00:16:45.081   17:00:37	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:16:45.081   17:00:37	-- common/autotest_common.sh@10 -- # set +x
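From here the @254 loop adds the remaining members one at a time, re-running the state check after each; num_base_bdevs_discovered climbs from 1 to 4 while the state stays "configuring". The loop, reconstructed from the xtrace (i starts at 1 because BaseBdev1 already exists; num_base_bdevs is 4 in this test):

    for (( i = 1; i < num_base_bdevs; i++ )); do
        rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b "BaseBdev$((i + 1))"
        waitforbdev "BaseBdev$((i + 1))"
        verify_raid_bdev_state Existed_Raid configuring raid0 64 4
    done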
00:16:45.649   17:00:38	-- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2
00:16:45.649  [2024-11-19 17:00:38.426323] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:16:45.649  BaseBdev2
00:16:45.649   17:00:38	-- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2
00:16:45.649   17:00:38	-- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2
00:16:45.649   17:00:38	-- common/autotest_common.sh@898 -- # local bdev_timeout=
00:16:45.649   17:00:38	-- common/autotest_common.sh@899 -- # local i
00:16:45.649   17:00:38	-- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:16:45.649   17:00:38	-- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:16:45.649   17:00:38	-- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:16:45.907   17:00:38	-- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000
00:16:46.166  [
00:16:46.166    {
00:16:46.166      "name": "BaseBdev2",
00:16:46.166      "aliases": [
00:16:46.166        "1f0b8083-b9b1-4513-83fb-94cf6f8461ab"
00:16:46.166      ],
00:16:46.166      "product_name": "Malloc disk",
00:16:46.166      "block_size": 512,
00:16:46.166      "num_blocks": 65536,
00:16:46.166      "uuid": "1f0b8083-b9b1-4513-83fb-94cf6f8461ab",
00:16:46.166      "assigned_rate_limits": {
00:16:46.166        "rw_ios_per_sec": 0,
00:16:46.166        "rw_mbytes_per_sec": 0,
00:16:46.166        "r_mbytes_per_sec": 0,
00:16:46.166        "w_mbytes_per_sec": 0
00:16:46.166      },
00:16:46.166      "claimed": true,
00:16:46.166      "claim_type": "exclusive_write",
00:16:46.166      "zoned": false,
00:16:46.166      "supported_io_types": {
00:16:46.166        "read": true,
00:16:46.166        "write": true,
00:16:46.166        "unmap": true,
00:16:46.166        "write_zeroes": true,
00:16:46.166        "flush": true,
00:16:46.166        "reset": true,
00:16:46.166        "compare": false,
00:16:46.166        "compare_and_write": false,
00:16:46.166        "abort": true,
00:16:46.166        "nvme_admin": false,
00:16:46.166        "nvme_io": false
00:16:46.166      },
00:16:46.166      "memory_domains": [
00:16:46.166        {
00:16:46.166          "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:16:46.166          "dma_device_type": 2
00:16:46.166        }
00:16:46.166      ],
00:16:46.166      "driver_specific": {}
00:16:46.166    }
00:16:46.166  ]
00:16:46.166   17:00:38	-- common/autotest_common.sh@905 -- # return 0
00:16:46.166   17:00:38	-- bdev/bdev_raid.sh@254 -- # (( i++ ))
00:16:46.166   17:00:38	-- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs ))
00:16:46.166   17:00:38	-- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4
00:16:46.166   17:00:38	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:16:46.166   17:00:38	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:16:46.166   17:00:38	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid0
00:16:46.166   17:00:38	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:16:46.166   17:00:38	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:16:46.166   17:00:38	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:16:46.166   17:00:38	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:16:46.166   17:00:38	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:16:46.166   17:00:38	-- bdev/bdev_raid.sh@125 -- # local tmp
00:16:46.166    17:00:38	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:16:46.166    17:00:38	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:16:46.426   17:00:39	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:16:46.426    "name": "Existed_Raid",
00:16:46.426    "uuid": "9007b1ea-8037-45bf-befc-65188ffae52d",
00:16:46.426    "strip_size_kb": 64,
00:16:46.426    "state": "configuring",
00:16:46.426    "raid_level": "raid0",
00:16:46.426    "superblock": true,
00:16:46.426    "num_base_bdevs": 4,
00:16:46.426    "num_base_bdevs_discovered": 2,
00:16:46.426    "num_base_bdevs_operational": 4,
00:16:46.426    "base_bdevs_list": [
00:16:46.426      {
00:16:46.426        "name": "BaseBdev1",
00:16:46.426        "uuid": "1fb349b8-ed44-40b7-9bb2-cd46aa0603ed",
00:16:46.426        "is_configured": true,
00:16:46.426        "data_offset": 2048,
00:16:46.426        "data_size": 63488
00:16:46.426      },
00:16:46.426      {
00:16:46.426        "name": "BaseBdev2",
00:16:46.426        "uuid": "1f0b8083-b9b1-4513-83fb-94cf6f8461ab",
00:16:46.426        "is_configured": true,
00:16:46.426        "data_offset": 2048,
00:16:46.426        "data_size": 63488
00:16:46.426      },
00:16:46.426      {
00:16:46.426        "name": "BaseBdev3",
00:16:46.426        "uuid": "00000000-0000-0000-0000-000000000000",
00:16:46.426        "is_configured": false,
00:16:46.426        "data_offset": 0,
00:16:46.426        "data_size": 0
00:16:46.426      },
00:16:46.426      {
00:16:46.426        "name": "BaseBdev4",
00:16:46.426        "uuid": "00000000-0000-0000-0000-000000000000",
00:16:46.426        "is_configured": false,
00:16:46.426        "data_offset": 0,
00:16:46.426        "data_size": 0
00:16:46.426      }
00:16:46.426    ]
00:16:46.426  }'
00:16:46.426   17:00:39	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:16:46.426   17:00:39	-- common/autotest_common.sh@10 -- # set +x
00:16:46.994   17:00:39	-- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3
00:16:46.994  [2024-11-19 17:00:39.721581] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:16:46.994  BaseBdev3
00:16:46.994   17:00:39	-- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3
00:16:46.994   17:00:39	-- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3
00:16:46.994   17:00:39	-- common/autotest_common.sh@898 -- # local bdev_timeout=
00:16:46.994   17:00:39	-- common/autotest_common.sh@899 -- # local i
00:16:46.994   17:00:39	-- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:16:46.994   17:00:39	-- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:16:46.994   17:00:39	-- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:16:47.254   17:00:40	-- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000
00:16:47.513  [
00:16:47.513    {
00:16:47.513      "name": "BaseBdev3",
00:16:47.513      "aliases": [
00:16:47.513        "c4c5cab4-7201-4451-ab3b-7ee8f7d44d6d"
00:16:47.513      ],
00:16:47.513      "product_name": "Malloc disk",
00:16:47.513      "block_size": 512,
00:16:47.513      "num_blocks": 65536,
00:16:47.513      "uuid": "c4c5cab4-7201-4451-ab3b-7ee8f7d44d6d",
00:16:47.513      "assigned_rate_limits": {
00:16:47.513        "rw_ios_per_sec": 0,
00:16:47.513        "rw_mbytes_per_sec": 0,
00:16:47.513        "r_mbytes_per_sec": 0,
00:16:47.513        "w_mbytes_per_sec": 0
00:16:47.513      },
00:16:47.513      "claimed": true,
00:16:47.513      "claim_type": "exclusive_write",
00:16:47.513      "zoned": false,
00:16:47.513      "supported_io_types": {
00:16:47.513        "read": true,
00:16:47.513        "write": true,
00:16:47.513        "unmap": true,
00:16:47.513        "write_zeroes": true,
00:16:47.513        "flush": true,
00:16:47.513        "reset": true,
00:16:47.513        "compare": false,
00:16:47.513        "compare_and_write": false,
00:16:47.513        "abort": true,
00:16:47.513        "nvme_admin": false,
00:16:47.513        "nvme_io": false
00:16:47.513      },
00:16:47.513      "memory_domains": [
00:16:47.513        {
00:16:47.513          "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:16:47.513          "dma_device_type": 2
00:16:47.513        }
00:16:47.513      ],
00:16:47.513      "driver_specific": {}
00:16:47.513    }
00:16:47.513  ]
00:16:47.513   17:00:40	-- common/autotest_common.sh@905 -- # return 0
00:16:47.513   17:00:40	-- bdev/bdev_raid.sh@254 -- # (( i++ ))
00:16:47.513   17:00:40	-- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs ))
00:16:47.514   17:00:40	-- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4
00:16:47.514   17:00:40	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:16:47.514   17:00:40	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:16:47.514   17:00:40	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid0
00:16:47.514   17:00:40	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:16:47.514   17:00:40	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:16:47.514   17:00:40	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:16:47.514   17:00:40	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:16:47.514   17:00:40	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:16:47.514   17:00:40	-- bdev/bdev_raid.sh@125 -- # local tmp
00:16:47.514    17:00:40	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:16:47.514    17:00:40	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:16:47.773   17:00:40	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:16:47.773    "name": "Existed_Raid",
00:16:47.773    "uuid": "9007b1ea-8037-45bf-befc-65188ffae52d",
00:16:47.773    "strip_size_kb": 64,
00:16:47.773    "state": "configuring",
00:16:47.773    "raid_level": "raid0",
00:16:47.773    "superblock": true,
00:16:47.773    "num_base_bdevs": 4,
00:16:47.773    "num_base_bdevs_discovered": 3,
00:16:47.773    "num_base_bdevs_operational": 4,
00:16:47.773    "base_bdevs_list": [
00:16:47.773      {
00:16:47.773        "name": "BaseBdev1",
00:16:47.773        "uuid": "1fb349b8-ed44-40b7-9bb2-cd46aa0603ed",
00:16:47.773        "is_configured": true,
00:16:47.773        "data_offset": 2048,
00:16:47.773        "data_size": 63488
00:16:47.773      },
00:16:47.773      {
00:16:47.773        "name": "BaseBdev2",
00:16:47.773        "uuid": "1f0b8083-b9b1-4513-83fb-94cf6f8461ab",
00:16:47.773        "is_configured": true,
00:16:47.773        "data_offset": 2048,
00:16:47.773        "data_size": 63488
00:16:47.773      },
00:16:47.773      {
00:16:47.773        "name": "BaseBdev3",
00:16:47.773        "uuid": "c4c5cab4-7201-4451-ab3b-7ee8f7d44d6d",
00:16:47.773        "is_configured": true,
00:16:47.773        "data_offset": 2048,
00:16:47.773        "data_size": 63488
00:16:47.773      },
00:16:47.773      {
00:16:47.773        "name": "BaseBdev4",
00:16:47.773        "uuid": "00000000-0000-0000-0000-000000000000",
00:16:47.773        "is_configured": false,
00:16:47.773        "data_offset": 0,
00:16:47.773        "data_size": 0
00:16:47.773      }
00:16:47.773    ]
00:16:47.773  }'
00:16:47.773   17:00:40	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:16:47.773   17:00:40	-- common/autotest_common.sh@10 -- # set +x
00:16:48.341   17:00:41	-- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4
00:16:48.600  [2024-11-19 17:00:41.334486] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed
00:16:48.600  BaseBdev4
00:16:48.600  [2024-11-19 17:00:41.334899] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006680
00:16:48.600  [2024-11-19 17:00:41.335034] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512
00:16:48.600  [2024-11-19 17:00:41.335243] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000021f0
00:16:48.600  [2024-11-19 17:00:41.335772] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006680
00:16:48.600  [2024-11-19 17:00:41.335889] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006680
00:16:48.600  [2024-11-19 17:00:41.336129] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
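Claiming the fourth member completes configuration: the io device is registered and the array comes up with blockcnt 253952. That figure follows from the dumps above; each 65536-block member reserves 2048 blocks for the superblock (its data_offset), leaving 63488 data blocks, and raid0 stripes across all four:

    echo $(( 4 * (65536 - 2048) ))   # 4 members x 63488 data blocks = 253952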
00:16:48.600   17:00:41	-- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4
00:16:48.600   17:00:41	-- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4
00:16:48.600   17:00:41	-- common/autotest_common.sh@898 -- # local bdev_timeout=
00:16:48.600   17:00:41	-- common/autotest_common.sh@899 -- # local i
00:16:48.600   17:00:41	-- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:16:48.600   17:00:41	-- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:16:48.600   17:00:41	-- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:16:48.859   17:00:41	-- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000
00:16:49.117  [
00:16:49.117    {
00:16:49.117      "name": "BaseBdev4",
00:16:49.117      "aliases": [
00:16:49.117        "e96d481c-1901-44b3-8f0e-cf01e2e2d2af"
00:16:49.117      ],
00:16:49.117      "product_name": "Malloc disk",
00:16:49.117      "block_size": 512,
00:16:49.117      "num_blocks": 65536,
00:16:49.117      "uuid": "e96d481c-1901-44b3-8f0e-cf01e2e2d2af",
00:16:49.117      "assigned_rate_limits": {
00:16:49.117        "rw_ios_per_sec": 0,
00:16:49.117        "rw_mbytes_per_sec": 0,
00:16:49.117        "r_mbytes_per_sec": 0,
00:16:49.117        "w_mbytes_per_sec": 0
00:16:49.117      },
00:16:49.117      "claimed": true,
00:16:49.117      "claim_type": "exclusive_write",
00:16:49.117      "zoned": false,
00:16:49.117      "supported_io_types": {
00:16:49.117        "read": true,
00:16:49.117        "write": true,
00:16:49.117        "unmap": true,
00:16:49.117        "write_zeroes": true,
00:16:49.117        "flush": true,
00:16:49.117        "reset": true,
00:16:49.117        "compare": false,
00:16:49.117        "compare_and_write": false,
00:16:49.117        "abort": true,
00:16:49.117        "nvme_admin": false,
00:16:49.117        "nvme_io": false
00:16:49.117      },
00:16:49.117      "memory_domains": [
00:16:49.117        {
00:16:49.117          "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:16:49.117          "dma_device_type": 2
00:16:49.117        }
00:16:49.117      ],
00:16:49.117      "driver_specific": {}
00:16:49.117    }
00:16:49.117  ]
00:16:49.117   17:00:41	-- common/autotest_common.sh@905 -- # return 0
00:16:49.117   17:00:41	-- bdev/bdev_raid.sh@254 -- # (( i++ ))
00:16:49.117   17:00:41	-- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs ))
00:16:49.117   17:00:41	-- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4
00:16:49.117   17:00:41	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:16:49.117   17:00:41	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:16:49.117   17:00:41	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid0
00:16:49.117   17:00:41	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:16:49.117   17:00:41	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:16:49.117   17:00:41	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:16:49.117   17:00:41	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:16:49.117   17:00:41	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:16:49.117   17:00:41	-- bdev/bdev_raid.sh@125 -- # local tmp
00:16:49.117    17:00:41	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:16:49.117    17:00:41	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:16:49.376   17:00:42	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:16:49.376    "name": "Existed_Raid",
00:16:49.376    "uuid": "9007b1ea-8037-45bf-befc-65188ffae52d",
00:16:49.376    "strip_size_kb": 64,
00:16:49.376    "state": "online",
00:16:49.376    "raid_level": "raid0",
00:16:49.376    "superblock": true,
00:16:49.376    "num_base_bdevs": 4,
00:16:49.376    "num_base_bdevs_discovered": 4,
00:16:49.376    "num_base_bdevs_operational": 4,
00:16:49.376    "base_bdevs_list": [
00:16:49.376      {
00:16:49.376        "name": "BaseBdev1",
00:16:49.376        "uuid": "1fb349b8-ed44-40b7-9bb2-cd46aa0603ed",
00:16:49.376        "is_configured": true,
00:16:49.376        "data_offset": 2048,
00:16:49.376        "data_size": 63488
00:16:49.376      },
00:16:49.376      {
00:16:49.376        "name": "BaseBdev2",
00:16:49.376        "uuid": "1f0b8083-b9b1-4513-83fb-94cf6f8461ab",
00:16:49.376        "is_configured": true,
00:16:49.376        "data_offset": 2048,
00:16:49.376        "data_size": 63488
00:16:49.376      },
00:16:49.376      {
00:16:49.376        "name": "BaseBdev3",
00:16:49.376        "uuid": "c4c5cab4-7201-4451-ab3b-7ee8f7d44d6d",
00:16:49.376        "is_configured": true,
00:16:49.376        "data_offset": 2048,
00:16:49.376        "data_size": 63488
00:16:49.376      },
00:16:49.376      {
00:16:49.376        "name": "BaseBdev4",
00:16:49.376        "uuid": "e96d481c-1901-44b3-8f0e-cf01e2e2d2af",
00:16:49.376        "is_configured": true,
00:16:49.376        "data_offset": 2048,
00:16:49.376        "data_size": 63488
00:16:49.376      }
00:16:49.376    ]
00:16:49.376  }'
00:16:49.376   17:00:42	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:16:49.376   17:00:42	-- common/autotest_common.sh@10 -- # set +x
00:16:49.943   17:00:42	-- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1
00:16:50.201  [2024-11-19 17:00:42.850958] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:16:50.201  [2024-11-19 17:00:42.851192] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:16:50.201  [2024-11-19 17:00:42.851427] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:16:50.201   17:00:42	-- bdev/bdev_raid.sh@263 -- # local expected_state
00:16:50.201   17:00:42	-- bdev/bdev_raid.sh@264 -- # has_redundancy raid0
00:16:50.201   17:00:42	-- bdev/bdev_raid.sh@195 -- # case $1 in
00:16:50.201   17:00:42	-- bdev/bdev_raid.sh@197 -- # return 1
00:16:50.201   17:00:42	-- bdev/bdev_raid.sh@265 -- # expected_state=offline
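Deleting BaseBdev1 out of a running raid0 has to take the whole array down: has_redundancy (@195-197) returns 1 for raid0, so the expected state becomes "offline" with 3 of 4 members. A sketch of that decision (only the raid0 branch is visible in this trace; the exact list of levels on each side is assumed):

    has_redundancy() {
        case $1 in
            raid0 | concat) return 1 ;;   # no redundancy: losing a member is fatal
            *) return 0 ;;                # assumed: mirrored/parity levels survive it
        esac
    }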
00:16:50.201   17:00:42	-- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3
00:16:50.201   17:00:42	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:16:50.201   17:00:42	-- bdev/bdev_raid.sh@118 -- # local expected_state=offline
00:16:50.201   17:00:42	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid0
00:16:50.201   17:00:42	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:16:50.201   17:00:42	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:16:50.201   17:00:42	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:16:50.201   17:00:42	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:16:50.201   17:00:42	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:16:50.201   17:00:42	-- bdev/bdev_raid.sh@125 -- # local tmp
00:16:50.201    17:00:42	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:16:50.201    17:00:42	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:16:50.460   17:00:43	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:16:50.461    "name": "Existed_Raid",
00:16:50.461    "uuid": "9007b1ea-8037-45bf-befc-65188ffae52d",
00:16:50.461    "strip_size_kb": 64,
00:16:50.461    "state": "offline",
00:16:50.461    "raid_level": "raid0",
00:16:50.461    "superblock": true,
00:16:50.461    "num_base_bdevs": 4,
00:16:50.461    "num_base_bdevs_discovered": 3,
00:16:50.461    "num_base_bdevs_operational": 3,
00:16:50.461    "base_bdevs_list": [
00:16:50.461      {
00:16:50.461        "name": null,
00:16:50.461        "uuid": "00000000-0000-0000-0000-000000000000",
00:16:50.461        "is_configured": false,
00:16:50.461        "data_offset": 2048,
00:16:50.461        "data_size": 63488
00:16:50.461      },
00:16:50.461      {
00:16:50.461        "name": "BaseBdev2",
00:16:50.461        "uuid": "1f0b8083-b9b1-4513-83fb-94cf6f8461ab",
00:16:50.461        "is_configured": true,
00:16:50.461        "data_offset": 2048,
00:16:50.461        "data_size": 63488
00:16:50.461      },
00:16:50.461      {
00:16:50.461        "name": "BaseBdev3",
00:16:50.461        "uuid": "c4c5cab4-7201-4451-ab3b-7ee8f7d44d6d",
00:16:50.461        "is_configured": true,
00:16:50.461        "data_offset": 2048,
00:16:50.461        "data_size": 63488
00:16:50.461      },
00:16:50.461      {
00:16:50.461        "name": "BaseBdev4",
00:16:50.461        "uuid": "e96d481c-1901-44b3-8f0e-cf01e2e2d2af",
00:16:50.461        "is_configured": true,
00:16:50.461        "data_offset": 2048,
00:16:50.461        "data_size": 63488
00:16:50.461      }
00:16:50.461    ]
00:16:50.461  }'
00:16:50.461   17:00:43	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:16:50.461   17:00:43	-- common/autotest_common.sh@10 -- # set +x
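In the offline dump above the removed member survives as a placeholder: name null, an all-zero uuid, is_configured false, but data_offset/data_size intact. One way to recount the live members from that JSON (this jq filter is illustrative, not from the trace):

    rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
      | jq '.[] | select(.name == "Existed_Raid")
            | [.base_bdevs_list[] | select(.is_configured)] | length'   # -> 3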
00:16:51.028   17:00:43	-- bdev/bdev_raid.sh@273 -- # (( i = 1 ))
00:16:51.028   17:00:43	-- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs ))
00:16:51.028    17:00:43	-- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:16:51.028    17:00:43	-- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]'
00:16:51.286   17:00:43	-- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid
00:16:51.287   17:00:43	-- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:16:51.287   17:00:43	-- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2
00:16:51.545  [2024-11-19 17:00:44.210612] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:16:51.545   17:00:44	-- bdev/bdev_raid.sh@273 -- # (( i++ ))
00:16:51.545   17:00:44	-- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs ))
00:16:51.545    17:00:44	-- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:16:51.545    17:00:44	-- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]'
00:16:51.804   17:00:44	-- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid
00:16:51.804   17:00:44	-- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:16:51.804   17:00:44	-- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3
00:16:52.063  [2024-11-19 17:00:44.663683] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:16:52.063   17:00:44	-- bdev/bdev_raid.sh@273 -- # (( i++ ))
00:16:52.063   17:00:44	-- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs ))
00:16:52.063    17:00:44	-- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:16:52.063    17:00:44	-- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]'
00:16:52.323   17:00:44	-- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid
00:16:52.323   17:00:44	-- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:16:52.323   17:00:44	-- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4
00:16:52.582  [2024-11-19 17:00:45.187048] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4
00:16:52.582  [2024-11-19 17:00:45.187537] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state offline
00:16:52.582   17:00:45	-- bdev/bdev_raid.sh@273 -- # (( i++ ))
00:16:52.582   17:00:45	-- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs ))
00:16:52.582    17:00:45	-- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:16:52.582    17:00:45	-- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)'
00:16:52.582   17:00:45	-- bdev/bdev_raid.sh@281 -- # raid_bdev=
00:16:52.582   17:00:45	-- bdev/bdev_raid.sh@282 -- # '[' -n '' ']'
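After the last member is deleted the raid bdev itself is gone, and the @281 check relies on a jq subtlety: '.[0]["name"] | select(.)' emits nothing (rather than the literal string "null") when the list is empty, so raid_bdev ends up as the empty string and the '-n' test at @282 fails as intended:

    rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
      | jq -r '.[0]["name"] | select(.)'    # empty output once no raid bdev remains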
00:16:52.582   17:00:45	-- bdev/bdev_raid.sh@287 -- # killprocess 129190
00:16:52.582   17:00:45	-- common/autotest_common.sh@936 -- # '[' -z 129190 ']'
00:16:52.582   17:00:45	-- common/autotest_common.sh@940 -- # kill -0 129190
00:16:52.582    17:00:45	-- common/autotest_common.sh@941 -- # uname
00:16:52.582   17:00:45	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:16:52.582    17:00:45	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 129190
00:16:52.841   17:00:45	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:16:52.841   17:00:45	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:16:52.841   17:00:45	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 129190'
00:16:52.841  killing process with pid 129190
00:16:52.841   17:00:45	-- common/autotest_common.sh@955 -- # kill 129190
00:16:52.841   17:00:45	-- common/autotest_common.sh@960 -- # wait 129190
00:16:52.841  [2024-11-19 17:00:45.448862] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:16:52.841  [2024-11-19 17:00:45.448972] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:16:53.101   17:00:45	-- bdev/bdev_raid.sh@289 -- # return 0
00:16:53.101  
00:16:53.101  real	0m13.136s
00:16:53.101  user	0m23.347s
00:16:53.101  sys	0m2.391s
00:16:53.101   17:00:45	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:16:53.101   17:00:45	-- common/autotest_common.sh@10 -- # set +x
00:16:53.101  ************************************
00:16:53.101  END TEST raid_state_function_test_sb
00:16:53.101  ************************************
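The real/user/sys triple and the starred banners above come from the run_test wrapper visible on the next line, which times each test and brackets it with START/END markers. Roughly, as a sketch (the in-tree wrapper also records the result for the end-of-run summary):

    run_test() {
        local test_name=$1; shift
        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        time "$@"
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
    }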
00:16:53.101   17:00:45	-- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid0 4
00:16:53.101   17:00:45	-- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']'
00:16:53.101   17:00:45	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:16:53.101   17:00:45	-- common/autotest_common.sh@10 -- # set +x
00:16:53.101  ************************************
00:16:53.101  START TEST raid_superblock_test
00:16:53.101  ************************************
00:16:53.101   17:00:45	-- common/autotest_common.sh@1114 -- # raid_superblock_test raid0 4
00:16:53.101   17:00:45	-- bdev/bdev_raid.sh@338 -- # local raid_level=raid0
00:16:53.101   17:00:45	-- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=4
00:16:53.101   17:00:45	-- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=()
00:16:53.101   17:00:45	-- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc
00:16:53.101   17:00:45	-- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=()
00:16:53.101   17:00:45	-- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt
00:16:53.101   17:00:45	-- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=()
00:16:53.101   17:00:45	-- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid
00:16:53.101   17:00:45	-- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1
00:16:53.101   17:00:45	-- bdev/bdev_raid.sh@344 -- # local strip_size
00:16:53.101   17:00:45	-- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg
00:16:53.101   17:00:45	-- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid
00:16:53.101   17:00:45	-- bdev/bdev_raid.sh@347 -- # local raid_bdev
00:16:53.101   17:00:45	-- bdev/bdev_raid.sh@349 -- # '[' raid0 '!=' raid1 ']'
00:16:53.101   17:00:45	-- bdev/bdev_raid.sh@350 -- # strip_size=64
00:16:53.101   17:00:45	-- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64'
00:16:53.101   17:00:45	-- bdev/bdev_raid.sh@357 -- # raid_pid=129624
00:16:53.101   17:00:45	-- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid
00:16:53.101   17:00:45	-- bdev/bdev_raid.sh@358 -- # waitforlisten 129624 /var/tmp/spdk-raid.sock
00:16:53.101   17:00:45	-- common/autotest_common.sh@829 -- # '[' -z 129624 ']'
00:16:53.101   17:00:45	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock
00:16:53.101   17:00:45	-- common/autotest_common.sh@834 -- # local max_retries=100
00:16:53.101  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...
00:16:53.101   17:00:45	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...'
00:16:53.101   17:00:45	-- common/autotest_common.sh@838 -- # xtrace_disable
00:16:53.101   17:00:45	-- common/autotest_common.sh@10 -- # set +x
00:16:53.101  [2024-11-19 17:00:45.871095] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:16:53.101  [2024-11-19 17:00:45.872278] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid129624 ]
00:16:53.360  [2024-11-19 17:00:46.031887] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:16:53.360  [2024-11-19 17:00:46.085683] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:16:53.360  [2024-11-19 17:00:46.134075] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:16:54.298   17:00:46	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:16:54.298   17:00:46	-- common/autotest_common.sh@862 -- # return 0
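raid_superblock_test runs against a fresh SPDK app: bdev_svc is launched with -L bdev_raid (which enables the *DEBUG* lines throughout this log) on the private RPC socket, and waitforlisten blocks until the new process accepts RPCs. The startup, sketched from the @356-358 xtrace:

    /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc \
        -r /var/tmp/spdk-raid.sock -L bdev_raid &
    raid_pid=$!                                   # 129624 in this run
    waitforlisten "$raid_pid" /var/tmp/spdk-raid.sock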
00:16:54.298   17:00:46	-- bdev/bdev_raid.sh@361 -- # (( i = 1 ))
00:16:54.298   17:00:46	-- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs ))
00:16:54.298   17:00:46	-- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1
00:16:54.298   17:00:46	-- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1
00:16:54.298   17:00:46	-- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001
00:16:54.298   17:00:46	-- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc)
00:16:54.298   17:00:46	-- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt)
00:16:54.298   17:00:46	-- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:16:54.298   17:00:46	-- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1
00:16:54.298  malloc1
00:16:54.298   17:00:47	-- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:16:54.556  [2024-11-19 17:00:47.272052] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:16:54.556  [2024-11-19 17:00:47.272396] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:16:54.556  [2024-11-19 17:00:47.272479] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000005a80
00:16:54.556  [2024-11-19 17:00:47.272621] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:16:54.557  [2024-11-19 17:00:47.275266] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:16:54.557  [2024-11-19 17:00:47.275457] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:16:54.557  pt1
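Each raid member in this test is a malloc disk wrapped in a passthru bdev with a fixed UUID (00000000-...-0001 through -0004), presumably so the members can be addressed by stable identifiers across runs. One iteration of the @361-371 loop, sketched:

    rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1
    rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create \
        -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001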
00:16:54.557   17:00:47	-- bdev/bdev_raid.sh@361 -- # (( i++ ))
00:16:54.557   17:00:47	-- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs ))
00:16:54.557   17:00:47	-- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2
00:16:54.557   17:00:47	-- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2
00:16:54.557   17:00:47	-- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002
00:16:54.557   17:00:47	-- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc)
00:16:54.557   17:00:47	-- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt)
00:16:54.557   17:00:47	-- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:16:54.557   17:00:47	-- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2
00:16:54.815  malloc2
00:16:54.815   17:00:47	-- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:16:55.073  [2024-11-19 17:00:47.709319] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:16:55.073  [2024-11-19 17:00:47.709621] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:16:55.073  [2024-11-19 17:00:47.709692] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680
00:16:55.073  [2024-11-19 17:00:47.709841] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:16:55.073  [2024-11-19 17:00:47.712462] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:16:55.073  [2024-11-19 17:00:47.712647] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:16:55.073  pt2
00:16:55.073   17:00:47	-- bdev/bdev_raid.sh@361 -- # (( i++ ))
00:16:55.073   17:00:47	-- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs ))
00:16:55.074   17:00:47	-- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3
00:16:55.074   17:00:47	-- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3
00:16:55.074   17:00:47	-- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003
00:16:55.074   17:00:47	-- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc)
00:16:55.074   17:00:47	-- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt)
00:16:55.074   17:00:47	-- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:16:55.074   17:00:47	-- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3
00:16:55.074  malloc3
00:16:55.332   17:00:47	-- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:16:55.332  [2024-11-19 17:00:48.113706] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:16:55.332  [2024-11-19 17:00:48.113996] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:16:55.332  [2024-11-19 17:00:48.114077] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
00:16:55.332  [2024-11-19 17:00:48.114292] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:16:55.332  [2024-11-19 17:00:48.117151] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:16:55.332  [2024-11-19 17:00:48.117328] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:16:55.332  pt3
00:16:55.332   17:00:48	-- bdev/bdev_raid.sh@361 -- # (( i++ ))
00:16:55.332   17:00:48	-- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs ))
00:16:55.333   17:00:48	-- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc4
00:16:55.333   17:00:48	-- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt4
00:16:55.333   17:00:48	-- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004
00:16:55.333   17:00:48	-- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc)
00:16:55.333   17:00:48	-- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt)
00:16:55.333   17:00:48	-- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:16:55.333   17:00:48	-- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4
00:16:55.592  malloc4
00:16:55.592   17:00:48	-- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004
00:16:55.851  [2024-11-19 17:00:48.495066] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4
00:16:55.851  [2024-11-19 17:00:48.495383] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:16:55.851  [2024-11-19 17:00:48.495454] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80
00:16:55.851  [2024-11-19 17:00:48.495585] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:16:55.851  [2024-11-19 17:00:48.498071] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:16:55.851  [2024-11-19 17:00:48.498253] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4
00:16:55.851  pt4
00:16:55.851   17:00:48	-- bdev/bdev_raid.sh@361 -- # (( i++ ))
00:16:55.851   17:00:48	-- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs ))
00:16:55.851   17:00:48	-- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s
00:16:55.851  [2024-11-19 17:00:48.687247] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:16:55.851  [2024-11-19 17:00:48.689637] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:16:55.851  [2024-11-19 17:00:48.689821] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:16:55.851  [2024-11-19 17:00:48.689891] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed
00:16:55.851  [2024-11-19 17:00:48.690203] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008480
00:16:55.851  [2024-11-19 17:00:48.690302] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512
00:16:55.851  [2024-11-19 17:00:48.690502] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460
00:16:55.851  [2024-11-19 17:00:48.690941] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008480
00:16:55.851  [2024-11-19 17:00:48.691080] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008480
00:16:55.851  [2024-11-19 17:00:48.691375] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
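Unlike the state-function test, all four pt members exist before bdev_raid_create is issued, so the array configures and comes online in a single step, with the same 253952-block geometry as before. The call, restated with its member list:

    rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create \
        -z 64 -r raid0 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s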
00:16:56.109   17:00:48	-- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4
00:16:56.109   17:00:48	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:16:56.109   17:00:48	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:16:56.109   17:00:48	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid0
00:16:56.109   17:00:48	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:16:56.109   17:00:48	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:16:56.109   17:00:48	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:16:56.109   17:00:48	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:16:56.109   17:00:48	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:16:56.109   17:00:48	-- bdev/bdev_raid.sh@125 -- # local tmp
00:16:56.109    17:00:48	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:16:56.109    17:00:48	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:16:56.109   17:00:48	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:16:56.109    "name": "raid_bdev1",
00:16:56.109    "uuid": "b77c5e0c-285e-4bcd-b3be-9c5d9b7f16e4",
00:16:56.109    "strip_size_kb": 64,
00:16:56.109    "state": "online",
00:16:56.109    "raid_level": "raid0",
00:16:56.109    "superblock": true,
00:16:56.109    "num_base_bdevs": 4,
00:16:56.110    "num_base_bdevs_discovered": 4,
00:16:56.110    "num_base_bdevs_operational": 4,
00:16:56.110    "base_bdevs_list": [
00:16:56.110      {
00:16:56.110        "name": "pt1",
00:16:56.110        "uuid": "817aa349-16a2-5e31-b706-30e230ae80eb",
00:16:56.110        "is_configured": true,
00:16:56.110        "data_offset": 2048,
00:16:56.110        "data_size": 63488
00:16:56.110      },
00:16:56.110      {
00:16:56.110        "name": "pt2",
00:16:56.110        "uuid": "5dbbc9ce-a1fb-503b-8219-4e8d0201052e",
00:16:56.110        "is_configured": true,
00:16:56.110        "data_offset": 2048,
00:16:56.110        "data_size": 63488
00:16:56.110      },
00:16:56.110      {
00:16:56.110        "name": "pt3",
00:16:56.110        "uuid": "f17712f7-f830-5ad7-8947-556f345a81af",
00:16:56.110        "is_configured": true,
00:16:56.110        "data_offset": 2048,
00:16:56.110        "data_size": 63488
00:16:56.110      },
00:16:56.110      {
00:16:56.110        "name": "pt4",
00:16:56.110        "uuid": "7a14caff-d0c4-5746-a88c-71ea21a45e16",
00:16:56.110        "is_configured": true,
00:16:56.110        "data_offset": 2048,
00:16:56.110        "data_size": 63488
00:16:56.110      }
00:16:56.110    ]
00:16:56.110  }'
00:16:56.110   17:00:48	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:16:56.110   17:00:48	-- common/autotest_common.sh@10 -- # set +x
00:16:56.677    17:00:49	-- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1
00:16:56.677    17:00:49	-- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid'
00:16:56.936  [2024-11-19 17:00:49.755794] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:16:56.936   17:00:49	-- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=b77c5e0c-285e-4bcd-b3be-9c5d9b7f16e4
00:16:56.936   17:00:49	-- bdev/bdev_raid.sh@380 -- # '[' -z b77c5e0c-285e-4bcd-b3be-9c5d9b7f16e4 ']'
00:16:56.936   17:00:49	-- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1
00:16:57.195  [2024-11-19 17:00:49.943608] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:16:57.195  [2024-11-19 17:00:49.943858] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:16:57.195  [2024-11-19 17:00:49.944035] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:16:57.195  [2024-11-19 17:00:49.944148] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:16:57.195  [2024-11-19 17:00:49.944376] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008480 name raid_bdev1, state offline
00:16:57.195    17:00:49	-- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:16:57.195    17:00:49	-- bdev/bdev_raid.sh@386 -- # jq -r '.[]'
00:16:57.454   17:00:50	-- bdev/bdev_raid.sh@386 -- # raid_bdev=
00:16:57.454   17:00:50	-- bdev/bdev_raid.sh@387 -- # '[' -n '' ']'
00:16:57.454   17:00:50	-- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}"
00:16:57.454   17:00:50	-- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1
00:16:57.713   17:00:50	-- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}"
00:16:57.713   17:00:50	-- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2
00:16:57.713   17:00:50	-- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}"
00:16:57.713   17:00:50	-- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3
00:16:57.972   17:00:50	-- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}"
00:16:57.972   17:00:50	-- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4
00:16:58.230    17:00:50	-- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs
00:16:58.231    17:00:50	-- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any'
00:16:58.490   17:00:51	-- bdev/bdev_raid.sh@395 -- # '[' false == true ']'
00:16:58.490   17:00:51	-- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1
00:16:58.490   17:00:51	-- common/autotest_common.sh@650 -- # local es=0
00:16:58.490   17:00:51	-- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1
00:16:58.490   17:00:51	-- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:16:58.490   17:00:51	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:16:58.490    17:00:51	-- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:16:58.490   17:00:51	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:16:58.490    17:00:51	-- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:16:58.490   17:00:51	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:16:58.490   17:00:51	-- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:16:58.490   17:00:51	-- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]]
00:16:58.490   17:00:51	-- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1
00:16:58.749  [2024-11-19 17:00:51.475849] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed
00:16:58.749  [2024-11-19 17:00:51.478243] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
00:16:58.749  [2024-11-19 17:00:51.478434] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed
00:16:58.749  [2024-11-19 17:00:51.478498] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed
00:16:58.749  [2024-11-19 17:00:51.478655] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1
00:16:58.749  [2024-11-19 17:00:51.478840] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2
00:16:58.749  [2024-11-19 17:00:51.478925] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3
00:16:58.749  [2024-11-19 17:00:51.479127] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc4
00:16:58.749  [2024-11-19 17:00:51.479253] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:16:58.749  [2024-11-19 17:00:51.479290] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008a80 name raid_bdev1, state configuring
00:16:58.749  request:
00:16:58.749  {
00:16:58.749    "name": "raid_bdev1",
00:16:58.749    "raid_level": "raid0",
00:16:58.749    "base_bdevs": [
00:16:58.749      "malloc1",
00:16:58.749      "malloc2",
00:16:58.749      "malloc3",
00:16:58.749      "malloc4"
00:16:58.749    ],
00:16:58.749    "superblock": false,
00:16:58.749    "strip_size_kb": 64,
00:16:58.749    "method": "bdev_raid_create",
00:16:58.749    "req_id": 1
00:16:58.749  }
00:16:58.749  Got JSON-RPC error response
00:16:58.749  response:
00:16:58.749  {
00:16:58.749    "code": -17,
00:16:58.749    "message": "Failed to create RAID bdev raid_bdev1: File exists"
00:16:58.749  }
00:16:58.749   17:00:51	-- common/autotest_common.sh@653 -- # es=1
00:16:58.749   17:00:51	-- common/autotest_common.sh@661 -- # (( es > 128 ))
00:16:58.749   17:00:51	-- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:16:58.749   17:00:51	-- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:16:58.749    17:00:51	-- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:16:58.749    17:00:51	-- bdev/bdev_raid.sh@403 -- # jq -r '.[]'
00:16:59.008   17:00:51	-- bdev/bdev_raid.sh@403 -- # raid_bdev=
00:16:59.008   17:00:51	-- bdev/bdev_raid.sh@404 -- # '[' -n '' ']'
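The block above is a negative test: the malloc bdevs still carry the raid superblock from the earlier array, so bdev_raid_create must fail with -17 (File exists), and bdev_raid_get_bdevs must come back empty afterwards. The NOT/valid_exec_arg machinery in autotest_common.sh just asserts a nonzero exit; a sketch of the same assertion without the helper:

  # expect failure: stale superblocks on malloc1..4 block re-creation
  if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create \
         -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1; then
      echo "bdev_raid_create unexpectedly succeeded" >&2
      exit 1
  fi
  # and confirm no raid bdev was left behind
  [[ -z $(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all | jq -r '.[]') ]]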
00:16:59.008   17:00:51	-- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:16:59.267  [2024-11-19 17:00:51.944051] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:16:59.267  [2024-11-19 17:00:51.944386] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:16:59.267  [2024-11-19 17:00:51.944458] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080
00:16:59.267  [2024-11-19 17:00:51.944570] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:16:59.267  [2024-11-19 17:00:51.947085] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:16:59.267  [2024-11-19 17:00:51.947293] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:16:59.267  [2024-11-19 17:00:51.947471] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1
00:16:59.267  [2024-11-19 17:00:51.947612] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:16:59.267  pt1
00:16:59.267   17:00:51	-- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4
00:16:59.267   17:00:51	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:16:59.267   17:00:51	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:16:59.267   17:00:51	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid0
00:16:59.267   17:00:51	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:16:59.267   17:00:51	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:16:59.267   17:00:51	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:16:59.267   17:00:51	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:16:59.267   17:00:51	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:16:59.267   17:00:51	-- bdev/bdev_raid.sh@125 -- # local tmp
00:16:59.267    17:00:51	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:16:59.267    17:00:51	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:16:59.526   17:00:52	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:16:59.526    "name": "raid_bdev1",
00:16:59.526    "uuid": "b77c5e0c-285e-4bcd-b3be-9c5d9b7f16e4",
00:16:59.526    "strip_size_kb": 64,
00:16:59.526    "state": "configuring",
00:16:59.526    "raid_level": "raid0",
00:16:59.526    "superblock": true,
00:16:59.526    "num_base_bdevs": 4,
00:16:59.526    "num_base_bdevs_discovered": 1,
00:16:59.526    "num_base_bdevs_operational": 4,
00:16:59.526    "base_bdevs_list": [
00:16:59.526      {
00:16:59.526        "name": "pt1",
00:16:59.526        "uuid": "817aa349-16a2-5e31-b706-30e230ae80eb",
00:16:59.526        "is_configured": true,
00:16:59.526        "data_offset": 2048,
00:16:59.526        "data_size": 63488
00:16:59.526      },
00:16:59.526      {
00:16:59.526        "name": null,
00:16:59.526        "uuid": "5dbbc9ce-a1fb-503b-8219-4e8d0201052e",
00:16:59.526        "is_configured": false,
00:16:59.526        "data_offset": 2048,
00:16:59.526        "data_size": 63488
00:16:59.526      },
00:16:59.526      {
00:16:59.526        "name": null,
00:16:59.526        "uuid": "f17712f7-f830-5ad7-8947-556f345a81af",
00:16:59.526        "is_configured": false,
00:16:59.526        "data_offset": 2048,
00:16:59.526        "data_size": 63488
00:16:59.526      },
00:16:59.526      {
00:16:59.526        "name": null,
00:16:59.526        "uuid": "7a14caff-d0c4-5746-a88c-71ea21a45e16",
00:16:59.526        "is_configured": false,
00:16:59.526        "data_offset": 2048,
00:16:59.526        "data_size": 63488
00:16:59.526      }
00:16:59.526    ]
00:16:59.526  }'
00:16:59.526   17:00:52	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:16:59.526   17:00:52	-- common/autotest_common.sh@10 -- # set +x
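verify_raid_bdev_state (bdev_raid.sh@117-129, per the trace above) fetches the raid JSON with bdev_raid_get_bdevs all, filters it by name with jq, and compares the expected state, level, strip size, and member counts against the fields shown in the dump. A condensed sketch of the core comparison, assuming the field names visible in the JSON above:

  # assert raid_bdev1 is "configuring" raid0, strip 64, 1 of 4 members found
  info=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all |
      jq -r '.[] | select(.name == "raid_bdev1")')
  [[ $(jq -r '.state' <<<"$info") == configuring ]]
  [[ $(jq -r '.raid_level' <<<"$info") == raid0 ]]
  [[ $(jq -r '.strip_size_kb' <<<"$info") == 64 ]]
  [[ $(jq -r '.num_base_bdevs_discovered' <<<"$info") == 1 ]]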
00:17:00.093   17:00:52	-- bdev/bdev_raid.sh@414 -- # '[' 4 -gt 2 ']'
00:17:00.093   17:00:52	-- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:17:00.352  [2024-11-19 17:00:52.976315] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:17:00.352  [2024-11-19 17:00:52.976655] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:17:00.352  [2024-11-19 17:00:52.976737] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980
00:17:00.352  [2024-11-19 17:00:52.976848] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:17:00.352  [2024-11-19 17:00:52.977328] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:17:00.352  [2024-11-19 17:00:52.977503] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:17:00.352  [2024-11-19 17:00:52.977706] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2
00:17:00.352  [2024-11-19 17:00:52.977815] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:17:00.352  pt2
00:17:00.352   17:00:52	-- bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2
00:17:00.610  [2024-11-19 17:00:53.228344] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2
00:17:00.610   17:00:53	-- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4
00:17:00.610   17:00:53	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:17:00.610   17:00:53	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:17:00.610   17:00:53	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid0
00:17:00.610   17:00:53	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:17:00.610   17:00:53	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:17:00.610   17:00:53	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:17:00.610   17:00:53	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:17:00.610   17:00:53	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:17:00.610   17:00:53	-- bdev/bdev_raid.sh@125 -- # local tmp
00:17:00.610    17:00:53	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:17:00.610    17:00:53	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:17:00.610   17:00:53	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:17:00.610    "name": "raid_bdev1",
00:17:00.610    "uuid": "b77c5e0c-285e-4bcd-b3be-9c5d9b7f16e4",
00:17:00.610    "strip_size_kb": 64,
00:17:00.610    "state": "configuring",
00:17:00.610    "raid_level": "raid0",
00:17:00.610    "superblock": true,
00:17:00.610    "num_base_bdevs": 4,
00:17:00.610    "num_base_bdevs_discovered": 1,
00:17:00.610    "num_base_bdevs_operational": 4,
00:17:00.610    "base_bdevs_list": [
00:17:00.610      {
00:17:00.610        "name": "pt1",
00:17:00.610        "uuid": "817aa349-16a2-5e31-b706-30e230ae80eb",
00:17:00.610        "is_configured": true,
00:17:00.610        "data_offset": 2048,
00:17:00.610        "data_size": 63488
00:17:00.610      },
00:17:00.610      {
00:17:00.610        "name": null,
00:17:00.610        "uuid": "5dbbc9ce-a1fb-503b-8219-4e8d0201052e",
00:17:00.610        "is_configured": false,
00:17:00.610        "data_offset": 2048,
00:17:00.610        "data_size": 63488
00:17:00.610      },
00:17:00.610      {
00:17:00.610        "name": null,
00:17:00.610        "uuid": "f17712f7-f830-5ad7-8947-556f345a81af",
00:17:00.610        "is_configured": false,
00:17:00.610        "data_offset": 2048,
00:17:00.610        "data_size": 63488
00:17:00.610      },
00:17:00.610      {
00:17:00.610        "name": null,
00:17:00.610        "uuid": "7a14caff-d0c4-5746-a88c-71ea21a45e16",
00:17:00.610        "is_configured": false,
00:17:00.610        "data_offset": 2048,
00:17:00.610        "data_size": 63488
00:17:00.610      }
00:17:00.610    ]
00:17:00.610  }'
00:17:00.610   17:00:53	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:17:00.610   17:00:53	-- common/autotest_common.sh@10 -- # set +x
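The pt2 create/delete round-trip at @416-417 exercises base-bdev removal while the array is still configuring: _raid_bdev_remove_base_bdev fires on the delete, and the JSON above shows num_base_bdevs_discovered back at 1 (only pt1 counts). That count can be read directly:

  # after the pt2 create/delete round-trip, only pt1 is still discovered
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all |
      jq -r '.[] | select(.name == "raid_bdev1") | .num_base_bdevs_discovered'   # prints 1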
00:17:01.546   17:00:54	-- bdev/bdev_raid.sh@422 -- # (( i = 1 ))
00:17:01.546   17:00:54	-- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs ))
00:17:01.546   17:00:54	-- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:17:01.546  [2024-11-19 17:00:54.288561] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:17:01.546  [2024-11-19 17:00:54.288916] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:17:01.547  [2024-11-19 17:00:54.289002] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80
00:17:01.547  [2024-11-19 17:00:54.289111] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:17:01.547  [2024-11-19 17:00:54.289612] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:17:01.547  [2024-11-19 17:00:54.289782] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:17:01.547  [2024-11-19 17:00:54.289964] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2
00:17:01.547  [2024-11-19 17:00:54.290093] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:17:01.547  pt2
00:17:01.547   17:00:54	-- bdev/bdev_raid.sh@422 -- # (( i++ ))
00:17:01.547   17:00:54	-- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs ))
00:17:01.547   17:00:54	-- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:17:01.806  [2024-11-19 17:00:54.536639] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:17:01.806  [2024-11-19 17:00:54.536979] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:17:01.806  [2024-11-19 17:00:54.537048] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80
00:17:01.806  [2024-11-19 17:00:54.537186] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:17:01.806  [2024-11-19 17:00:54.537647] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:17:01.806  [2024-11-19 17:00:54.537808] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:17:01.806  [2024-11-19 17:00:54.537967] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3
00:17:01.806  [2024-11-19 17:00:54.538065] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:17:01.806  pt3
00:17:01.806   17:00:54	-- bdev/bdev_raid.sh@422 -- # (( i++ ))
00:17:01.806   17:00:54	-- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs ))
00:17:01.806   17:00:54	-- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004
00:17:02.065  [2024-11-19 17:00:54.796669] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4
00:17:02.065  [2024-11-19 17:00:54.796975] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:17:02.065  [2024-11-19 17:00:54.797047] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280
00:17:02.065  [2024-11-19 17:00:54.797154] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:17:02.065  [2024-11-19 17:00:54.797657] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:17:02.065  [2024-11-19 17:00:54.797840] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4
00:17:02.065  [2024-11-19 17:00:54.798022] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4
00:17:02.065  [2024-11-19 17:00:54.798125] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed
00:17:02.065  [2024-11-19 17:00:54.798381] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009680
00:17:02.065  [2024-11-19 17:00:54.798487] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512
00:17:02.065  [2024-11-19 17:00:54.798604] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002940
00:17:02.065  [2024-11-19 17:00:54.799041] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009680
00:17:02.065  [2024-11-19 17:00:54.799087] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009680
00:17:02.065  [2024-11-19 17:00:54.799303] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:17:02.065  pt4
00:17:02.065   17:00:54	-- bdev/bdev_raid.sh@422 -- # (( i++ ))
00:17:02.065   17:00:54	-- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs ))
00:17:02.066   17:00:54	-- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4
00:17:02.066   17:00:54	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:17:02.066   17:00:54	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:17:02.066   17:00:54	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid0
00:17:02.066   17:00:54	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:17:02.066   17:00:54	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:17:02.066   17:00:54	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:17:02.066   17:00:54	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:17:02.066   17:00:54	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:17:02.066   17:00:54	-- bdev/bdev_raid.sh@125 -- # local tmp
00:17:02.066    17:00:54	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:17:02.066    17:00:54	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:17:02.324   17:00:55	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:17:02.324    "name": "raid_bdev1",
00:17:02.324    "uuid": "b77c5e0c-285e-4bcd-b3be-9c5d9b7f16e4",
00:17:02.324    "strip_size_kb": 64,
00:17:02.324    "state": "online",
00:17:02.324    "raid_level": "raid0",
00:17:02.324    "superblock": true,
00:17:02.324    "num_base_bdevs": 4,
00:17:02.324    "num_base_bdevs_discovered": 4,
00:17:02.324    "num_base_bdevs_operational": 4,
00:17:02.324    "base_bdevs_list": [
00:17:02.324      {
00:17:02.324        "name": "pt1",
00:17:02.324        "uuid": "817aa349-16a2-5e31-b706-30e230ae80eb",
00:17:02.324        "is_configured": true,
00:17:02.324        "data_offset": 2048,
00:17:02.324        "data_size": 63488
00:17:02.324      },
00:17:02.324      {
00:17:02.324        "name": "pt2",
00:17:02.324        "uuid": "5dbbc9ce-a1fb-503b-8219-4e8d0201052e",
00:17:02.324        "is_configured": true,
00:17:02.324        "data_offset": 2048,
00:17:02.324        "data_size": 63488
00:17:02.324      },
00:17:02.324      {
00:17:02.324        "name": "pt3",
00:17:02.324        "uuid": "f17712f7-f830-5ad7-8947-556f345a81af",
00:17:02.324        "is_configured": true,
00:17:02.324        "data_offset": 2048,
00:17:02.325        "data_size": 63488
00:17:02.325      },
00:17:02.325      {
00:17:02.325        "name": "pt4",
00:17:02.325        "uuid": "7a14caff-d0c4-5746-a88c-71ea21a45e16",
00:17:02.325        "is_configured": true,
00:17:02.325        "data_offset": 2048,
00:17:02.325        "data_size": 63488
00:17:02.325      }
00:17:02.325    ]
00:17:02.325  }'
00:17:02.325   17:00:55	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:17:02.325   17:00:55	-- common/autotest_common.sh@10 -- # set +x
00:17:02.893    17:00:55	-- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid'
00:17:02.893    17:00:55	-- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1
00:17:03.152  [2024-11-19 17:00:55.845046] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:17:03.152   17:00:55	-- bdev/bdev_raid.sh@430 -- # '[' b77c5e0c-285e-4bcd-b3be-9c5d9b7f16e4 '!=' b77c5e0c-285e-4bcd-b3be-9c5d9b7f16e4 ']'
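The @430 check verifies UUID stability: the UUID reported by the generic bdev layer (bdev_get_bdevs) must match the UUID recorded in the raid JSON earlier, i.e. the array kept its identity across the passthru teardown and rebuild. An equivalent sketch:

  # uuid as seen by the bdev layer vs. the uuid in the raid JSON above
  uuid=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 |
      jq -r '.[] | .uuid')
  [[ $uuid == b77c5e0c-285e-4bcd-b3be-9c5d9b7f16e4 ]]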
00:17:03.152   17:00:55	-- bdev/bdev_raid.sh@434 -- # has_redundancy raid0
00:17:03.152   17:00:55	-- bdev/bdev_raid.sh@195 -- # case $1 in
00:17:03.152   17:00:55	-- bdev/bdev_raid.sh@197 -- # return 1
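has_redundancy (@195-197) classifies the raid level; for raid0 it returns 1 (no redundancy), which selects the non-redundant code path below. The trace only shows the case/return lines, so this reconstruction of the helper's body is an assumption:

  # assumed shape of has_redundancy, inferred from the case/return trace
  has_redundancy() {
      case $1 in
          raid1) return 0 ;;   # mirrored levels tolerate a missing member
          *) return 1 ;;       # raid0/concat do not
      esac
  }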
00:17:03.152   17:00:55	-- bdev/bdev_raid.sh@511 -- # killprocess 129624
00:17:03.152   17:00:55	-- common/autotest_common.sh@936 -- # '[' -z 129624 ']'
00:17:03.152   17:00:55	-- common/autotest_common.sh@940 -- # kill -0 129624
00:17:03.152    17:00:55	-- common/autotest_common.sh@941 -- # uname
00:17:03.152   17:00:55	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:17:03.152    17:00:55	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 129624
00:17:03.152   17:00:55	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:17:03.152   17:00:55	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:17:03.152   17:00:55	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 129624'
00:17:03.152  killing process with pid 129624
00:17:03.152   17:00:55	-- common/autotest_common.sh@955 -- # kill 129624
00:17:03.152  [2024-11-19 17:00:55.896852] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:17:03.152   17:00:55	-- common/autotest_common.sh@960 -- # wait 129624
00:17:03.152  [2024-11-19 17:00:55.896999] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:17:03.152  [2024-11-19 17:00:55.897069] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:17:03.152  [2024-11-19 17:00:55.897078] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009680 name raid_bdev1, state offline
00:17:03.152  [2024-11-19 17:00:55.945186] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:17:03.411  ************************************
00:17:03.411  END TEST raid_superblock_test
00:17:03.411  ************************************
00:17:03.411   17:00:56	-- bdev/bdev_raid.sh@513 -- # return 0
00:17:03.411  
00:17:03.411  real	0m10.393s
00:17:03.411  user	0m18.376s
00:17:03.411  sys	0m1.806s
00:17:03.411   17:00:56	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:17:03.411   17:00:56	-- common/autotest_common.sh@10 -- # set +x
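Teardown at @511 goes through killprocess: kill -0 confirms pid 129624 is still alive, ps confirms the process is the SPDK reactor (and not a sudo wrapper), then kill plus wait reap it — the raid_bdev_fini_start/destruct debug lines above are the daemon shutting down. A simplified sketch of that pattern (the real helper in common/autotest_common.sh handles sudo and more edge cases):

  # sketch of the killprocess pattern from the trace
  killprocess() {
      local pid=$1
      kill -0 "$pid"                                    # is it still running?
      [[ $(ps --no-headers -o comm= "$pid") != sudo ]]  # refuse to kill sudo itself
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid"                                       # reap the daemon
  }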
00:17:03.411   17:00:56	-- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1
00:17:03.411   17:00:56	-- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test concat 4 false
00:17:03.411   17:00:56	-- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']'
00:17:03.411   17:00:56	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:17:03.411   17:00:56	-- common/autotest_common.sh@10 -- # set +x
00:17:03.411  ************************************
00:17:03.411  START TEST raid_state_function_test
00:17:03.411  ************************************
00:17:03.411   17:00:56	-- common/autotest_common.sh@1114 -- # raid_state_function_test concat 4 false
00:17:03.411   17:00:56	-- bdev/bdev_raid.sh@202 -- # local raid_level=concat
00:17:03.411   17:00:56	-- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4
00:17:03.411   17:00:56	-- bdev/bdev_raid.sh@204 -- # local superblock=false
00:17:03.411   17:00:56	-- bdev/bdev_raid.sh@205 -- # local raid_bdev
00:17:03.411    17:00:56	-- bdev/bdev_raid.sh@206 -- # (( i = 1 ))
00:17:03.411    17:00:56	-- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:17:03.411    17:00:56	-- bdev/bdev_raid.sh@206 -- # echo BaseBdev1
00:17:03.411    17:00:56	-- bdev/bdev_raid.sh@206 -- # (( i++ ))
00:17:03.411    17:00:56	-- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:17:03.411    17:00:56	-- bdev/bdev_raid.sh@206 -- # echo BaseBdev2
00:17:03.411    17:00:56	-- bdev/bdev_raid.sh@206 -- # (( i++ ))
00:17:03.411    17:00:56	-- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:17:03.411    17:00:56	-- bdev/bdev_raid.sh@206 -- # echo BaseBdev3
00:17:03.411    17:00:56	-- bdev/bdev_raid.sh@206 -- # (( i++ ))
00:17:03.411    17:00:56	-- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:17:03.411    17:00:56	-- bdev/bdev_raid.sh@206 -- # echo BaseBdev4
00:17:03.411    17:00:56	-- bdev/bdev_raid.sh@206 -- # (( i++ ))
00:17:03.411    17:00:56	-- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:17:03.411   17:00:56	-- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4')
00:17:03.411   17:00:56	-- bdev/bdev_raid.sh@206 -- # local base_bdevs
00:17:03.411   17:00:56	-- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid
00:17:03.411   17:00:56	-- bdev/bdev_raid.sh@208 -- # local strip_size
00:17:03.411   17:00:56	-- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg
00:17:03.411   17:00:56	-- bdev/bdev_raid.sh@210 -- # local superblock_create_arg
00:17:03.411   17:00:56	-- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']'
00:17:03.411   17:00:56	-- bdev/bdev_raid.sh@213 -- # strip_size=64
00:17:03.411   17:00:56	-- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64'
00:17:03.411   17:00:56	-- bdev/bdev_raid.sh@219 -- # '[' false = true ']'
00:17:03.411   17:00:56	-- bdev/bdev_raid.sh@222 -- # superblock_create_arg=
00:17:03.411   17:00:56	-- bdev/bdev_raid.sh@226 -- # raid_pid=129941
00:17:03.670   17:00:56	-- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 129941'
00:17:03.670  Process raid pid: 129941
00:17:03.670   17:00:56	-- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid
00:17:03.670   17:00:56	-- bdev/bdev_raid.sh@228 -- # waitforlisten 129941 /var/tmp/spdk-raid.sock
00:17:03.670   17:00:56	-- common/autotest_common.sh@829 -- # '[' -z 129941 ']'
00:17:03.670   17:00:56	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock
00:17:03.670   17:00:56	-- common/autotest_common.sh@834 -- # local max_retries=100
00:17:03.670   17:00:56	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...'
00:17:03.670  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...
00:17:03.670   17:00:56	-- common/autotest_common.sh@838 -- # xtrace_disable
00:17:03.670   17:00:56	-- common/autotest_common.sh@10 -- # set +x
00:17:03.670  [2024-11-19 17:00:56.327206] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:17:03.670  [2024-11-19 17:00:56.327454] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:17:03.670  [2024-11-19 17:00:56.482907] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:17:03.929  [2024-11-19 17:00:56.539665] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:17:03.929  [2024-11-19 17:00:56.587324] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:17:04.568   17:00:57	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:17:04.568   17:00:57	-- common/autotest_common.sh@862 -- # return 0
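raid_state_function_test starts its own bdev_svc stub daemon (@225-228) on a private RPC socket, with -L bdev_raid enabling the raid debug lines seen throughout, and blocks in waitforlisten until the socket answers; "Reactor started" above is the daemon coming up and return 0 is the wait completing. A minimal sketch of the launch-and-wait flow (the polling body is an assumption; the real waitforlisten lives in common/autotest_common.sh):

  # launch the stub app with raid debug logs, then wait for its RPC socket
  /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &
  raid_pid=$!
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock rpc_get_methods &>/dev/null; do
      sleep 0.1   # keep polling until the daemon is listening
  done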
00:17:04.568   17:00:57	-- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
00:17:04.841  [2024-11-19 17:00:57.508209] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:17:04.841  [2024-11-19 17:00:57.508295] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:17:04.841  [2024-11-19 17:00:57.508307] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:17:04.841  [2024-11-19 17:00:57.508327] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:17:04.841  [2024-11-19 17:00:57.508334] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:17:04.841  [2024-11-19 17:00:57.508379] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:17:04.841  [2024-11-19 17:00:57.508387] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:17:04.841  [2024-11-19 17:00:57.508414] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
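Note the create-before-members pattern here: bdev_raid_create accepts names of bdevs that "doesn't exist now" (the RPC debug above) and parks the array in "configuring" until they appear; the verify block that follows confirms 0 of 4 members discovered. For example:

  # the raid can be declared before any BaseBdev exists; it waits in "configuring"
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create \
      -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid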
00:17:04.841   17:00:57	-- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4
00:17:04.841   17:00:57	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:17:04.841   17:00:57	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:17:04.841   17:00:57	-- bdev/bdev_raid.sh@119 -- # local raid_level=concat
00:17:04.841   17:00:57	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:17:04.841   17:00:57	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:17:04.841   17:00:57	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:17:04.841   17:00:57	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:17:04.841   17:00:57	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:17:04.841   17:00:57	-- bdev/bdev_raid.sh@125 -- # local tmp
00:17:04.841    17:00:57	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:17:04.841    17:00:57	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:17:05.101   17:00:57	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:17:05.101    "name": "Existed_Raid",
00:17:05.101    "uuid": "00000000-0000-0000-0000-000000000000",
00:17:05.101    "strip_size_kb": 64,
00:17:05.101    "state": "configuring",
00:17:05.101    "raid_level": "concat",
00:17:05.101    "superblock": false,
00:17:05.101    "num_base_bdevs": 4,
00:17:05.101    "num_base_bdevs_discovered": 0,
00:17:05.101    "num_base_bdevs_operational": 4,
00:17:05.101    "base_bdevs_list": [
00:17:05.101      {
00:17:05.101        "name": "BaseBdev1",
00:17:05.101        "uuid": "00000000-0000-0000-0000-000000000000",
00:17:05.101        "is_configured": false,
00:17:05.101        "data_offset": 0,
00:17:05.101        "data_size": 0
00:17:05.101      },
00:17:05.101      {
00:17:05.101        "name": "BaseBdev2",
00:17:05.101        "uuid": "00000000-0000-0000-0000-000000000000",
00:17:05.101        "is_configured": false,
00:17:05.101        "data_offset": 0,
00:17:05.101        "data_size": 0
00:17:05.101      },
00:17:05.101      {
00:17:05.101        "name": "BaseBdev3",
00:17:05.101        "uuid": "00000000-0000-0000-0000-000000000000",
00:17:05.101        "is_configured": false,
00:17:05.101        "data_offset": 0,
00:17:05.101        "data_size": 0
00:17:05.101      },
00:17:05.101      {
00:17:05.101        "name": "BaseBdev4",
00:17:05.101        "uuid": "00000000-0000-0000-0000-000000000000",
00:17:05.101        "is_configured": false,
00:17:05.101        "data_offset": 0,
00:17:05.101        "data_size": 0
00:17:05.101      }
00:17:05.101    ]
00:17:05.101  }'
00:17:05.101   17:00:57	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:17:05.101   17:00:57	-- common/autotest_common.sh@10 -- # set +x
00:17:05.669   17:00:58	-- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid
00:17:05.928  [2024-11-19 17:00:58.556214] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:17:05.928  [2024-11-19 17:00:58.556264] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005480 name Existed_Raid, state configuring
00:17:05.928   17:00:58	-- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
00:17:06.187  [2024-11-19 17:00:58.808365] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:17:06.187  [2024-11-19 17:00:58.808445] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:17:06.187  [2024-11-19 17:00:58.808455] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:17:06.187  [2024-11-19 17:00:58.808480] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:17:06.187  [2024-11-19 17:00:58.808487] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:17:06.187  [2024-11-19 17:00:58.808504] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:17:06.187  [2024-11-19 17:00:58.808511] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:17:06.187  [2024-11-19 17:00:58.808537] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:17:06.187   17:00:58	-- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1
00:17:06.446  [2024-11-19 17:00:59.073766] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:17:06.446  BaseBdev1
00:17:06.446   17:00:59	-- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1
00:17:06.446   17:00:59	-- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1
00:17:06.446   17:00:59	-- common/autotest_common.sh@898 -- # local bdev_timeout=
00:17:06.446   17:00:59	-- common/autotest_common.sh@899 -- # local i
00:17:06.446   17:00:59	-- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:17:06.446   17:00:59	-- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:17:06.446   17:00:59	-- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:17:06.446   17:00:59	-- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000
00:17:06.705  [
00:17:06.705    {
00:17:06.705      "name": "BaseBdev1",
00:17:06.705      "aliases": [
00:17:06.705        "4a8deb02-2f36-45d0-86dd-3f3cb5ebb289"
00:17:06.705      ],
00:17:06.705      "product_name": "Malloc disk",
00:17:06.705      "block_size": 512,
00:17:06.705      "num_blocks": 65536,
00:17:06.705      "uuid": "4a8deb02-2f36-45d0-86dd-3f3cb5ebb289",
00:17:06.705      "assigned_rate_limits": {
00:17:06.705        "rw_ios_per_sec": 0,
00:17:06.705        "rw_mbytes_per_sec": 0,
00:17:06.705        "r_mbytes_per_sec": 0,
00:17:06.705        "w_mbytes_per_sec": 0
00:17:06.705      },
00:17:06.705      "claimed": true,
00:17:06.705      "claim_type": "exclusive_write",
00:17:06.705      "zoned": false,
00:17:06.705      "supported_io_types": {
00:17:06.705        "read": true,
00:17:06.705        "write": true,
00:17:06.705        "unmap": true,
00:17:06.705        "write_zeroes": true,
00:17:06.705        "flush": true,
00:17:06.705        "reset": true,
00:17:06.705        "compare": false,
00:17:06.705        "compare_and_write": false,
00:17:06.705        "abort": true,
00:17:06.705        "nvme_admin": false,
00:17:06.705        "nvme_io": false
00:17:06.705      },
00:17:06.705      "memory_domains": [
00:17:06.705        {
00:17:06.705          "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:17:06.705          "dma_device_type": 2
00:17:06.705        }
00:17:06.705      ],
00:17:06.705      "driver_specific": {}
00:17:06.705    }
00:17:06.705  ]
00:17:06.705   17:00:59	-- common/autotest_common.sh@905 -- # return 0
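waitforbdev (@240, expanded at common/autotest_common.sh@897-905 above) gives examine callbacks time to run and then polls for the named bdev with a timeout. It boils down to the two RPCs from the trace, with the 2000 ms timeout shown:

  # let examine finish, then block (up to 2000 ms) until BaseBdev1 is visible
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000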
00:17:06.705   17:00:59	-- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4
00:17:06.705   17:00:59	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:17:06.705   17:00:59	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:17:06.705   17:00:59	-- bdev/bdev_raid.sh@119 -- # local raid_level=concat
00:17:06.705   17:00:59	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:17:06.705   17:00:59	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:17:06.705   17:00:59	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:17:06.705   17:00:59	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:17:06.705   17:00:59	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:17:06.705   17:00:59	-- bdev/bdev_raid.sh@125 -- # local tmp
00:17:06.705    17:00:59	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:17:06.705    17:00:59	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:17:06.964   17:00:59	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:17:06.964    "name": "Existed_Raid",
00:17:06.964    "uuid": "00000000-0000-0000-0000-000000000000",
00:17:06.964    "strip_size_kb": 64,
00:17:06.964    "state": "configuring",
00:17:06.964    "raid_level": "concat",
00:17:06.964    "superblock": false,
00:17:06.964    "num_base_bdevs": 4,
00:17:06.964    "num_base_bdevs_discovered": 1,
00:17:06.964    "num_base_bdevs_operational": 4,
00:17:06.964    "base_bdevs_list": [
00:17:06.964      {
00:17:06.964        "name": "BaseBdev1",
00:17:06.964        "uuid": "4a8deb02-2f36-45d0-86dd-3f3cb5ebb289",
00:17:06.964        "is_configured": true,
00:17:06.964        "data_offset": 0,
00:17:06.964        "data_size": 65536
00:17:06.964      },
00:17:06.964      {
00:17:06.964        "name": "BaseBdev2",
00:17:06.964        "uuid": "00000000-0000-0000-0000-000000000000",
00:17:06.964        "is_configured": false,
00:17:06.964        "data_offset": 0,
00:17:06.964        "data_size": 0
00:17:06.964      },
00:17:06.964      {
00:17:06.964        "name": "BaseBdev3",
00:17:06.964        "uuid": "00000000-0000-0000-0000-000000000000",
00:17:06.964        "is_configured": false,
00:17:06.964        "data_offset": 0,
00:17:06.964        "data_size": 0
00:17:06.964      },
00:17:06.964      {
00:17:06.964        "name": "BaseBdev4",
00:17:06.964        "uuid": "00000000-0000-0000-0000-000000000000",
00:17:06.964        "is_configured": false,
00:17:06.964        "data_offset": 0,
00:17:06.964        "data_size": 0
00:17:06.964      }
00:17:06.964    ]
00:17:06.964  }'
00:17:06.964   17:00:59	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:17:06.964   17:00:59	-- common/autotest_common.sh@10 -- # set +x
00:17:07.532   17:01:00	-- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid
00:17:07.791  [2024-11-19 17:01:00.498057] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:17:07.791  [2024-11-19 17:01:00.498134] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005780 name Existed_Raid, state configuring
00:17:07.791   17:01:00	-- bdev/bdev_raid.sh@244 -- # '[' false = true ']'
00:17:07.791   17:01:00	-- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
00:17:08.050  [2024-11-19 17:01:00.678207] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:17:08.050  [2024-11-19 17:01:00.680494] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:17:08.050  [2024-11-19 17:01:00.680578] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:17:08.050  [2024-11-19 17:01:00.680588] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:17:08.050  [2024-11-19 17:01:00.680614] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:17:08.050  [2024-11-19 17:01:00.680621] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:17:08.050  [2024-11-19 17:01:00.680639] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:17:08.050   17:01:00	-- bdev/bdev_raid.sh@254 -- # (( i = 1 ))
00:17:08.050   17:01:00	-- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs ))
00:17:08.050   17:01:00	-- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4
00:17:08.050   17:01:00	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:17:08.050   17:01:00	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:17:08.050   17:01:00	-- bdev/bdev_raid.sh@119 -- # local raid_level=concat
00:17:08.050   17:01:00	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:17:08.050   17:01:00	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:17:08.050   17:01:00	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:17:08.050   17:01:00	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:17:08.050   17:01:00	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:17:08.050   17:01:00	-- bdev/bdev_raid.sh@125 -- # local tmp
00:17:08.050    17:01:00	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:17:08.050    17:01:00	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:17:08.309   17:01:00	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:17:08.309    "name": "Existed_Raid",
00:17:08.309    "uuid": "00000000-0000-0000-0000-000000000000",
00:17:08.309    "strip_size_kb": 64,
00:17:08.309    "state": "configuring",
00:17:08.309    "raid_level": "concat",
00:17:08.309    "superblock": false,
00:17:08.309    "num_base_bdevs": 4,
00:17:08.309    "num_base_bdevs_discovered": 1,
00:17:08.309    "num_base_bdevs_operational": 4,
00:17:08.309    "base_bdevs_list": [
00:17:08.309      {
00:17:08.309        "name": "BaseBdev1",
00:17:08.309        "uuid": "4a8deb02-2f36-45d0-86dd-3f3cb5ebb289",
00:17:08.309        "is_configured": true,
00:17:08.309        "data_offset": 0,
00:17:08.309        "data_size": 65536
00:17:08.309      },
00:17:08.309      {
00:17:08.309        "name": "BaseBdev2",
00:17:08.309        "uuid": "00000000-0000-0000-0000-000000000000",
00:17:08.309        "is_configured": false,
00:17:08.309        "data_offset": 0,
00:17:08.309        "data_size": 0
00:17:08.309      },
00:17:08.309      {
00:17:08.309        "name": "BaseBdev3",
00:17:08.309        "uuid": "00000000-0000-0000-0000-000000000000",
00:17:08.309        "is_configured": false,
00:17:08.309        "data_offset": 0,
00:17:08.309        "data_size": 0
00:17:08.309      },
00:17:08.309      {
00:17:08.309        "name": "BaseBdev4",
00:17:08.309        "uuid": "00000000-0000-0000-0000-000000000000",
00:17:08.309        "is_configured": false,
00:17:08.309        "data_offset": 0,
00:17:08.309        "data_size": 0
00:17:08.309      }
00:17:08.309    ]
00:17:08.309  }'
00:17:08.309   17:01:00	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:17:08.309   17:01:00	-- common/autotest_common.sh@10 -- # set +x
00:17:08.876   17:01:01	-- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2
00:17:09.135  [2024-11-19 17:01:01.762459] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:17:09.135  BaseBdev2
00:17:09.135   17:01:01	-- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2
00:17:09.135   17:01:01	-- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2
00:17:09.135   17:01:01	-- common/autotest_common.sh@898 -- # local bdev_timeout=
00:17:09.135   17:01:01	-- common/autotest_common.sh@899 -- # local i
00:17:09.135   17:01:01	-- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:17:09.135   17:01:01	-- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:17:09.135   17:01:01	-- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:17:09.394   17:01:02	-- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000
00:17:09.653  [
00:17:09.653    {
00:17:09.653      "name": "BaseBdev2",
00:17:09.653      "aliases": [
00:17:09.653        "fe960dc0-ec9d-45a9-a7ec-314325b0e0a2"
00:17:09.653      ],
00:17:09.653      "product_name": "Malloc disk",
00:17:09.653      "block_size": 512,
00:17:09.653      "num_blocks": 65536,
00:17:09.653      "uuid": "fe960dc0-ec9d-45a9-a7ec-314325b0e0a2",
00:17:09.653      "assigned_rate_limits": {
00:17:09.653        "rw_ios_per_sec": 0,
00:17:09.653        "rw_mbytes_per_sec": 0,
00:17:09.653        "r_mbytes_per_sec": 0,
00:17:09.653        "w_mbytes_per_sec": 0
00:17:09.653      },
00:17:09.653      "claimed": true,
00:17:09.653      "claim_type": "exclusive_write",
00:17:09.653      "zoned": false,
00:17:09.653      "supported_io_types": {
00:17:09.653        "read": true,
00:17:09.653        "write": true,
00:17:09.653        "unmap": true,
00:17:09.653        "write_zeroes": true,
00:17:09.653        "flush": true,
00:17:09.653        "reset": true,
00:17:09.653        "compare": false,
00:17:09.653        "compare_and_write": false,
00:17:09.653        "abort": true,
00:17:09.653        "nvme_admin": false,
00:17:09.653        "nvme_io": false
00:17:09.653      },
00:17:09.653      "memory_domains": [
00:17:09.653        {
00:17:09.653          "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:17:09.653          "dma_device_type": 2
00:17:09.653        }
00:17:09.653      ],
00:17:09.653      "driver_specific": {}
00:17:09.653    }
00:17:09.653  ]
00:17:09.653   17:01:02	-- common/autotest_common.sh@905 -- # return 0
00:17:09.653   17:01:02	-- bdev/bdev_raid.sh@254 -- # (( i++ ))
00:17:09.653   17:01:02	-- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs ))
00:17:09.653   17:01:02	-- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4
00:17:09.653   17:01:02	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:17:09.653   17:01:02	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:17:09.653   17:01:02	-- bdev/bdev_raid.sh@119 -- # local raid_level=concat
00:17:09.653   17:01:02	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:17:09.653   17:01:02	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:17:09.653   17:01:02	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:17:09.653   17:01:02	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:17:09.653   17:01:02	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:17:09.653   17:01:02	-- bdev/bdev_raid.sh@125 -- # local tmp
00:17:09.653    17:01:02	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:17:09.653    17:01:02	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:17:09.912   17:01:02	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:17:09.912    "name": "Existed_Raid",
00:17:09.912    "uuid": "00000000-0000-0000-0000-000000000000",
00:17:09.912    "strip_size_kb": 64,
00:17:09.912    "state": "configuring",
00:17:09.912    "raid_level": "concat",
00:17:09.912    "superblock": false,
00:17:09.912    "num_base_bdevs": 4,
00:17:09.912    "num_base_bdevs_discovered": 2,
00:17:09.912    "num_base_bdevs_operational": 4,
00:17:09.912    "base_bdevs_list": [
00:17:09.912      {
00:17:09.912        "name": "BaseBdev1",
00:17:09.912        "uuid": "4a8deb02-2f36-45d0-86dd-3f3cb5ebb289",
00:17:09.912        "is_configured": true,
00:17:09.912        "data_offset": 0,
00:17:09.912        "data_size": 65536
00:17:09.912      },
00:17:09.912      {
00:17:09.912        "name": "BaseBdev2",
00:17:09.912        "uuid": "fe960dc0-ec9d-45a9-a7ec-314325b0e0a2",
00:17:09.912        "is_configured": true,
00:17:09.912        "data_offset": 0,
00:17:09.912        "data_size": 65536
00:17:09.912      },
00:17:09.912      {
00:17:09.912        "name": "BaseBdev3",
00:17:09.912        "uuid": "00000000-0000-0000-0000-000000000000",
00:17:09.912        "is_configured": false,
00:17:09.912        "data_offset": 0,
00:17:09.912        "data_size": 0
00:17:09.912      },
00:17:09.912      {
00:17:09.912        "name": "BaseBdev4",
00:17:09.912        "uuid": "00000000-0000-0000-0000-000000000000",
00:17:09.912        "is_configured": false,
00:17:09.912        "data_offset": 0,
00:17:09.912        "data_size": 0
00:17:09.912      }
00:17:09.912    ]
00:17:09.912  }'
00:17:09.912   17:01:02	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:17:09.912   17:01:02	-- common/autotest_common.sh@10 -- # set +x
00:17:10.479   17:01:03	-- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3
00:17:10.738  [2024-11-19 17:01:03.390104] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:17:10.738  BaseBdev3
00:17:10.738   17:01:03	-- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3
00:17:10.738   17:01:03	-- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3
00:17:10.738   17:01:03	-- common/autotest_common.sh@898 -- # local bdev_timeout=
00:17:10.738   17:01:03	-- common/autotest_common.sh@899 -- # local i
00:17:10.738   17:01:03	-- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:17:10.738   17:01:03	-- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:17:10.738   17:01:03	-- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:17:10.997   17:01:03	-- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000
00:17:11.255  [
00:17:11.255    {
00:17:11.255      "name": "BaseBdev3",
00:17:11.255      "aliases": [
00:17:11.255        "cd6677e9-93e6-46be-a160-73427ea5bcdc"
00:17:11.255      ],
00:17:11.255      "product_name": "Malloc disk",
00:17:11.255      "block_size": 512,
00:17:11.255      "num_blocks": 65536,
00:17:11.255      "uuid": "cd6677e9-93e6-46be-a160-73427ea5bcdc",
00:17:11.255      "assigned_rate_limits": {
00:17:11.255        "rw_ios_per_sec": 0,
00:17:11.255        "rw_mbytes_per_sec": 0,
00:17:11.255        "r_mbytes_per_sec": 0,
00:17:11.255        "w_mbytes_per_sec": 0
00:17:11.255      },
00:17:11.255      "claimed": true,
00:17:11.255      "claim_type": "exclusive_write",
00:17:11.255      "zoned": false,
00:17:11.255      "supported_io_types": {
00:17:11.255        "read": true,
00:17:11.255        "write": true,
00:17:11.255        "unmap": true,
00:17:11.255        "write_zeroes": true,
00:17:11.255        "flush": true,
00:17:11.255        "reset": true,
00:17:11.255        "compare": false,
00:17:11.255        "compare_and_write": false,
00:17:11.255        "abort": true,
00:17:11.255        "nvme_admin": false,
00:17:11.255        "nvme_io": false
00:17:11.255      },
00:17:11.255      "memory_domains": [
00:17:11.255        {
00:17:11.255          "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:17:11.255          "dma_device_type": 2
00:17:11.255        }
00:17:11.255      ],
00:17:11.255      "driver_specific": {}
00:17:11.255    }
00:17:11.255  ]
00:17:11.255   17:01:03	-- common/autotest_common.sh@905 -- # return 0
00:17:11.256   17:01:03	-- bdev/bdev_raid.sh@254 -- # (( i++ ))
00:17:11.256   17:01:03	-- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs ))
00:17:11.256   17:01:03	-- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4
00:17:11.256   17:01:03	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:17:11.256   17:01:03	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:17:11.256   17:01:03	-- bdev/bdev_raid.sh@119 -- # local raid_level=concat
00:17:11.256   17:01:03	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:17:11.256   17:01:03	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:17:11.256   17:01:03	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:17:11.256   17:01:03	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:17:11.256   17:01:03	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:17:11.256   17:01:03	-- bdev/bdev_raid.sh@125 -- # local tmp
00:17:11.256    17:01:03	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:17:11.256    17:01:03	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:17:11.256   17:01:04	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:17:11.256    "name": "Existed_Raid",
00:17:11.256    "uuid": "00000000-0000-0000-0000-000000000000",
00:17:11.256    "strip_size_kb": 64,
00:17:11.256    "state": "configuring",
00:17:11.256    "raid_level": "concat",
00:17:11.256    "superblock": false,
00:17:11.256    "num_base_bdevs": 4,
00:17:11.256    "num_base_bdevs_discovered": 3,
00:17:11.256    "num_base_bdevs_operational": 4,
00:17:11.256    "base_bdevs_list": [
00:17:11.256      {
00:17:11.256        "name": "BaseBdev1",
00:17:11.256        "uuid": "4a8deb02-2f36-45d0-86dd-3f3cb5ebb289",
00:17:11.256        "is_configured": true,
00:17:11.256        "data_offset": 0,
00:17:11.256        "data_size": 65536
00:17:11.256      },
00:17:11.256      {
00:17:11.256        "name": "BaseBdev2",
00:17:11.256        "uuid": "fe960dc0-ec9d-45a9-a7ec-314325b0e0a2",
00:17:11.256        "is_configured": true,
00:17:11.256        "data_offset": 0,
00:17:11.256        "data_size": 65536
00:17:11.256      },
00:17:11.256      {
00:17:11.256        "name": "BaseBdev3",
00:17:11.256        "uuid": "cd6677e9-93e6-46be-a160-73427ea5bcdc",
00:17:11.256        "is_configured": true,
00:17:11.256        "data_offset": 0,
00:17:11.256        "data_size": 65536
00:17:11.256      },
00:17:11.256      {
00:17:11.256        "name": "BaseBdev4",
00:17:11.256        "uuid": "00000000-0000-0000-0000-000000000000",
00:17:11.256        "is_configured": false,
00:17:11.256        "data_offset": 0,
00:17:11.256        "data_size": 0
00:17:11.256      }
00:17:11.256    ]
00:17:11.256  }'
00:17:11.256   17:01:04	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:17:11.256   17:01:04	-- common/autotest_common.sh@10 -- # set +x
00:17:12.191   17:01:04	-- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4
00:17:12.191  [2024-11-19 17:01:04.953644] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed
00:17:12.191  [2024-11-19 17:01:04.953708] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006080
00:17:12.191  [2024-11-19 17:01:04.953718] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512
00:17:12.191  [2024-11-19 17:01:04.953878] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002120
00:17:12.191  [2024-11-19 17:01:04.954281] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006080
00:17:12.191  [2024-11-19 17:01:04.954303] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006080
00:17:12.191  [2024-11-19 17:01:04.954527] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:17:12.191  BaseBdev4
00:17:12.191   17:01:04	-- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4
00:17:12.191   17:01:04	-- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4
00:17:12.191   17:01:04	-- common/autotest_common.sh@898 -- # local bdev_timeout=
00:17:12.192   17:01:04	-- common/autotest_common.sh@899 -- # local i
00:17:12.192   17:01:04	-- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:17:12.192   17:01:04	-- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:17:12.192   17:01:04	-- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:17:12.450   17:01:05	-- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000
00:17:12.708  [
00:17:12.708    {
00:17:12.708      "name": "BaseBdev4",
00:17:12.708      "aliases": [
00:17:12.708        "f730cbed-e02b-4e04-b020-7aeef7c21462"
00:17:12.708      ],
00:17:12.708      "product_name": "Malloc disk",
00:17:12.708      "block_size": 512,
00:17:12.708      "num_blocks": 65536,
00:17:12.708      "uuid": "f730cbed-e02b-4e04-b020-7aeef7c21462",
00:17:12.708      "assigned_rate_limits": {
00:17:12.708        "rw_ios_per_sec": 0,
00:17:12.708        "rw_mbytes_per_sec": 0,
00:17:12.708        "r_mbytes_per_sec": 0,
00:17:12.708        "w_mbytes_per_sec": 0
00:17:12.708      },
00:17:12.708      "claimed": true,
00:17:12.708      "claim_type": "exclusive_write",
00:17:12.708      "zoned": false,
00:17:12.708      "supported_io_types": {
00:17:12.708        "read": true,
00:17:12.708        "write": true,
00:17:12.708        "unmap": true,
00:17:12.708        "write_zeroes": true,
00:17:12.708        "flush": true,
00:17:12.708        "reset": true,
00:17:12.708        "compare": false,
00:17:12.708        "compare_and_write": false,
00:17:12.708        "abort": true,
00:17:12.708        "nvme_admin": false,
00:17:12.708        "nvme_io": false
00:17:12.708      },
00:17:12.708      "memory_domains": [
00:17:12.708        {
00:17:12.708          "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:17:12.708          "dma_device_type": 2
00:17:12.708        }
00:17:12.708      ],
00:17:12.708      "driver_specific": {}
00:17:12.708    }
00:17:12.708  ]
00:17:12.708   17:01:05	-- common/autotest_common.sh@905 -- # return 0
00:17:12.708   17:01:05	-- bdev/bdev_raid.sh@254 -- # (( i++ ))
00:17:12.708   17:01:05	-- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs ))
00:17:12.708   17:01:05	-- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 4
00:17:12.708   17:01:05	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:17:12.708   17:01:05	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:17:12.708   17:01:05	-- bdev/bdev_raid.sh@119 -- # local raid_level=concat
00:17:12.708   17:01:05	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:17:12.708   17:01:05	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:17:12.708   17:01:05	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:17:12.708   17:01:05	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:17:12.708   17:01:05	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:17:12.708   17:01:05	-- bdev/bdev_raid.sh@125 -- # local tmp
00:17:12.708    17:01:05	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:17:12.708    17:01:05	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:17:12.967   17:01:05	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:17:12.967    "name": "Existed_Raid",
00:17:12.967    "uuid": "80c36523-bd12-463a-b42f-9fd78de8b2e4",
00:17:12.967    "strip_size_kb": 64,
00:17:12.967    "state": "online",
00:17:12.967    "raid_level": "concat",
00:17:12.967    "superblock": false,
00:17:12.967    "num_base_bdevs": 4,
00:17:12.967    "num_base_bdevs_discovered": 4,
00:17:12.967    "num_base_bdevs_operational": 4,
00:17:12.967    "base_bdevs_list": [
00:17:12.967      {
00:17:12.967        "name": "BaseBdev1",
00:17:12.967        "uuid": "4a8deb02-2f36-45d0-86dd-3f3cb5ebb289",
00:17:12.967        "is_configured": true,
00:17:12.967        "data_offset": 0,
00:17:12.967        "data_size": 65536
00:17:12.967      },
00:17:12.967      {
00:17:12.967        "name": "BaseBdev2",
00:17:12.967        "uuid": "fe960dc0-ec9d-45a9-a7ec-314325b0e0a2",
00:17:12.967        "is_configured": true,
00:17:12.967        "data_offset": 0,
00:17:12.967        "data_size": 65536
00:17:12.967      },
00:17:12.967      {
00:17:12.967        "name": "BaseBdev3",
00:17:12.967        "uuid": "cd6677e9-93e6-46be-a160-73427ea5bcdc",
00:17:12.967        "is_configured": true,
00:17:12.967        "data_offset": 0,
00:17:12.967        "data_size": 65536
00:17:12.967      },
00:17:12.967      {
00:17:12.967        "name": "BaseBdev4",
00:17:12.967        "uuid": "f730cbed-e02b-4e04-b020-7aeef7c21462",
00:17:12.967        "is_configured": true,
00:17:12.967        "data_offset": 0,
00:17:12.967        "data_size": 65536
00:17:12.967      }
00:17:12.967    ]
00:17:12.967  }'
00:17:12.967   17:01:05	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:17:12.967   17:01:05	-- common/autotest_common.sh@10 -- # set +x
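[Annotation: verify_raid_bdev_state is the workhorse assertion in this test: it pulls the full raid bdev list over RPC, isolates the entry by name with jq, then compares the recorded state, raid level, strip size, and member counts against the expected values. A condensed sketch of the check (field comparisons abbreviated; the real helper asserts each field in turn):

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    info=$($rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")')
    [[ $(jq -r .state      <<<"$info") == online ]]            # expected_state
    [[ $(jq -r .raid_level <<<"$info") == concat ]]            # raid_level
    [[ $(jq -r .num_base_bdevs_discovered <<<"$info") -eq 4 ]] # member count
]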
00:17:13.532   17:01:06	-- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1
00:17:13.790  [2024-11-19 17:01:06.474112] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:17:13.790  [2024-11-19 17:01:06.474148] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:17:13.790  [2024-11-19 17:01:06.474236] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:17:13.790   17:01:06	-- bdev/bdev_raid.sh@263 -- # local expected_state
00:17:13.790   17:01:06	-- bdev/bdev_raid.sh@264 -- # has_redundancy concat
00:17:13.790   17:01:06	-- bdev/bdev_raid.sh@195 -- # case $1 in
00:17:13.790   17:01:06	-- bdev/bdev_raid.sh@197 -- # return 1
00:17:13.791   17:01:06	-- bdev/bdev_raid.sh@265 -- # expected_state=offline
00:17:13.791   17:01:06	-- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3
00:17:13.791   17:01:06	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:17:13.791   17:01:06	-- bdev/bdev_raid.sh@118 -- # local expected_state=offline
00:17:13.791   17:01:06	-- bdev/bdev_raid.sh@119 -- # local raid_level=concat
00:17:13.791   17:01:06	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:17:13.791   17:01:06	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:17:13.791   17:01:06	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:17:13.791   17:01:06	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:17:13.791   17:01:06	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:17:13.791   17:01:06	-- bdev/bdev_raid.sh@125 -- # local tmp
00:17:13.791    17:01:06	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:17:13.791    17:01:06	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:17:14.049   17:01:06	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:17:14.049    "name": "Existed_Raid",
00:17:14.049    "uuid": "80c36523-bd12-463a-b42f-9fd78de8b2e4",
00:17:14.049    "strip_size_kb": 64,
00:17:14.049    "state": "offline",
00:17:14.049    "raid_level": "concat",
00:17:14.049    "superblock": false,
00:17:14.049    "num_base_bdevs": 4,
00:17:14.049    "num_base_bdevs_discovered": 3,
00:17:14.049    "num_base_bdevs_operational": 3,
00:17:14.049    "base_bdevs_list": [
00:17:14.049      {
00:17:14.049        "name": null,
00:17:14.049        "uuid": "00000000-0000-0000-0000-000000000000",
00:17:14.049        "is_configured": false,
00:17:14.049        "data_offset": 0,
00:17:14.049        "data_size": 65536
00:17:14.049      },
00:17:14.049      {
00:17:14.049        "name": "BaseBdev2",
00:17:14.049        "uuid": "fe960dc0-ec9d-45a9-a7ec-314325b0e0a2",
00:17:14.049        "is_configured": true,
00:17:14.049        "data_offset": 0,
00:17:14.049        "data_size": 65536
00:17:14.049      },
00:17:14.049      {
00:17:14.049        "name": "BaseBdev3",
00:17:14.049        "uuid": "cd6677e9-93e6-46be-a160-73427ea5bcdc",
00:17:14.049        "is_configured": true,
00:17:14.049        "data_offset": 0,
00:17:14.049        "data_size": 65536
00:17:14.049      },
00:17:14.049      {
00:17:14.049        "name": "BaseBdev4",
00:17:14.049        "uuid": "f730cbed-e02b-4e04-b020-7aeef7c21462",
00:17:14.049        "is_configured": true,
00:17:14.049        "data_offset": 0,
00:17:14.049        "data_size": 65536
00:17:14.049      }
00:17:14.049    ]
00:17:14.049  }'
00:17:14.049   17:01:06	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:17:14.049   17:01:06	-- common/autotest_common.sh@10 -- # set +x
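[Annotation: the offline transition above is the expected result for concat: has_redundancy returned 1 at @197 (concat, like raid0, tolerates no member loss), so deleting BaseBdev1 drops the array from online to offline, leaving the removed slot as a null-name placeholder with the zero UUID while discovered/operational fall from 4 to 3. A one-liner to count the surviving configured members from the same RPC output (a sketch):

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    $rpc bdev_raid_get_bdevs all | jq '[.[] | select(.name == "Existed_Raid").base_bdevs_list[] | select(.is_configured)] | length'   # prints 3 here
]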
00:17:14.615   17:01:07	-- bdev/bdev_raid.sh@273 -- # (( i = 1 ))
00:17:14.615   17:01:07	-- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs ))
00:17:14.615    17:01:07	-- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:17:14.615    17:01:07	-- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]'
00:17:14.874   17:01:07	-- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid
00:17:14.874   17:01:07	-- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:17:14.874   17:01:07	-- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2
00:17:14.874  [2024-11-19 17:01:07.707320] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:17:15.133   17:01:07	-- bdev/bdev_raid.sh@273 -- # (( i++ ))
00:17:15.133   17:01:07	-- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs ))
00:17:15.133    17:01:07	-- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:17:15.133    17:01:07	-- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]'
00:17:15.391   17:01:08	-- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid
00:17:15.391   17:01:08	-- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:17:15.391   17:01:08	-- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3
00:17:15.391  [2024-11-19 17:01:08.232026] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:17:15.649   17:01:08	-- bdev/bdev_raid.sh@273 -- # (( i++ ))
00:17:15.649   17:01:08	-- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs ))
00:17:15.649    17:01:08	-- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:17:15.649    17:01:08	-- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]'
00:17:15.908   17:01:08	-- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid
00:17:15.908   17:01:08	-- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:17:15.908   17:01:08	-- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4
00:17:15.908  [2024-11-19 17:01:08.692680] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4
00:17:15.908  [2024-11-19 17:01:08.692745] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006080 name Existed_Raid, state offline
00:17:15.908   17:01:08	-- bdev/bdev_raid.sh@273 -- # (( i++ ))
00:17:15.908   17:01:08	-- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs ))
00:17:15.908    17:01:08	-- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:17:15.908    17:01:08	-- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)'
00:17:16.167   17:01:08	-- bdev/bdev_raid.sh@281 -- # raid_bdev=
00:17:16.167   17:01:08	-- bdev/bdev_raid.sh@282 -- # '[' -n '' ']'
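[Annotation: the loop at @273-@279 deletes the remaining members one by one; while the raid bdev still exists, jq reports its name and the '!=' guard passes, and once the last base bdev is gone raid_bdev_cleanup removes Existed_Raid itself, so the final query yields an empty string and the '-n' test above fails. A compressed sketch of that teardown pattern:

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    for b in BaseBdev2 BaseBdev3 BaseBdev4; do
        $rpc bdev_malloc_delete "$b"                       # drop one member
        name=$($rpc bdev_raid_get_bdevs all | jq -r '.[0]["name"] | select(.)')
        [[ -n $name ]] || break                            # raid bdev gone, loop ends
    done
]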
00:17:16.167   17:01:08	-- bdev/bdev_raid.sh@287 -- # killprocess 129941
00:17:16.167   17:01:08	-- common/autotest_common.sh@936 -- # '[' -z 129941 ']'
00:17:16.167   17:01:08	-- common/autotest_common.sh@940 -- # kill -0 129941
00:17:16.167    17:01:08	-- common/autotest_common.sh@941 -- # uname
00:17:16.167   17:01:08	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:17:16.167    17:01:08	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 129941
00:17:16.167   17:01:08	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:17:16.167   17:01:08	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:17:16.167   17:01:08	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 129941'
00:17:16.167  killing process with pid 129941
00:17:16.167   17:01:08	-- common/autotest_common.sh@955 -- # kill 129941
00:17:16.167   17:01:08	-- common/autotest_common.sh@960 -- # wait 129941
00:17:16.167  [2024-11-19 17:01:08.954001] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:17:16.167  [2024-11-19 17:01:08.954121] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:17:16.737   17:01:09	-- bdev/bdev_raid.sh@289 -- # return 0
00:17:16.737  
00:17:16.737  real	0m13.089s
00:17:16.737  user	0m23.425s
00:17:16.737  sys	0m2.199s
00:17:16.737   17:01:09	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:17:16.737   17:01:09	-- common/autotest_common.sh@10 -- # set +x
00:17:16.737  ************************************
00:17:16.737  END TEST raid_state_function_test
00:17:16.737  ************************************
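[Annotation: killprocess, traced at @287 above, checks that the pid still belongs to the SPDK reactor before signalling it: on Linux it reads the process comm name (reactor_0 here), refuses to kill anything running as sudo, then kills the pid and waits for it to exit, which is when the raid module logs fini_start/exit. The real/user/sys lines are ordinary bash time output for the whole test, printed by the run_test wrapper along with the START/END banners. A simplified sketch of the guard, based on the calls visible in the trace:

    killproc() {                                  # simplified; not the exact helper
        local pid=$1
        [[ $(uname) == Linux ]] && \
            [[ $(ps --no-headers -o comm= "$pid") == sudo ]] && return 1
        kill "$pid" && wait "$pid"                # wait works: the app is our child
    }
]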
00:17:16.737   17:01:09	-- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test concat 4 true
00:17:16.737   17:01:09	-- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']'
00:17:16.737   17:01:09	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:17:16.737   17:01:09	-- common/autotest_common.sh@10 -- # set +x
00:17:16.737  ************************************
00:17:16.737  START TEST raid_state_function_test_sb
00:17:16.737  ************************************
00:17:16.737   17:01:09	-- common/autotest_common.sh@1114 -- # raid_state_function_test concat 4 true
00:17:16.737   17:01:09	-- bdev/bdev_raid.sh@202 -- # local raid_level=concat
00:17:16.737   17:01:09	-- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4
00:17:16.737   17:01:09	-- bdev/bdev_raid.sh@204 -- # local superblock=true
00:17:16.737   17:01:09	-- bdev/bdev_raid.sh@205 -- # local raid_bdev
00:17:16.737    17:01:09	-- bdev/bdev_raid.sh@206 -- # (( i = 1 ))
00:17:16.737    17:01:09	-- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:17:16.737    17:01:09	-- bdev/bdev_raid.sh@206 -- # echo BaseBdev1
00:17:16.737    17:01:09	-- bdev/bdev_raid.sh@206 -- # (( i++ ))
00:17:16.737    17:01:09	-- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:17:16.737    17:01:09	-- bdev/bdev_raid.sh@206 -- # echo BaseBdev2
00:17:16.737    17:01:09	-- bdev/bdev_raid.sh@206 -- # (( i++ ))
00:17:16.737    17:01:09	-- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:17:16.737    17:01:09	-- bdev/bdev_raid.sh@206 -- # echo BaseBdev3
00:17:16.737    17:01:09	-- bdev/bdev_raid.sh@206 -- # (( i++ ))
00:17:16.737    17:01:09	-- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:17:16.737    17:01:09	-- bdev/bdev_raid.sh@206 -- # echo BaseBdev4
00:17:16.737    17:01:09	-- bdev/bdev_raid.sh@206 -- # (( i++ ))
00:17:16.737    17:01:09	-- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:17:16.737   17:01:09	-- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4')
00:17:16.737   17:01:09	-- bdev/bdev_raid.sh@206 -- # local base_bdevs
00:17:16.737   17:01:09	-- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid
00:17:16.737   17:01:09	-- bdev/bdev_raid.sh@208 -- # local strip_size
00:17:16.737   17:01:09	-- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg
00:17:16.737   17:01:09	-- bdev/bdev_raid.sh@210 -- # local superblock_create_arg
00:17:16.737   17:01:09	-- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']'
00:17:16.737   17:01:09	-- bdev/bdev_raid.sh@213 -- # strip_size=64
00:17:16.737   17:01:09	-- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64'
00:17:16.737   17:01:09	-- bdev/bdev_raid.sh@219 -- # '[' true = true ']'
00:17:16.737   17:01:09	-- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s
00:17:16.737   17:01:09	-- bdev/bdev_raid.sh@226 -- # raid_pid=130359
00:17:16.737  Process raid pid: 130359
00:17:16.737   17:01:09	-- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 130359'
00:17:16.737   17:01:09	-- bdev/bdev_raid.sh@228 -- # waitforlisten 130359 /var/tmp/spdk-raid.sock
00:17:16.737   17:01:09	-- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid
00:17:16.737   17:01:09	-- common/autotest_common.sh@829 -- # '[' -z 130359 ']'
00:17:16.737   17:01:09	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock
00:17:16.737  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...
00:17:16.737   17:01:09	-- common/autotest_common.sh@834 -- # local max_retries=100
00:17:16.737   17:01:09	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...'
00:17:16.737   17:01:09	-- common/autotest_common.sh@838 -- # xtrace_disable
00:17:16.737   17:01:09	-- common/autotest_common.sh@10 -- # set +x
00:17:16.737  [2024-11-19 17:01:09.491944] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:17:16.737  [2024-11-19 17:01:09.492180] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:17:16.996  [2024-11-19 17:01:09.651685] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:17:16.996  [2024-11-19 17:01:09.705345] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:17:16.996  [2024-11-19 17:01:09.753278] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:17:17.563   17:01:10	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:17:17.563   17:01:10	-- common/autotest_common.sh@862 -- # return 0
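[Annotation: the sb variant starts its own bdev_svc app with -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid, which is why every DEBUG line from bdev_raid.c appears, and waitforlisten polls until the UNIX-domain RPC socket answers before the test proceeds. A minimal sketch of that startup handshake (the polling loop is an assumption for illustration; the harness's waitforlisten is more thorough):

    app=/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc
    $app -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &
    raid_pid=$!
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock rpc_get_methods &>/dev/null; do
        sleep 0.1                                 # wait for the socket to answer
    done
]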
00:17:17.563   17:01:10	-- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
00:17:17.821  [2024-11-19 17:01:10.633717] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:17:17.821  [2024-11-19 17:01:10.633985] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:17:17.821  [2024-11-19 17:01:10.634103] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:17:17.821  [2024-11-19 17:01:10.634156] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:17:17.821  [2024-11-19 17:01:10.634182] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:17:17.821  [2024-11-19 17:01:10.634318] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:17:17.821  [2024-11-19 17:01:10.634355] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:17:17.821  [2024-11-19 17:01:10.634404] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:17:17.821   17:01:10	-- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4
00:17:17.821   17:01:10	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:17:17.821   17:01:10	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:17:17.821   17:01:10	-- bdev/bdev_raid.sh@119 -- # local raid_level=concat
00:17:17.821   17:01:10	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:17:17.821   17:01:10	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:17:17.821   17:01:10	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:17:17.821   17:01:10	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:17:17.821   17:01:10	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:17:17.821   17:01:10	-- bdev/bdev_raid.sh@125 -- # local tmp
00:17:17.821    17:01:10	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:17:17.821    17:01:10	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:17:18.080   17:01:10	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:17:18.080    "name": "Existed_Raid",
00:17:18.080    "uuid": "1109e02c-5318-43fd-ad20-7301beeb4eed",
00:17:18.080    "strip_size_kb": 64,
00:17:18.080    "state": "configuring",
00:17:18.080    "raid_level": "concat",
00:17:18.080    "superblock": true,
00:17:18.080    "num_base_bdevs": 4,
00:17:18.080    "num_base_bdevs_discovered": 0,
00:17:18.080    "num_base_bdevs_operational": 4,
00:17:18.080    "base_bdevs_list": [
00:17:18.080      {
00:17:18.080        "name": "BaseBdev1",
00:17:18.080        "uuid": "00000000-0000-0000-0000-000000000000",
00:17:18.080        "is_configured": false,
00:17:18.080        "data_offset": 0,
00:17:18.080        "data_size": 0
00:17:18.080      },
00:17:18.080      {
00:17:18.080        "name": "BaseBdev2",
00:17:18.080        "uuid": "00000000-0000-0000-0000-000000000000",
00:17:18.080        "is_configured": false,
00:17:18.080        "data_offset": 0,
00:17:18.080        "data_size": 0
00:17:18.080      },
00:17:18.080      {
00:17:18.080        "name": "BaseBdev3",
00:17:18.080        "uuid": "00000000-0000-0000-0000-000000000000",
00:17:18.080        "is_configured": false,
00:17:18.080        "data_offset": 0,
00:17:18.080        "data_size": 0
00:17:18.080      },
00:17:18.080      {
00:17:18.080        "name": "BaseBdev4",
00:17:18.080        "uuid": "00000000-0000-0000-0000-000000000000",
00:17:18.080        "is_configured": false,
00:17:18.080        "data_offset": 0,
00:17:18.080        "data_size": 0
00:17:18.080      }
00:17:18.080    ]
00:17:18.080  }'
00:17:18.080   17:01:10	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:17:18.080   17:01:10	-- common/autotest_common.sh@10 -- # set +x
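[Annotation: because bdev_raid_create ran before any of the four malloc disks existed, the raid bdev is parked in "configuring": all four slots are zero-UUID placeholders, num_base_bdevs_discovered is 0, and superblock is now true since the create used -s. The state can be asserted with a single jq expression (sketch):

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    [[ $($rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid").state') == configuring ]]
]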
00:17:18.648   17:01:11	-- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid
00:17:18.906  [2024-11-19 17:01:11.721730] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:17:18.906  [2024-11-19 17:01:11.721992] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005480 name Existed_Raid, state configuring
00:17:18.906   17:01:11	-- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
00:17:19.165  [2024-11-19 17:01:11.981843] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:17:19.165  [2024-11-19 17:01:11.982091] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:17:19.165  [2024-11-19 17:01:11.982226] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:17:19.165  [2024-11-19 17:01:11.982292] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:17:19.165  [2024-11-19 17:01:11.982473] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:17:19.165  [2024-11-19 17:01:11.982522] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:17:19.165  [2024-11-19 17:01:11.982551] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:17:19.165  [2024-11-19 17:01:11.982744] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:17:19.165   17:01:11	-- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1
00:17:19.422  [2024-11-19 17:01:12.243425] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:17:19.422  BaseBdev1
00:17:19.422   17:01:12	-- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1
00:17:19.422   17:01:12	-- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1
00:17:19.422   17:01:12	-- common/autotest_common.sh@898 -- # local bdev_timeout=
00:17:19.422   17:01:12	-- common/autotest_common.sh@899 -- # local i
00:17:19.422   17:01:12	-- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:17:19.422   17:01:12	-- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:17:19.423   17:01:12	-- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:17:19.988   17:01:12	-- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000
00:17:19.988  [
00:17:19.988    {
00:17:19.988      "name": "BaseBdev1",
00:17:19.988      "aliases": [
00:17:19.988        "a6caba59-2280-4971-bf3e-75d776eeb525"
00:17:19.988      ],
00:17:19.988      "product_name": "Malloc disk",
00:17:19.988      "block_size": 512,
00:17:19.988      "num_blocks": 65536,
00:17:19.988      "uuid": "a6caba59-2280-4971-bf3e-75d776eeb525",
00:17:19.988      "assigned_rate_limits": {
00:17:19.988        "rw_ios_per_sec": 0,
00:17:19.988        "rw_mbytes_per_sec": 0,
00:17:19.988        "r_mbytes_per_sec": 0,
00:17:19.988        "w_mbytes_per_sec": 0
00:17:19.988      },
00:17:19.988      "claimed": true,
00:17:19.988      "claim_type": "exclusive_write",
00:17:19.988      "zoned": false,
00:17:19.988      "supported_io_types": {
00:17:19.988        "read": true,
00:17:19.988        "write": true,
00:17:19.988        "unmap": true,
00:17:19.988        "write_zeroes": true,
00:17:19.988        "flush": true,
00:17:19.988        "reset": true,
00:17:19.988        "compare": false,
00:17:19.988        "compare_and_write": false,
00:17:19.988        "abort": true,
00:17:19.988        "nvme_admin": false,
00:17:19.988        "nvme_io": false
00:17:19.988      },
00:17:19.988      "memory_domains": [
00:17:19.988        {
00:17:19.988          "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:17:19.988          "dma_device_type": 2
00:17:19.988        }
00:17:19.988      ],
00:17:19.988      "driver_specific": {}
00:17:19.988    }
00:17:19.988  ]
00:17:19.988   17:01:12	-- common/autotest_common.sh@905 -- # return 0
00:17:19.988   17:01:12	-- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4
00:17:19.988   17:01:12	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:17:19.988   17:01:12	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:17:19.988   17:01:12	-- bdev/bdev_raid.sh@119 -- # local raid_level=concat
00:17:19.988   17:01:12	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:17:19.988   17:01:12	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:17:19.988   17:01:12	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:17:19.988   17:01:12	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:17:19.988   17:01:12	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:17:19.988   17:01:12	-- bdev/bdev_raid.sh@125 -- # local tmp
00:17:19.988    17:01:12	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:17:19.988    17:01:12	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:17:20.247   17:01:12	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:17:20.247    "name": "Existed_Raid",
00:17:20.247    "uuid": "e6819052-c4cf-48d9-8036-9a278c094e11",
00:17:20.247    "strip_size_kb": 64,
00:17:20.247    "state": "configuring",
00:17:20.247    "raid_level": "concat",
00:17:20.247    "superblock": true,
00:17:20.247    "num_base_bdevs": 4,
00:17:20.247    "num_base_bdevs_discovered": 1,
00:17:20.247    "num_base_bdevs_operational": 4,
00:17:20.247    "base_bdevs_list": [
00:17:20.247      {
00:17:20.247        "name": "BaseBdev1",
00:17:20.247        "uuid": "a6caba59-2280-4971-bf3e-75d776eeb525",
00:17:20.247        "is_configured": true,
00:17:20.247        "data_offset": 2048,
00:17:20.247        "data_size": 63488
00:17:20.247      },
00:17:20.247      {
00:17:20.247        "name": "BaseBdev2",
00:17:20.247        "uuid": "00000000-0000-0000-0000-000000000000",
00:17:20.247        "is_configured": false,
00:17:20.247        "data_offset": 0,
00:17:20.247        "data_size": 0
00:17:20.247      },
00:17:20.247      {
00:17:20.247        "name": "BaseBdev3",
00:17:20.247        "uuid": "00000000-0000-0000-0000-000000000000",
00:17:20.247        "is_configured": false,
00:17:20.247        "data_offset": 0,
00:17:20.247        "data_size": 0
00:17:20.247      },
00:17:20.247      {
00:17:20.247        "name": "BaseBdev4",
00:17:20.247        "uuid": "00000000-0000-0000-0000-000000000000",
00:17:20.247        "is_configured": false,
00:17:20.247        "data_offset": 0,
00:17:20.247        "data_size": 0
00:17:20.247      }
00:17:20.247    ]
00:17:20.247  }'
00:17:20.247   17:01:12	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:17:20.247   17:01:12	-- common/autotest_common.sh@10 -- # set +x
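[Annotation: the data_offset/data_size values above are where the -s flag becomes visible: each member reserves 2048 of its 65536 blocks (1 MiB at 512 B per block) at the front for the raid superblock, leaving 63488 data blocks, whereas the superblock-less run earlier showed data_offset 0 and data_size 65536. The arithmetic spelled out, with values taken from this trace:

    echo $(( 2048 * 512 ))       # 1048576 bytes = 1 MiB of superblock per member
    echo $(( 65536 - 2048 ))     # 63488 data blocks, matching data_size above
]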
00:17:20.813   17:01:13	-- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid
00:17:21.072  [2024-11-19 17:01:13.691776] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:17:21.072  [2024-11-19 17:01:13.692026] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005780 name Existed_Raid, state configuring
00:17:21.072   17:01:13	-- bdev/bdev_raid.sh@244 -- # '[' true = true ']'
00:17:21.072   17:01:13	-- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1
00:17:21.460   17:01:13	-- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1
00:17:21.460  BaseBdev1
00:17:21.460   17:01:14	-- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1
00:17:21.460   17:01:14	-- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1
00:17:21.460   17:01:14	-- common/autotest_common.sh@898 -- # local bdev_timeout=
00:17:21.460   17:01:14	-- common/autotest_common.sh@899 -- # local i
00:17:21.460   17:01:14	-- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:17:21.460   17:01:14	-- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:17:21.460   17:01:14	-- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:17:21.723   17:01:14	-- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000
00:17:21.723  [
00:17:21.723    {
00:17:21.723      "name": "BaseBdev1",
00:17:21.723      "aliases": [
00:17:21.723        "abfb280f-ea67-4d7b-a86f-0f6e69c9a894"
00:17:21.723      ],
00:17:21.723      "product_name": "Malloc disk",
00:17:21.723      "block_size": 512,
00:17:21.723      "num_blocks": 65536,
00:17:21.723      "uuid": "abfb280f-ea67-4d7b-a86f-0f6e69c9a894",
00:17:21.723      "assigned_rate_limits": {
00:17:21.723        "rw_ios_per_sec": 0,
00:17:21.723        "rw_mbytes_per_sec": 0,
00:17:21.723        "r_mbytes_per_sec": 0,
00:17:21.723        "w_mbytes_per_sec": 0
00:17:21.723      },
00:17:21.723      "claimed": false,
00:17:21.723      "zoned": false,
00:17:21.723      "supported_io_types": {
00:17:21.723        "read": true,
00:17:21.723        "write": true,
00:17:21.723        "unmap": true,
00:17:21.723        "write_zeroes": true,
00:17:21.723        "flush": true,
00:17:21.723        "reset": true,
00:17:21.723        "compare": false,
00:17:21.723        "compare_and_write": false,
00:17:21.723        "abort": true,
00:17:21.723        "nvme_admin": false,
00:17:21.723        "nvme_io": false
00:17:21.723      },
00:17:21.723      "memory_domains": [
00:17:21.723        {
00:17:21.723          "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:17:21.723          "dma_device_type": 2
00:17:21.723        }
00:17:21.723      ],
00:17:21.723      "driver_specific": {}
00:17:21.723    }
00:17:21.723  ]
00:17:21.723   17:01:14	-- common/autotest_common.sh@905 -- # return 0
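[Annotation: note the difference from the earlier dump: this recreated BaseBdev1 reports "claimed": false and carries no claim_type, because deleting the original released its claim and the new malloc disk has not yet been handed to bdev_raid_create (that happens at @253 just below). Whether a bdev is spoken for can be checked directly (sketch):

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    $rpc bdev_get_bdevs -b BaseBdev1 | jq '.[0].claimed'   # false until re-claimed
]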
00:17:21.723   17:01:14	-- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
00:17:21.981  [2024-11-19 17:01:14.749640] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:17:21.981  [2024-11-19 17:01:14.752125] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:17:21.981  [2024-11-19 17:01:14.752358] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:17:21.981  [2024-11-19 17:01:14.752469] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:17:21.981  [2024-11-19 17:01:14.752584] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:17:21.981  [2024-11-19 17:01:14.752657] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:17:21.981  [2024-11-19 17:01:14.752723] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:17:21.981   17:01:14	-- bdev/bdev_raid.sh@254 -- # (( i = 1 ))
00:17:21.981   17:01:14	-- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs ))
00:17:21.981   17:01:14	-- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4
00:17:21.981   17:01:14	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:17:21.981   17:01:14	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:17:21.981   17:01:14	-- bdev/bdev_raid.sh@119 -- # local raid_level=concat
00:17:21.981   17:01:14	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:17:21.981   17:01:14	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:17:21.981   17:01:14	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:17:21.981   17:01:14	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:17:21.981   17:01:14	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:17:21.981   17:01:14	-- bdev/bdev_raid.sh@125 -- # local tmp
00:17:21.981    17:01:14	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:17:21.981    17:01:14	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:17:22.239   17:01:14	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:17:22.239    "name": "Existed_Raid",
00:17:22.239    "uuid": "9e837a32-4643-4278-9e2c-cbf28a2029e1",
00:17:22.239    "strip_size_kb": 64,
00:17:22.239    "state": "configuring",
00:17:22.239    "raid_level": "concat",
00:17:22.239    "superblock": true,
00:17:22.239    "num_base_bdevs": 4,
00:17:22.239    "num_base_bdevs_discovered": 1,
00:17:22.239    "num_base_bdevs_operational": 4,
00:17:22.239    "base_bdevs_list": [
00:17:22.239      {
00:17:22.239        "name": "BaseBdev1",
00:17:22.239        "uuid": "abfb280f-ea67-4d7b-a86f-0f6e69c9a894",
00:17:22.239        "is_configured": true,
00:17:22.239        "data_offset": 2048,
00:17:22.239        "data_size": 63488
00:17:22.239      },
00:17:22.239      {
00:17:22.239        "name": "BaseBdev2",
00:17:22.239        "uuid": "00000000-0000-0000-0000-000000000000",
00:17:22.239        "is_configured": false,
00:17:22.239        "data_offset": 0,
00:17:22.239        "data_size": 0
00:17:22.239      },
00:17:22.239      {
00:17:22.239        "name": "BaseBdev3",
00:17:22.239        "uuid": "00000000-0000-0000-0000-000000000000",
00:17:22.239        "is_configured": false,
00:17:22.239        "data_offset": 0,
00:17:22.239        "data_size": 0
00:17:22.239      },
00:17:22.239      {
00:17:22.239        "name": "BaseBdev4",
00:17:22.239        "uuid": "00000000-0000-0000-0000-000000000000",
00:17:22.239        "is_configured": false,
00:17:22.239        "data_offset": 0,
00:17:22.239        "data_size": 0
00:17:22.239      }
00:17:22.239    ]
00:17:22.239  }'
00:17:22.239   17:01:14	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:17:22.239   17:01:14	-- common/autotest_common.sh@10 -- # set +x
00:17:22.806   17:01:15	-- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2
00:17:23.065  [2024-11-19 17:01:15.774206] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:17:23.065  BaseBdev2
00:17:23.065   17:01:15	-- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2
00:17:23.065   17:01:15	-- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2
00:17:23.065   17:01:15	-- common/autotest_common.sh@898 -- # local bdev_timeout=
00:17:23.065   17:01:15	-- common/autotest_common.sh@899 -- # local i
00:17:23.065   17:01:15	-- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:17:23.065   17:01:15	-- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:17:23.065   17:01:15	-- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:17:23.323   17:01:15	-- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000
00:17:23.582  [
00:17:23.582    {
00:17:23.582      "name": "BaseBdev2",
00:17:23.582      "aliases": [
00:17:23.582        "e8f602d6-e51b-445d-95c3-c24563a4360a"
00:17:23.582      ],
00:17:23.582      "product_name": "Malloc disk",
00:17:23.582      "block_size": 512,
00:17:23.582      "num_blocks": 65536,
00:17:23.582      "uuid": "e8f602d6-e51b-445d-95c3-c24563a4360a",
00:17:23.582      "assigned_rate_limits": {
00:17:23.582        "rw_ios_per_sec": 0,
00:17:23.582        "rw_mbytes_per_sec": 0,
00:17:23.582        "r_mbytes_per_sec": 0,
00:17:23.582        "w_mbytes_per_sec": 0
00:17:23.582      },
00:17:23.582      "claimed": true,
00:17:23.582      "claim_type": "exclusive_write",
00:17:23.582      "zoned": false,
00:17:23.582      "supported_io_types": {
00:17:23.582        "read": true,
00:17:23.582        "write": true,
00:17:23.582        "unmap": true,
00:17:23.582        "write_zeroes": true,
00:17:23.582        "flush": true,
00:17:23.582        "reset": true,
00:17:23.582        "compare": false,
00:17:23.582        "compare_and_write": false,
00:17:23.582        "abort": true,
00:17:23.582        "nvme_admin": false,
00:17:23.582        "nvme_io": false
00:17:23.582      },
00:17:23.582      "memory_domains": [
00:17:23.582        {
00:17:23.582          "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:17:23.582          "dma_device_type": 2
00:17:23.582        }
00:17:23.582      ],
00:17:23.582      "driver_specific": {}
00:17:23.582    }
00:17:23.582  ]
00:17:23.582   17:01:16	-- common/autotest_common.sh@905 -- # return 0
00:17:23.582   17:01:16	-- bdev/bdev_raid.sh@254 -- # (( i++ ))
00:17:23.582   17:01:16	-- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs ))
00:17:23.582   17:01:16	-- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4
00:17:23.582   17:01:16	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:17:23.582   17:01:16	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:17:23.582   17:01:16	-- bdev/bdev_raid.sh@119 -- # local raid_level=concat
00:17:23.582   17:01:16	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:17:23.582   17:01:16	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:17:23.582   17:01:16	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:17:23.582   17:01:16	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:17:23.582   17:01:16	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:17:23.582   17:01:16	-- bdev/bdev_raid.sh@125 -- # local tmp
00:17:23.582    17:01:16	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:17:23.582    17:01:16	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:17:23.840   17:01:16	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:17:23.840    "name": "Existed_Raid",
00:17:23.840    "uuid": "9e837a32-4643-4278-9e2c-cbf28a2029e1",
00:17:23.840    "strip_size_kb": 64,
00:17:23.840    "state": "configuring",
00:17:23.840    "raid_level": "concat",
00:17:23.840    "superblock": true,
00:17:23.840    "num_base_bdevs": 4,
00:17:23.840    "num_base_bdevs_discovered": 2,
00:17:23.840    "num_base_bdevs_operational": 4,
00:17:23.840    "base_bdevs_list": [
00:17:23.840      {
00:17:23.840        "name": "BaseBdev1",
00:17:23.840        "uuid": "abfb280f-ea67-4d7b-a86f-0f6e69c9a894",
00:17:23.840        "is_configured": true,
00:17:23.840        "data_offset": 2048,
00:17:23.840        "data_size": 63488
00:17:23.840      },
00:17:23.840      {
00:17:23.840        "name": "BaseBdev2",
00:17:23.840        "uuid": "e8f602d6-e51b-445d-95c3-c24563a4360a",
00:17:23.841        "is_configured": true,
00:17:23.841        "data_offset": 2048,
00:17:23.841        "data_size": 63488
00:17:23.841      },
00:17:23.841      {
00:17:23.841        "name": "BaseBdev3",
00:17:23.841        "uuid": "00000000-0000-0000-0000-000000000000",
00:17:23.841        "is_configured": false,
00:17:23.841        "data_offset": 0,
00:17:23.841        "data_size": 0
00:17:23.841      },
00:17:23.841      {
00:17:23.841        "name": "BaseBdev4",
00:17:23.841        "uuid": "00000000-0000-0000-0000-000000000000",
00:17:23.841        "is_configured": false,
00:17:23.841        "data_offset": 0,
00:17:23.841        "data_size": 0
00:17:23.841      }
00:17:23.841    ]
00:17:23.841  }'
00:17:23.841   17:01:16	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:17:23.841   17:01:16	-- common/autotest_common.sh@10 -- # set +x
00:17:24.408   17:01:17	-- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3
00:17:24.666  [2024-11-19 17:01:17.273726] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:17:24.666  BaseBdev3
00:17:24.666   17:01:17	-- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3
00:17:24.666   17:01:17	-- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3
00:17:24.666   17:01:17	-- common/autotest_common.sh@898 -- # local bdev_timeout=
00:17:24.666   17:01:17	-- common/autotest_common.sh@899 -- # local i
00:17:24.666   17:01:17	-- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:17:24.666   17:01:17	-- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:17:24.666   17:01:17	-- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:17:24.924   17:01:17	-- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000
00:17:24.924  [
00:17:24.924    {
00:17:24.924      "name": "BaseBdev3",
00:17:24.924      "aliases": [
00:17:24.924        "a6c7c4d1-73d3-4e89-bd08-0ca3f25ee54e"
00:17:24.924      ],
00:17:24.924      "product_name": "Malloc disk",
00:17:24.924      "block_size": 512,
00:17:24.924      "num_blocks": 65536,
00:17:24.924      "uuid": "a6c7c4d1-73d3-4e89-bd08-0ca3f25ee54e",
00:17:24.924      "assigned_rate_limits": {
00:17:24.924        "rw_ios_per_sec": 0,
00:17:24.924        "rw_mbytes_per_sec": 0,
00:17:24.924        "r_mbytes_per_sec": 0,
00:17:24.924        "w_mbytes_per_sec": 0
00:17:24.924      },
00:17:24.924      "claimed": true,
00:17:24.924      "claim_type": "exclusive_write",
00:17:24.924      "zoned": false,
00:17:24.924      "supported_io_types": {
00:17:24.924        "read": true,
00:17:24.924        "write": true,
00:17:24.924        "unmap": true,
00:17:24.924        "write_zeroes": true,
00:17:24.924        "flush": true,
00:17:24.924        "reset": true,
00:17:24.924        "compare": false,
00:17:24.924        "compare_and_write": false,
00:17:24.924        "abort": true,
00:17:24.924        "nvme_admin": false,
00:17:24.924        "nvme_io": false
00:17:24.924      },
00:17:24.924      "memory_domains": [
00:17:24.924        {
00:17:24.924          "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:17:24.924          "dma_device_type": 2
00:17:24.924        }
00:17:24.924      ],
00:17:24.924      "driver_specific": {}
00:17:24.924    }
00:17:24.924  ]
00:17:24.924   17:01:17	-- common/autotest_common.sh@905 -- # return 0
00:17:24.924   17:01:17	-- bdev/bdev_raid.sh@254 -- # (( i++ ))
00:17:24.924   17:01:17	-- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs ))
00:17:24.924   17:01:17	-- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4
00:17:24.924   17:01:17	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:17:24.924   17:01:17	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:17:24.924   17:01:17	-- bdev/bdev_raid.sh@119 -- # local raid_level=concat
00:17:24.924   17:01:17	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:17:24.924   17:01:17	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:17:24.924   17:01:17	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:17:24.924   17:01:17	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:17:24.924   17:01:17	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:17:24.924   17:01:17	-- bdev/bdev_raid.sh@125 -- # local tmp
00:17:25.182    17:01:17	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:17:25.182    17:01:17	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:17:25.440   17:01:18	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:17:25.440    "name": "Existed_Raid",
00:17:25.440    "uuid": "9e837a32-4643-4278-9e2c-cbf28a2029e1",
00:17:25.440    "strip_size_kb": 64,
00:17:25.440    "state": "configuring",
00:17:25.440    "raid_level": "concat",
00:17:25.440    "superblock": true,
00:17:25.440    "num_base_bdevs": 4,
00:17:25.440    "num_base_bdevs_discovered": 3,
00:17:25.440    "num_base_bdevs_operational": 4,
00:17:25.440    "base_bdevs_list": [
00:17:25.440      {
00:17:25.440        "name": "BaseBdev1",
00:17:25.440        "uuid": "abfb280f-ea67-4d7b-a86f-0f6e69c9a894",
00:17:25.440        "is_configured": true,
00:17:25.440        "data_offset": 2048,
00:17:25.440        "data_size": 63488
00:17:25.440      },
00:17:25.440      {
00:17:25.440        "name": "BaseBdev2",
00:17:25.440        "uuid": "e8f602d6-e51b-445d-95c3-c24563a4360a",
00:17:25.440        "is_configured": true,
00:17:25.440        "data_offset": 2048,
00:17:25.440        "data_size": 63488
00:17:25.440      },
00:17:25.440      {
00:17:25.440        "name": "BaseBdev3",
00:17:25.440        "uuid": "a6c7c4d1-73d3-4e89-bd08-0ca3f25ee54e",
00:17:25.440        "is_configured": true,
00:17:25.440        "data_offset": 2048,
00:17:25.440        "data_size": 63488
00:17:25.440      },
00:17:25.440      {
00:17:25.440        "name": "BaseBdev4",
00:17:25.440        "uuid": "00000000-0000-0000-0000-000000000000",
00:17:25.440        "is_configured": false,
00:17:25.440        "data_offset": 0,
00:17:25.440        "data_size": 0
00:17:25.440      }
00:17:25.440    ]
00:17:25.440  }'
00:17:25.440   17:01:18	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:17:25.440   17:01:18	-- common/autotest_common.sh@10 -- # set +x
00:17:26.006   17:01:18	-- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4
00:17:26.264  [2024-11-19 17:01:18.937254] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed
00:17:26.264  [2024-11-19 17:01:18.937488] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006680
00:17:26.264  [2024-11-19 17:01:18.937501] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512
00:17:26.264  [2024-11-19 17:01:18.937626] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000021f0
00:17:26.264  [2024-11-19 17:01:18.937998] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006680
00:17:26.264  [2024-11-19 17:01:18.938018] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006680
00:17:26.264  [2024-11-19 17:01:18.938177] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:17:26.264  BaseBdev4
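[Annotation: with the fourth member claimed the array finally assembles, and blockcnt 253952 is exactly the four members' data areas concatenated, i.e. 4 x 63488, smaller than the 262144 of the superblock-less run by the 4 x 2048 blocks of per-member metadata. Checking the sum, with values taken from this trace:

    echo $(( 4 * 63488 ))            # 253952, matching "blockcnt 253952, blocklen 512"
    echo $(( 262144 - 4 * 2048 ))    # same figure, derived from the non-sb capacity
]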
00:17:26.264   17:01:18	-- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4
00:17:26.264   17:01:18	-- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4
00:17:26.264   17:01:18	-- common/autotest_common.sh@898 -- # local bdev_timeout=
00:17:26.264   17:01:18	-- common/autotest_common.sh@899 -- # local i
00:17:26.264   17:01:18	-- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:17:26.264   17:01:18	-- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:17:26.264   17:01:18	-- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:17:26.524   17:01:19	-- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000
00:17:26.524  [
00:17:26.524    {
00:17:26.524      "name": "BaseBdev4",
00:17:26.524      "aliases": [
00:17:26.524        "bbe9251e-2534-41fb-9c77-4115fcb8dc23"
00:17:26.524      ],
00:17:26.524      "product_name": "Malloc disk",
00:17:26.524      "block_size": 512,
00:17:26.524      "num_blocks": 65536,
00:17:26.524      "uuid": "bbe9251e-2534-41fb-9c77-4115fcb8dc23",
00:17:26.524      "assigned_rate_limits": {
00:17:26.524        "rw_ios_per_sec": 0,
00:17:26.524        "rw_mbytes_per_sec": 0,
00:17:26.524        "r_mbytes_per_sec": 0,
00:17:26.524        "w_mbytes_per_sec": 0
00:17:26.524      },
00:17:26.524      "claimed": true,
00:17:26.524      "claim_type": "exclusive_write",
00:17:26.524      "zoned": false,
00:17:26.524      "supported_io_types": {
00:17:26.524        "read": true,
00:17:26.524        "write": true,
00:17:26.524        "unmap": true,
00:17:26.524        "write_zeroes": true,
00:17:26.524        "flush": true,
00:17:26.524        "reset": true,
00:17:26.524        "compare": false,
00:17:26.524        "compare_and_write": false,
00:17:26.524        "abort": true,
00:17:26.524        "nvme_admin": false,
00:17:26.524        "nvme_io": false
00:17:26.524      },
00:17:26.524      "memory_domains": [
00:17:26.524        {
00:17:26.524          "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:17:26.524          "dma_device_type": 2
00:17:26.524        }
00:17:26.524      ],
00:17:26.524      "driver_specific": {}
00:17:26.524    }
00:17:26.524  ]
00:17:26.524   17:01:19	-- common/autotest_common.sh@905 -- # return 0
00:17:26.524   17:01:19	-- bdev/bdev_raid.sh@254 -- # (( i++ ))
00:17:26.524   17:01:19	-- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs ))
00:17:26.524   17:01:19	-- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 4
00:17:26.524   17:01:19	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:17:26.524   17:01:19	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:17:26.524   17:01:19	-- bdev/bdev_raid.sh@119 -- # local raid_level=concat
00:17:26.524   17:01:19	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:17:26.524   17:01:19	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:17:26.524   17:01:19	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:17:26.524   17:01:19	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:17:26.524   17:01:19	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:17:26.524   17:01:19	-- bdev/bdev_raid.sh@125 -- # local tmp
00:17:26.524    17:01:19	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:17:26.524    17:01:19	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:17:26.782   17:01:19	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:17:26.782    "name": "Existed_Raid",
00:17:26.782    "uuid": "9e837a32-4643-4278-9e2c-cbf28a2029e1",
00:17:26.782    "strip_size_kb": 64,
00:17:26.782    "state": "online",
00:17:26.782    "raid_level": "concat",
00:17:26.782    "superblock": true,
00:17:26.782    "num_base_bdevs": 4,
00:17:26.782    "num_base_bdevs_discovered": 4,
00:17:26.782    "num_base_bdevs_operational": 4,
00:17:26.782    "base_bdevs_list": [
00:17:26.782      {
00:17:26.782        "name": "BaseBdev1",
00:17:26.782        "uuid": "abfb280f-ea67-4d7b-a86f-0f6e69c9a894",
00:17:26.782        "is_configured": true,
00:17:26.782        "data_offset": 2048,
00:17:26.782        "data_size": 63488
00:17:26.782      },
00:17:26.782      {
00:17:26.782        "name": "BaseBdev2",
00:17:26.782        "uuid": "e8f602d6-e51b-445d-95c3-c24563a4360a",
00:17:26.782        "is_configured": true,
00:17:26.782        "data_offset": 2048,
00:17:26.782        "data_size": 63488
00:17:26.782      },
00:17:26.782      {
00:17:26.782        "name": "BaseBdev3",
00:17:26.782        "uuid": "a6c7c4d1-73d3-4e89-bd08-0ca3f25ee54e",
00:17:26.782        "is_configured": true,
00:17:26.782        "data_offset": 2048,
00:17:26.782        "data_size": 63488
00:17:26.782      },
00:17:26.782      {
00:17:26.782        "name": "BaseBdev4",
00:17:26.782        "uuid": "bbe9251e-2534-41fb-9c77-4115fcb8dc23",
00:17:26.782        "is_configured": true,
00:17:26.782        "data_offset": 2048,
00:17:26.782        "data_size": 63488
00:17:26.782      }
00:17:26.782    ]
00:17:26.782  }'
00:17:26.782   17:01:19	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:17:26.782   17:01:19	-- common/autotest_common.sh@10 -- # set +x
00:17:27.720   17:01:20	-- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1
00:17:27.720  [2024-11-19 17:01:20.485406] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:17:27.720  [2024-11-19 17:01:20.485451] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:17:27.720  [2024-11-19 17:01:20.485554] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:17:27.720   17:01:20	-- bdev/bdev_raid.sh@263 -- # local expected_state
00:17:27.720   17:01:20	-- bdev/bdev_raid.sh@264 -- # has_redundancy concat
00:17:27.720   17:01:20	-- bdev/bdev_raid.sh@195 -- # case $1 in
00:17:27.720   17:01:20	-- bdev/bdev_raid.sh@197 -- # return 1
00:17:27.720   17:01:20	-- bdev/bdev_raid.sh@265 -- # expected_state=offline
00:17:27.720   17:01:20	-- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3
00:17:27.720   17:01:20	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:17:27.720   17:01:20	-- bdev/bdev_raid.sh@118 -- # local expected_state=offline
00:17:27.720   17:01:20	-- bdev/bdev_raid.sh@119 -- # local raid_level=concat
00:17:27.720   17:01:20	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:17:27.720   17:01:20	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:17:27.720   17:01:20	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:17:27.720   17:01:20	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:17:27.720   17:01:20	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:17:27.720   17:01:20	-- bdev/bdev_raid.sh@125 -- # local tmp
00:17:27.720    17:01:20	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:17:27.720    17:01:20	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:17:27.979   17:01:20	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:17:27.979    "name": "Existed_Raid",
00:17:27.979    "uuid": "9e837a32-4643-4278-9e2c-cbf28a2029e1",
00:17:27.979    "strip_size_kb": 64,
00:17:27.979    "state": "offline",
00:17:27.979    "raid_level": "concat",
00:17:27.979    "superblock": true,
00:17:27.979    "num_base_bdevs": 4,
00:17:27.979    "num_base_bdevs_discovered": 3,
00:17:27.979    "num_base_bdevs_operational": 3,
00:17:27.979    "base_bdevs_list": [
00:17:27.979      {
00:17:27.979        "name": null,
00:17:27.979        "uuid": "00000000-0000-0000-0000-000000000000",
00:17:27.979        "is_configured": false,
00:17:27.979        "data_offset": 2048,
00:17:27.979        "data_size": 63488
00:17:27.979      },
00:17:27.979      {
00:17:27.979        "name": "BaseBdev2",
00:17:27.979        "uuid": "e8f602d6-e51b-445d-95c3-c24563a4360a",
00:17:27.979        "is_configured": true,
00:17:27.979        "data_offset": 2048,
00:17:27.979        "data_size": 63488
00:17:27.979      },
00:17:27.979      {
00:17:27.979        "name": "BaseBdev3",
00:17:27.979        "uuid": "a6c7c4d1-73d3-4e89-bd08-0ca3f25ee54e",
00:17:27.979        "is_configured": true,
00:17:27.979        "data_offset": 2048,
00:17:27.979        "data_size": 63488
00:17:27.979      },
00:17:27.979      {
00:17:27.979        "name": "BaseBdev4",
00:17:27.979        "uuid": "bbe9251e-2534-41fb-9c77-4115fcb8dc23",
00:17:27.979        "is_configured": true,
00:17:27.979        "data_offset": 2048,
00:17:27.979        "data_size": 63488
00:17:27.979      }
00:17:27.979    ]
00:17:27.979  }'
00:17:27.979   17:01:20	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:17:27.979   17:01:20	-- common/autotest_common.sh@10 -- # set +x
00:17:28.914   17:01:21	-- bdev/bdev_raid.sh@273 -- # (( i = 1 ))
00:17:28.914   17:01:21	-- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs ))
00:17:28.914    17:01:21	-- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:17:28.914    17:01:21	-- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]'
00:17:28.914   17:01:21	-- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid
00:17:28.914   17:01:21	-- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:17:28.914   17:01:21	-- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2
00:17:29.173  [2024-11-19 17:01:21.865007] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:17:29.173   17:01:21	-- bdev/bdev_raid.sh@273 -- # (( i++ ))
00:17:29.173   17:01:21	-- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs ))
00:17:29.173    17:01:21	-- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]'
00:17:29.173    17:01:21	-- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:17:29.431   17:01:22	-- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid
00:17:29.431   17:01:22	-- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:17:29.431   17:01:22	-- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3
00:17:29.690  [2024-11-19 17:01:22.501569] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:17:29.690   17:01:22	-- bdev/bdev_raid.sh@273 -- # (( i++ ))
00:17:29.690   17:01:22	-- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs ))
00:17:29.690    17:01:22	-- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:17:29.690    17:01:22	-- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]'
00:17:29.948   17:01:22	-- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid
00:17:29.948   17:01:22	-- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:17:29.948   17:01:22	-- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4
00:17:30.206  [2024-11-19 17:01:22.989942] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4
00:17:30.206  [2024-11-19 17:01:22.990008] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state offline
00:17:30.206   17:01:23	-- bdev/bdev_raid.sh@273 -- # (( i++ ))
00:17:30.206   17:01:23	-- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs ))
00:17:30.206    17:01:23	-- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)'
00:17:30.206    17:01:23	-- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:17:30.464   17:01:23	-- bdev/bdev_raid.sh@281 -- # raid_bdev=
00:17:30.464   17:01:23	-- bdev/bdev_raid.sh@282 -- # '[' -n '' ']'
00:17:30.464   17:01:23	-- bdev/bdev_raid.sh@287 -- # killprocess 130359
00:17:30.464   17:01:23	-- common/autotest_common.sh@936 -- # '[' -z 130359 ']'
00:17:30.464   17:01:23	-- common/autotest_common.sh@940 -- # kill -0 130359
00:17:30.464    17:01:23	-- common/autotest_common.sh@941 -- # uname
00:17:30.464   17:01:23	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:17:30.723    17:01:23	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 130359
00:17:30.723   17:01:23	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:17:30.723   17:01:23	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:17:30.723   17:01:23	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 130359'
00:17:30.723  killing process with pid 130359
00:17:30.723   17:01:23	-- common/autotest_common.sh@955 -- # kill 130359
00:17:30.723  [2024-11-19 17:01:23.339941] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:17:30.723  [2024-11-19 17:01:23.340046] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:17:30.723   17:01:23	-- common/autotest_common.sh@960 -- # wait 130359
00:17:30.982   17:01:23	-- bdev/bdev_raid.sh@289 -- # return 0
00:17:30.982  
00:17:30.982  real	0m14.179s
00:17:30.982  user	0m25.529s
00:17:30.982  sys	0m2.391s
00:17:30.982   17:01:23	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:17:30.982   17:01:23	-- common/autotest_common.sh@10 -- # set +x
00:17:30.982  ************************************
00:17:30.982  END TEST raid_state_function_test_sb
00:17:30.982  ************************************
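
raid_state_function_test_sb ends by deleting the remaining base malloc bdevs one at a time: the array stays "offline" through the first removals and is cleaned up once the last base bdev disappears. A minimal reconstruction of that removal loop (bdev names and socket path taken from the log; the loop form is our shorthand):

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    # Delete each discovered base bdev, re-reading the raid bdev name after
    # each step; after BaseBdev4 goes, the raid bdev itself is cleaned up.
    for bdev in BaseBdev2 BaseBdev3 BaseBdev4; do
        $RPC bdev_malloc_delete "$bdev"
        $RPC bdev_raid_get_bdevs all | jq -r '.[0]["name"]'
    done
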
00:17:30.982   17:01:23	-- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test concat 4
00:17:30.982   17:01:23	-- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']'
00:17:30.982   17:01:23	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:17:30.982   17:01:23	-- common/autotest_common.sh@10 -- # set +x
00:17:30.982  ************************************
00:17:30.982  START TEST raid_superblock_test
00:17:30.982  ************************************
00:17:30.982   17:01:23	-- common/autotest_common.sh@1114 -- # raid_superblock_test concat 4
00:17:30.982   17:01:23	-- bdev/bdev_raid.sh@338 -- # local raid_level=concat
00:17:30.982   17:01:23	-- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=4
00:17:30.982   17:01:23	-- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=()
00:17:30.982   17:01:23	-- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc
00:17:30.982   17:01:23	-- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=()
00:17:30.982   17:01:23	-- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt
00:17:30.982   17:01:23	-- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=()
00:17:30.982   17:01:23	-- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid
00:17:30.982   17:01:23	-- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1
00:17:30.982   17:01:23	-- bdev/bdev_raid.sh@344 -- # local strip_size
00:17:30.982   17:01:23	-- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg
00:17:30.982   17:01:23	-- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid
00:17:30.982   17:01:23	-- bdev/bdev_raid.sh@347 -- # local raid_bdev
00:17:30.982   17:01:23	-- bdev/bdev_raid.sh@349 -- # '[' concat '!=' raid1 ']'
00:17:30.982   17:01:23	-- bdev/bdev_raid.sh@350 -- # strip_size=64
00:17:30.982   17:01:23	-- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64'
00:17:30.982   17:01:23	-- bdev/bdev_raid.sh@357 -- # raid_pid=130800
00:17:30.982   17:01:23	-- bdev/bdev_raid.sh@358 -- # waitforlisten 130800 /var/tmp/spdk-raid.sock
00:17:30.982   17:01:23	-- common/autotest_common.sh@829 -- # '[' -z 130800 ']'
00:17:30.982   17:01:23	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock
00:17:30.982   17:01:23	-- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid
00:17:30.982   17:01:23	-- common/autotest_common.sh@834 -- # local max_retries=100
00:17:30.982   17:01:23	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...'
00:17:30.982  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...
00:17:30.982   17:01:23	-- common/autotest_common.sh@838 -- # xtrace_disable
00:17:30.982   17:01:23	-- common/autotest_common.sh@10 -- # set +x
00:17:30.982  [2024-11-19 17:01:23.753846] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:17:30.982  [2024-11-19 17:01:23.754688] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid130800 ]
00:17:31.241  [2024-11-19 17:01:23.912524] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:17:31.241  [2024-11-19 17:01:23.966728] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:17:31.241  [2024-11-19 17:01:24.015222] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
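
raid_superblock_test drives a bare bdev_svc application over a dedicated UNIX socket, started with the command logged above. A sketch of that startup; backgrounding and the pid capture are assumptions inferred from the surrounding waitforlisten/killprocess helpers:

    # Launch the JSON-RPC target with bdev_raid debug logging on its own socket.
    /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc \
        -r /var/tmp/spdk-raid.sock -L bdev_raid &
    raid_pid=$!
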
00:17:32.176   17:01:24	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:17:32.176   17:01:24	-- common/autotest_common.sh@862 -- # return 0
00:17:32.176   17:01:24	-- bdev/bdev_raid.sh@361 -- # (( i = 1 ))
00:17:32.176   17:01:24	-- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs ))
00:17:32.176   17:01:24	-- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1
00:17:32.176   17:01:24	-- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1
00:17:32.176   17:01:24	-- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001
00:17:32.176   17:01:24	-- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc)
00:17:32.176   17:01:24	-- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt)
00:17:32.176   17:01:24	-- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:17:32.176   17:01:24	-- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1
00:17:32.176  malloc1
00:17:32.176   17:01:24	-- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:17:32.434  [2024-11-19 17:01:25.237812] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:17:32.434  [2024-11-19 17:01:25.237933] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:17:32.434  [2024-11-19 17:01:25.237976] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000005a80
00:17:32.434  [2024-11-19 17:01:25.238031] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:17:32.434  [2024-11-19 17:01:25.240778] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:17:32.434  [2024-11-19 17:01:25.240856] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:17:32.434  pt1
00:17:32.434   17:01:25	-- bdev/bdev_raid.sh@361 -- # (( i++ ))
00:17:32.434   17:01:25	-- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs ))
00:17:32.434   17:01:25	-- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2
00:17:32.434   17:01:25	-- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2
00:17:32.434   17:01:25	-- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002
00:17:32.434   17:01:25	-- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc)
00:17:32.434   17:01:25	-- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt)
00:17:32.434   17:01:25	-- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:17:32.434   17:01:25	-- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2
00:17:32.692  malloc2
00:17:32.692   17:01:25	-- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:17:32.950  [2024-11-19 17:01:25.699261] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:17:32.950  [2024-11-19 17:01:25.699359] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:17:32.950  [2024-11-19 17:01:25.699395] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680
00:17:32.950  [2024-11-19 17:01:25.699438] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:17:32.950  [2024-11-19 17:01:25.701972] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:17:32.950  [2024-11-19 17:01:25.702035] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:17:32.950  pt2
00:17:32.950   17:01:25	-- bdev/bdev_raid.sh@361 -- # (( i++ ))
00:17:32.950   17:01:25	-- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs ))
00:17:32.950   17:01:25	-- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3
00:17:32.950   17:01:25	-- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3
00:17:32.950   17:01:25	-- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003
00:17:32.950   17:01:25	-- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc)
00:17:32.950   17:01:25	-- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt)
00:17:32.950   17:01:25	-- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:17:32.950   17:01:25	-- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3
00:17:33.208  malloc3
00:17:33.208   17:01:25	-- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:17:33.466  [2024-11-19 17:01:26.122334] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:17:33.466  [2024-11-19 17:01:26.122430] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:17:33.466  [2024-11-19 17:01:26.122474] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
00:17:33.466  [2024-11-19 17:01:26.122516] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:17:33.466  [2024-11-19 17:01:26.125021] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:17:33.466  [2024-11-19 17:01:26.125085] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:17:33.466  pt3
00:17:33.466   17:01:26	-- bdev/bdev_raid.sh@361 -- # (( i++ ))
00:17:33.466   17:01:26	-- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs ))
00:17:33.466   17:01:26	-- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc4
00:17:33.466   17:01:26	-- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt4
00:17:33.466   17:01:26	-- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004
00:17:33.466   17:01:26	-- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc)
00:17:33.466   17:01:26	-- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt)
00:17:33.466   17:01:26	-- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:17:33.466   17:01:26	-- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4
00:17:33.724  malloc4
00:17:33.724   17:01:26	-- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004
00:17:33.724  [2024-11-19 17:01:26.512047] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4
00:17:33.724  [2024-11-19 17:01:26.512170] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:17:33.724  [2024-11-19 17:01:26.512207] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80
00:17:33.724  [2024-11-19 17:01:26.512268] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:17:33.724  [2024-11-19 17:01:26.514920] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:17:33.724  [2024-11-19 17:01:26.514986] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4
00:17:33.724  pt4
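
The loop above builds each of the four base devices the same way: a 32 MB malloc bdev with 512-byte blocks, wrapped in a passthru bdev carrying a fixed UUID. Condensed into one shell loop (the individual commands are exactly as logged; the loop itself is our shorthand):

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    for i in 1 2 3 4; do
        # 32 MB malloc bdev with a 512-byte block size.
        $RPC bdev_malloc_create 32 512 -b "malloc$i"
        # Passthru wrapper with a deterministic UUID; this is the raid member.
        $RPC bdev_passthru_create -b "malloc$i" -p "pt$i" \
            -u "00000000-0000-0000-0000-00000000000$i"
    done
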
00:17:33.724   17:01:26	-- bdev/bdev_raid.sh@361 -- # (( i++ ))
00:17:33.724   17:01:26	-- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs ))
00:17:33.724   17:01:26	-- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s
00:17:33.983  [2024-11-19 17:01:26.716191] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:17:33.983  [2024-11-19 17:01:26.718507] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:17:33.983  [2024-11-19 17:01:26.718576] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:17:33.983  [2024-11-19 17:01:26.718618] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed
00:17:33.983  [2024-11-19 17:01:26.718833] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008480
00:17:33.983  [2024-11-19 17:01:26.718845] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512
00:17:33.983  [2024-11-19 17:01:26.719047] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460
00:17:33.983  [2024-11-19 17:01:26.719494] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008480
00:17:33.983  [2024-11-19 17:01:26.719515] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008480
00:17:33.983  [2024-11-19 17:01:26.719673] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
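
With the four passthru bdevs registered, the array is assembled in a single call; -z 64 sets the 64 KiB strip and -s requests the on-disk superblock that the rest of this test depends on (RPC shorthand as before):

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    # concat array over pt1..pt4 with a superblock, as invoked above.
    $RPC bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s
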
00:17:33.983   17:01:26	-- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4
00:17:33.983   17:01:26	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:17:33.983   17:01:26	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:17:33.983   17:01:26	-- bdev/bdev_raid.sh@119 -- # local raid_level=concat
00:17:33.983   17:01:26	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:17:33.983   17:01:26	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:17:33.983   17:01:26	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:17:33.983   17:01:26	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:17:33.983   17:01:26	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:17:33.983   17:01:26	-- bdev/bdev_raid.sh@125 -- # local tmp
00:17:33.983    17:01:26	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:17:33.983    17:01:26	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:17:34.242   17:01:26	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:17:34.242    "name": "raid_bdev1",
00:17:34.242    "uuid": "71db6fcc-91b7-47c0-a07e-b024c27e2071",
00:17:34.242    "strip_size_kb": 64,
00:17:34.242    "state": "online",
00:17:34.242    "raid_level": "concat",
00:17:34.242    "superblock": true,
00:17:34.242    "num_base_bdevs": 4,
00:17:34.242    "num_base_bdevs_discovered": 4,
00:17:34.242    "num_base_bdevs_operational": 4,
00:17:34.242    "base_bdevs_list": [
00:17:34.242      {
00:17:34.242        "name": "pt1",
00:17:34.243        "uuid": "a6c6e091-1ad2-5801-97bb-93045f618021",
00:17:34.243        "is_configured": true,
00:17:34.243        "data_offset": 2048,
00:17:34.243        "data_size": 63488
00:17:34.243      },
00:17:34.243      {
00:17:34.243        "name": "pt2",
00:17:34.243        "uuid": "81d3b7fc-7313-585d-ae25-feaa6f75b357",
00:17:34.243        "is_configured": true,
00:17:34.243        "data_offset": 2048,
00:17:34.243        "data_size": 63488
00:17:34.243      },
00:17:34.243      {
00:17:34.243        "name": "pt3",
00:17:34.243        "uuid": "256e17a1-d7a0-59f5-a867-6f3a72b2a99c",
00:17:34.243        "is_configured": true,
00:17:34.243        "data_offset": 2048,
00:17:34.243        "data_size": 63488
00:17:34.243      },
00:17:34.243      {
00:17:34.243        "name": "pt4",
00:17:34.243        "uuid": "ee2e8746-b585-59dc-ab12-966b99ff9dc9",
00:17:34.243        "is_configured": true,
00:17:34.243        "data_offset": 2048,
00:17:34.243        "data_size": 63488
00:17:34.243      }
00:17:34.243    ]
00:17:34.243  }'
00:17:34.243   17:01:26	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:17:34.243   17:01:26	-- common/autotest_common.sh@10 -- # set +x
00:17:34.813    17:01:27	-- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1
00:17:34.813    17:01:27	-- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid'
00:17:35.073  [2024-11-19 17:01:27.720727] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:17:35.073   17:01:27	-- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=71db6fcc-91b7-47c0-a07e-b024c27e2071
00:17:35.073   17:01:27	-- bdev/bdev_raid.sh@380 -- # '[' -z 71db6fcc-91b7-47c0-a07e-b024c27e2071 ']'
00:17:35.073   17:01:27	-- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1
00:17:35.073  [2024-11-19 17:01:27.904404] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:17:35.073  [2024-11-19 17:01:27.904456] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:17:35.073  [2024-11-19 17:01:27.904591] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:17:35.073  [2024-11-19 17:01:27.904695] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:17:35.073  [2024-11-19 17:01:27.904706] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008480 name raid_bdev1, state offline
00:17:35.073    17:01:27	-- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:17:35.073    17:01:27	-- bdev/bdev_raid.sh@386 -- # jq -r '.[]'
00:17:35.332   17:01:28	-- bdev/bdev_raid.sh@386 -- # raid_bdev=
00:17:35.332   17:01:28	-- bdev/bdev_raid.sh@387 -- # '[' -n '' ']'
00:17:35.332   17:01:28	-- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}"
00:17:35.332   17:01:28	-- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1
00:17:35.590   17:01:28	-- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}"
00:17:35.590   17:01:28	-- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2
00:17:35.848   17:01:28	-- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}"
00:17:35.848   17:01:28	-- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3
00:17:36.106   17:01:28	-- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}"
00:17:36.106   17:01:28	-- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4
00:17:36.365    17:01:29	-- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any'
00:17:36.365    17:01:29	-- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs
00:17:36.622   17:01:29	-- bdev/bdev_raid.sh@395 -- # '[' false == true ']'
00:17:36.622   17:01:29	-- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1
00:17:36.622   17:01:29	-- common/autotest_common.sh@650 -- # local es=0
00:17:36.622   17:01:29	-- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1
00:17:36.622   17:01:29	-- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:17:36.622   17:01:29	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:17:36.622    17:01:29	-- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:17:36.622   17:01:29	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:17:36.622    17:01:29	-- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:17:36.622   17:01:29	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:17:36.622   17:01:29	-- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:17:36.622   17:01:29	-- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]]
00:17:36.622   17:01:29	-- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1
00:17:36.880  [2024-11-19 17:01:29.516655] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed
00:17:36.880  [2024-11-19 17:01:29.518976] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
00:17:36.880  [2024-11-19 17:01:29.519030] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed
00:17:36.880  [2024-11-19 17:01:29.519060] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed
00:17:36.880  [2024-11-19 17:01:29.519108] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1
00:17:36.880  [2024-11-19 17:01:29.519213] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2
00:17:36.880  [2024-11-19 17:01:29.519243] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3
00:17:36.880  [2024-11-19 17:01:29.519294] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc4
00:17:36.880  [2024-11-19 17:01:29.519335] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:17:36.880  [2024-11-19 17:01:29.519345] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008a80 name raid_bdev1, state configuring
00:17:36.880  request:
00:17:36.880  {
00:17:36.880    "name": "raid_bdev1",
00:17:36.880    "raid_level": "concat",
00:17:36.880    "base_bdevs": [
00:17:36.880      "malloc1",
00:17:36.880      "malloc2",
00:17:36.880      "malloc3",
00:17:36.880      "malloc4"
00:17:36.880    ],
00:17:36.880    "superblock": false,
00:17:36.880    "strip_size_kb": 64,
00:17:36.880    "method": "bdev_raid_create",
00:17:36.880    "req_id": 1
00:17:36.880  }
00:17:36.880  Got JSON-RPC error response
00:17:36.880  response:
00:17:36.880  {
00:17:36.880    "code": -17,
00:17:36.880    "message": "Failed to create RAID bdev raid_bdev1: File exists"
00:17:36.880  }
00:17:36.880   17:01:29	-- common/autotest_common.sh@653 -- # es=1
00:17:36.880   17:01:29	-- common/autotest_common.sh@661 -- # (( es > 128 ))
00:17:36.880   17:01:29	-- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:17:36.880   17:01:29	-- common/autotest_common.sh@677 -- # (( !es == 0 ))
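
The NOT-wrapped call above is the negative check: each malloc bdev still carries the superblock written for raid_bdev1, so creating a new array directly on top of them is rejected with JSON-RPC error -17 ("File exists"). A plain-shell rendering of what that wrapper verifies:

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    # This create must fail: the base bdevs already hold a raid superblock.
    if $RPC bdev_raid_create -z 64 -r concat \
            -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1; then
        echo "unexpected success: create should have been rejected" >&2
        exit 1
    fi
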
00:17:36.880    17:01:29	-- bdev/bdev_raid.sh@403 -- # jq -r '.[]'
00:17:36.880    17:01:29	-- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:17:37.137   17:01:29	-- bdev/bdev_raid.sh@403 -- # raid_bdev=
00:17:37.137   17:01:29	-- bdev/bdev_raid.sh@404 -- # '[' -n '' ']'
00:17:37.137   17:01:29	-- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:17:37.395  [2024-11-19 17:01:30.024683] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:17:37.395  [2024-11-19 17:01:30.024794] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:17:37.395  [2024-11-19 17:01:30.024831] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080
00:17:37.395  [2024-11-19 17:01:30.024859] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:17:37.395  [2024-11-19 17:01:30.027592] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:17:37.395  [2024-11-19 17:01:30.027681] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:17:37.395  [2024-11-19 17:01:30.027780] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1
00:17:37.395  [2024-11-19 17:01:30.027854] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:17:37.395  pt1
00:17:37.395   17:01:30	-- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4
00:17:37.395   17:01:30	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:17:37.395   17:01:30	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:17:37.395   17:01:30	-- bdev/bdev_raid.sh@119 -- # local raid_level=concat
00:17:37.395   17:01:30	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:17:37.395   17:01:30	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:17:37.395   17:01:30	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:17:37.395   17:01:30	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:17:37.395   17:01:30	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:17:37.395   17:01:30	-- bdev/bdev_raid.sh@125 -- # local tmp
00:17:37.395    17:01:30	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:17:37.395    17:01:30	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:17:37.652   17:01:30	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:17:37.652    "name": "raid_bdev1",
00:17:37.652    "uuid": "71db6fcc-91b7-47c0-a07e-b024c27e2071",
00:17:37.652    "strip_size_kb": 64,
00:17:37.652    "state": "configuring",
00:17:37.652    "raid_level": "concat",
00:17:37.652    "superblock": true,
00:17:37.652    "num_base_bdevs": 4,
00:17:37.652    "num_base_bdevs_discovered": 1,
00:17:37.652    "num_base_bdevs_operational": 4,
00:17:37.652    "base_bdevs_list": [
00:17:37.652      {
00:17:37.652        "name": "pt1",
00:17:37.652        "uuid": "a6c6e091-1ad2-5801-97bb-93045f618021",
00:17:37.652        "is_configured": true,
00:17:37.652        "data_offset": 2048,
00:17:37.652        "data_size": 63488
00:17:37.652      },
00:17:37.652      {
00:17:37.652        "name": null,
00:17:37.652        "uuid": "81d3b7fc-7313-585d-ae25-feaa6f75b357",
00:17:37.652        "is_configured": false,
00:17:37.652        "data_offset": 2048,
00:17:37.652        "data_size": 63488
00:17:37.652      },
00:17:37.652      {
00:17:37.652        "name": null,
00:17:37.652        "uuid": "256e17a1-d7a0-59f5-a867-6f3a72b2a99c",
00:17:37.652        "is_configured": false,
00:17:37.652        "data_offset": 2048,
00:17:37.652        "data_size": 63488
00:17:37.652      },
00:17:37.652      {
00:17:37.652        "name": null,
00:17:37.652        "uuid": "ee2e8746-b585-59dc-ab12-966b99ff9dc9",
00:17:37.652        "is_configured": false,
00:17:37.652        "data_offset": 2048,
00:17:37.652        "data_size": 63488
00:17:37.652      }
00:17:37.652    ]
00:17:37.652  }'
00:17:37.652   17:01:30	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:17:37.652   17:01:30	-- common/autotest_common.sh@10 -- # set +x
00:17:38.021   17:01:30	-- bdev/bdev_raid.sh@414 -- # '[' 4 -gt 2 ']'
00:17:38.021   17:01:30	-- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:17:38.277  [2024-11-19 17:01:31.100954] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:17:38.278  [2024-11-19 17:01:31.101066] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:17:38.278  [2024-11-19 17:01:31.101111] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980
00:17:38.278  [2024-11-19 17:01:31.101133] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:17:38.278  [2024-11-19 17:01:31.101569] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:17:38.278  [2024-11-19 17:01:31.101611] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:17:38.278  [2024-11-19 17:01:31.101698] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2
00:17:38.278  [2024-11-19 17:01:31.101720] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:17:38.278  pt2
00:17:38.278   17:01:31	-- bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2
00:17:38.535  [2024-11-19 17:01:31.312942] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2
00:17:38.535   17:01:31	-- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4
00:17:38.535   17:01:31	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:17:38.535   17:01:31	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:17:38.535   17:01:31	-- bdev/bdev_raid.sh@119 -- # local raid_level=concat
00:17:38.535   17:01:31	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:17:38.535   17:01:31	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:17:38.535   17:01:31	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:17:38.535   17:01:31	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:17:38.535   17:01:31	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:17:38.535   17:01:31	-- bdev/bdev_raid.sh@125 -- # local tmp
00:17:38.535    17:01:31	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:17:38.535    17:01:31	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:17:38.792   17:01:31	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:17:38.792    "name": "raid_bdev1",
00:17:38.792    "uuid": "71db6fcc-91b7-47c0-a07e-b024c27e2071",
00:17:38.792    "strip_size_kb": 64,
00:17:38.792    "state": "configuring",
00:17:38.792    "raid_level": "concat",
00:17:38.792    "superblock": true,
00:17:38.792    "num_base_bdevs": 4,
00:17:38.792    "num_base_bdevs_discovered": 1,
00:17:38.792    "num_base_bdevs_operational": 4,
00:17:38.792    "base_bdevs_list": [
00:17:38.792      {
00:17:38.792        "name": "pt1",
00:17:38.792        "uuid": "a6c6e091-1ad2-5801-97bb-93045f618021",
00:17:38.792        "is_configured": true,
00:17:38.792        "data_offset": 2048,
00:17:38.792        "data_size": 63488
00:17:38.792      },
00:17:38.792      {
00:17:38.792        "name": null,
00:17:38.792        "uuid": "81d3b7fc-7313-585d-ae25-feaa6f75b357",
00:17:38.792        "is_configured": false,
00:17:38.792        "data_offset": 2048,
00:17:38.792        "data_size": 63488
00:17:38.792      },
00:17:38.792      {
00:17:38.792        "name": null,
00:17:38.792        "uuid": "256e17a1-d7a0-59f5-a867-6f3a72b2a99c",
00:17:38.792        "is_configured": false,
00:17:38.793        "data_offset": 2048,
00:17:38.793        "data_size": 63488
00:17:38.793      },
00:17:38.793      {
00:17:38.793        "name": null,
00:17:38.793        "uuid": "ee2e8746-b585-59dc-ab12-966b99ff9dc9",
00:17:38.793        "is_configured": false,
00:17:38.793        "data_offset": 2048,
00:17:38.793        "data_size": 63488
00:17:38.793      }
00:17:38.793    ]
00:17:38.793  }'
00:17:38.793   17:01:31	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:17:38.793   17:01:31	-- common/autotest_common.sh@10 -- # set +x
00:17:39.357   17:01:32	-- bdev/bdev_raid.sh@422 -- # (( i = 1 ))
00:17:39.357   17:01:32	-- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs ))
00:17:39.357   17:01:32	-- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:17:39.746  [2024-11-19 17:01:32.361212] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:17:39.746  [2024-11-19 17:01:32.361317] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:17:39.746  [2024-11-19 17:01:32.361358] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80
00:17:39.746  [2024-11-19 17:01:32.361383] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:17:39.746  [2024-11-19 17:01:32.361843] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:17:39.746  [2024-11-19 17:01:32.361903] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:17:39.746  [2024-11-19 17:01:32.361988] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2
00:17:39.746  [2024-11-19 17:01:32.362011] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:17:39.746  pt2
00:17:39.746   17:01:32	-- bdev/bdev_raid.sh@422 -- # (( i++ ))
00:17:39.746   17:01:32	-- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs ))
00:17:39.746   17:01:32	-- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:17:40.004  [2024-11-19 17:01:32.585280] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:17:40.004  [2024-11-19 17:01:32.585401] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:17:40.004  [2024-11-19 17:01:32.585438] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80
00:17:40.004  [2024-11-19 17:01:32.585468] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:17:40.004  [2024-11-19 17:01:32.585908] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:17:40.004  [2024-11-19 17:01:32.585968] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:17:40.004  [2024-11-19 17:01:32.586048] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3
00:17:40.004  [2024-11-19 17:01:32.586070] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:17:40.004  pt3
00:17:40.004   17:01:32	-- bdev/bdev_raid.sh@422 -- # (( i++ ))
00:17:40.004   17:01:32	-- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs ))
00:17:40.004   17:01:32	-- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004
00:17:40.004  [2024-11-19 17:01:32.849315] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4
00:17:40.004  [2024-11-19 17:01:32.849413] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:17:40.004  [2024-11-19 17:01:32.849465] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280
00:17:40.004  [2024-11-19 17:01:32.849494] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:17:40.004  [2024-11-19 17:01:32.849942] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:17:40.004  [2024-11-19 17:01:32.849993] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4
00:17:40.004  [2024-11-19 17:01:32.850069] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4
00:17:40.004  [2024-11-19 17:01:32.850091] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed
00:17:40.004  [2024-11-19 17:01:32.850210] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009680
00:17:40.004  [2024-11-19 17:01:32.850220] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512
00:17:40.004  [2024-11-19 17:01:32.850297] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002940
00:17:40.004  [2024-11-19 17:01:32.850617] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009680
00:17:40.004  [2024-11-19 17:01:32.850639] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009680
00:17:40.004  [2024-11-19 17:01:32.850738] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:17:40.004  pt4
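
Note that no second bdev_raid_create is issued here: as each passthru bdev is recreated, its superblock is examined and the member is claimed, and once pt4 registers, the array is brought back online automatically (the "raid bdev is created with name raid_bdev1" DEBUG lines above). Reassembly therefore reduces to recreating the members, e.g.:

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    # Recreating the wrappers is enough; examine finds their superblocks and
    # reassembles raid_bdev1 without an explicit create call.
    for i in 2 3 4; do
        $RPC bdev_passthru_create -b "malloc$i" -p "pt$i" \
            -u "00000000-0000-0000-0000-00000000000$i"
    done
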
00:17:40.262   17:01:32	-- bdev/bdev_raid.sh@422 -- # (( i++ ))
00:17:40.262   17:01:32	-- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs ))
00:17:40.262   17:01:32	-- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4
00:17:40.262   17:01:32	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:17:40.262   17:01:32	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:17:40.262   17:01:32	-- bdev/bdev_raid.sh@119 -- # local raid_level=concat
00:17:40.262   17:01:32	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:17:40.262   17:01:32	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:17:40.262   17:01:32	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:17:40.262   17:01:32	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:17:40.262   17:01:32	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:17:40.262   17:01:32	-- bdev/bdev_raid.sh@125 -- # local tmp
00:17:40.262    17:01:32	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:17:40.262    17:01:32	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:17:40.262   17:01:33	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:17:40.262    "name": "raid_bdev1",
00:17:40.262    "uuid": "71db6fcc-91b7-47c0-a07e-b024c27e2071",
00:17:40.262    "strip_size_kb": 64,
00:17:40.262    "state": "online",
00:17:40.262    "raid_level": "concat",
00:17:40.262    "superblock": true,
00:17:40.262    "num_base_bdevs": 4,
00:17:40.262    "num_base_bdevs_discovered": 4,
00:17:40.262    "num_base_bdevs_operational": 4,
00:17:40.262    "base_bdevs_list": [
00:17:40.262      {
00:17:40.262        "name": "pt1",
00:17:40.262        "uuid": "a6c6e091-1ad2-5801-97bb-93045f618021",
00:17:40.262        "is_configured": true,
00:17:40.262        "data_offset": 2048,
00:17:40.262        "data_size": 63488
00:17:40.262      },
00:17:40.262      {
00:17:40.262        "name": "pt2",
00:17:40.262        "uuid": "81d3b7fc-7313-585d-ae25-feaa6f75b357",
00:17:40.262        "is_configured": true,
00:17:40.262        "data_offset": 2048,
00:17:40.262        "data_size": 63488
00:17:40.262      },
00:17:40.262      {
00:17:40.263        "name": "pt3",
00:17:40.263        "uuid": "256e17a1-d7a0-59f5-a867-6f3a72b2a99c",
00:17:40.263        "is_configured": true,
00:17:40.263        "data_offset": 2048,
00:17:40.263        "data_size": 63488
00:17:40.263      },
00:17:40.263      {
00:17:40.263        "name": "pt4",
00:17:40.263        "uuid": "ee2e8746-b585-59dc-ab12-966b99ff9dc9",
00:17:40.263        "is_configured": true,
00:17:40.263        "data_offset": 2048,
00:17:40.263        "data_size": 63488
00:17:40.263      }
00:17:40.263    ]
00:17:40.263  }'
00:17:40.263   17:01:33	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:17:40.263   17:01:33	-- common/autotest_common.sh@10 -- # set +x
00:17:40.831    17:01:33	-- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1
00:17:40.831    17:01:33	-- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid'
00:17:41.090  [2024-11-19 17:01:33.811644] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:17:41.090   17:01:33	-- bdev/bdev_raid.sh@430 -- # '[' 71db6fcc-91b7-47c0-a07e-b024c27e2071 '!=' 71db6fcc-91b7-47c0-a07e-b024c27e2071 ']'
00:17:41.090   17:01:33	-- bdev/bdev_raid.sh@434 -- # has_redundancy concat
00:17:41.090   17:01:33	-- bdev/bdev_raid.sh@195 -- # case $1 in
00:17:41.090   17:01:33	-- bdev/bdev_raid.sh@197 -- # return 1
00:17:41.090   17:01:33	-- bdev/bdev_raid.sh@511 -- # killprocess 130800
00:17:41.090   17:01:33	-- common/autotest_common.sh@936 -- # '[' -z 130800 ']'
00:17:41.090   17:01:33	-- common/autotest_common.sh@940 -- # kill -0 130800
00:17:41.090    17:01:33	-- common/autotest_common.sh@941 -- # uname
00:17:41.090   17:01:33	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:17:41.090    17:01:33	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 130800
00:17:41.090   17:01:33	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:17:41.090  killing process with pid 130800
00:17:41.090   17:01:33	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:17:41.090   17:01:33	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 130800'
00:17:41.090   17:01:33	-- common/autotest_common.sh@955 -- # kill 130800
00:17:41.090  [2024-11-19 17:01:33.861555] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:17:41.090   17:01:33	-- common/autotest_common.sh@960 -- # wait 130800
00:17:41.090  [2024-11-19 17:01:33.861651] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:17:41.090  [2024-11-19 17:01:33.861729] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:17:41.090  [2024-11-19 17:01:33.861738] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009680 name raid_bdev1, state offline
00:17:41.090  [2024-11-19 17:01:33.942608] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:17:41.660   17:01:34	-- bdev/bdev_raid.sh@513 -- # return 0
00:17:41.660  
00:17:41.660  real	0m10.643s
00:17:41.660  user	0m18.675s
00:17:41.660  sys	0m1.790s
00:17:41.660   17:01:34	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:17:41.660  ************************************
00:17:41.660  END TEST raid_superblock_test
00:17:41.660  ************************************
00:17:41.660   17:01:34	-- common/autotest_common.sh@10 -- # set +x
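
Teardown at the end of each sub-test follows the killprocess pattern visible above: signal the bdev_svc pid, then wait on it so the exit status is collected. As plain shell (the pid variable is our stand-in for the logged pid):

    # Stop the RPC target and reap it; mirrors 'kill 130800' / 'wait 130800'.
    kill "$raid_pid"
    wait "$raid_pid"
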
00:17:41.660   17:01:34	-- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1
00:17:41.660   17:01:34	-- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false
00:17:41.660   17:01:34	-- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']'
00:17:41.660   17:01:34	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:17:41.660   17:01:34	-- common/autotest_common.sh@10 -- # set +x
00:17:41.660  ************************************
00:17:41.660  START TEST raid_state_function_test
00:17:41.660  ************************************
00:17:41.660   17:01:34	-- common/autotest_common.sh@1114 -- # raid_state_function_test raid1 4 false
00:17:41.660   17:01:34	-- bdev/bdev_raid.sh@202 -- # local raid_level=raid1
00:17:41.660   17:01:34	-- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4
00:17:41.660   17:01:34	-- bdev/bdev_raid.sh@204 -- # local superblock=false
00:17:41.660   17:01:34	-- bdev/bdev_raid.sh@205 -- # local raid_bdev
00:17:41.660    17:01:34	-- bdev/bdev_raid.sh@206 -- # (( i = 1 ))
00:17:41.660    17:01:34	-- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:17:41.660    17:01:34	-- bdev/bdev_raid.sh@206 -- # echo BaseBdev1
00:17:41.660    17:01:34	-- bdev/bdev_raid.sh@206 -- # (( i++ ))
00:17:41.660    17:01:34	-- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:17:41.660    17:01:34	-- bdev/bdev_raid.sh@206 -- # echo BaseBdev2
00:17:41.660    17:01:34	-- bdev/bdev_raid.sh@206 -- # (( i++ ))
00:17:41.660    17:01:34	-- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:17:41.660    17:01:34	-- bdev/bdev_raid.sh@206 -- # echo BaseBdev3
00:17:41.660    17:01:34	-- bdev/bdev_raid.sh@206 -- # (( i++ ))
00:17:41.660    17:01:34	-- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:17:41.660    17:01:34	-- bdev/bdev_raid.sh@206 -- # echo BaseBdev4
00:17:41.660    17:01:34	-- bdev/bdev_raid.sh@206 -- # (( i++ ))
00:17:41.660    17:01:34	-- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:17:41.660   17:01:34	-- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4')
00:17:41.660   17:01:34	-- bdev/bdev_raid.sh@206 -- # local base_bdevs
00:17:41.660   17:01:34	-- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid
00:17:41.660   17:01:34	-- bdev/bdev_raid.sh@208 -- # local strip_size
00:17:41.660   17:01:34	-- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg
00:17:41.660   17:01:34	-- bdev/bdev_raid.sh@210 -- # local superblock_create_arg
00:17:41.660   17:01:34	-- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']'
00:17:41.660   17:01:34	-- bdev/bdev_raid.sh@216 -- # strip_size=0
00:17:41.660   17:01:34	-- bdev/bdev_raid.sh@219 -- # '[' false = true ']'
00:17:41.660   17:01:34	-- bdev/bdev_raid.sh@222 -- # superblock_create_arg=
00:17:41.660   17:01:34	-- bdev/bdev_raid.sh@226 -- # raid_pid=131123
00:17:41.660  Process raid pid: 131123
00:17:41.660   17:01:34	-- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 131123'
00:17:41.660   17:01:34	-- bdev/bdev_raid.sh@228 -- # waitforlisten 131123 /var/tmp/spdk-raid.sock
00:17:41.660   17:01:34	-- common/autotest_common.sh@829 -- # '[' -z 131123 ']'
00:17:41.660   17:01:34	-- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid
00:17:41.660   17:01:34	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock
00:17:41.660   17:01:34	-- common/autotest_common.sh@834 -- # local max_retries=100
00:17:41.660  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...
00:17:41.660   17:01:34	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...'
00:17:41.660   17:01:34	-- common/autotest_common.sh@838 -- # xtrace_disable
00:17:41.660   17:01:34	-- common/autotest_common.sh@10 -- # set +x
00:17:41.660  [2024-11-19 17:01:34.457913] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:17:41.660  [2024-11-19 17:01:34.458124] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:17:41.918  [2024-11-19 17:01:34.601346] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:17:41.918  [2024-11-19 17:01:34.659430] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:17:41.918  [2024-11-19 17:01:34.708084] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:17:42.854   17:01:35	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:17:42.854   17:01:35	-- common/autotest_common.sh@862 -- # return 0
00:17:42.854   17:01:35	-- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
00:17:42.854  [2024-11-19 17:01:35.629506] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:17:42.854  [2024-11-19 17:01:35.629816] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:17:42.854  [2024-11-19 17:01:35.629912] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:17:42.854  [2024-11-19 17:01:35.629970] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:17:42.854  [2024-11-19 17:01:35.629998] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:17:42.854  [2024-11-19 17:01:35.630127] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:17:42.854  [2024-11-19 17:01:35.630161] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:17:42.854  [2024-11-19 17:01:35.630209] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:17:42.854   17:01:35	-- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4
00:17:42.854   17:01:35	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:17:42.854   17:01:35	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:17:42.854   17:01:35	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:17:42.854   17:01:35	-- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:17:42.854   17:01:35	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:17:42.854   17:01:35	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:17:42.854   17:01:35	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:17:42.854   17:01:35	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:17:42.854   17:01:35	-- bdev/bdev_raid.sh@125 -- # local tmp
00:17:42.854    17:01:35	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:17:42.854    17:01:35	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:17:43.113   17:01:35	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:17:43.113    "name": "Existed_Raid",
00:17:43.113    "uuid": "00000000-0000-0000-0000-000000000000",
00:17:43.113    "strip_size_kb": 0,
00:17:43.113    "state": "configuring",
00:17:43.113    "raid_level": "raid1",
00:17:43.113    "superblock": false,
00:17:43.113    "num_base_bdevs": 4,
00:17:43.113    "num_base_bdevs_discovered": 0,
00:17:43.113    "num_base_bdevs_operational": 4,
00:17:43.113    "base_bdevs_list": [
00:17:43.113      {
00:17:43.113        "name": "BaseBdev1",
00:17:43.113        "uuid": "00000000-0000-0000-0000-000000000000",
00:17:43.114        "is_configured": false,
00:17:43.114        "data_offset": 0,
00:17:43.114        "data_size": 0
00:17:43.114      },
00:17:43.114      {
00:17:43.114        "name": "BaseBdev2",
00:17:43.114        "uuid": "00000000-0000-0000-0000-000000000000",
00:17:43.114        "is_configured": false,
00:17:43.114        "data_offset": 0,
00:17:43.114        "data_size": 0
00:17:43.114      },
00:17:43.114      {
00:17:43.114        "name": "BaseBdev3",
00:17:43.114        "uuid": "00000000-0000-0000-0000-000000000000",
00:17:43.114        "is_configured": false,
00:17:43.114        "data_offset": 0,
00:17:43.114        "data_size": 0
00:17:43.114      },
00:17:43.114      {
00:17:43.114        "name": "BaseBdev4",
00:17:43.114        "uuid": "00000000-0000-0000-0000-000000000000",
00:17:43.114        "is_configured": false,
00:17:43.114        "data_offset": 0,
00:17:43.114        "data_size": 0
00:17:43.114      }
00:17:43.114    ]
00:17:43.114  }'
00:17:43.114   17:01:35	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:17:43.114   17:01:35	-- common/autotest_common.sh@10 -- # set +x
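
raid_state_function_test starts by registering the array before any of its members exist: the create call is accepted and Existed_Raid sits in the "configuring" state with zero discovered base bdevs, as the dump above shows. The call itself (raid1, hence no strip-size argument; RPC shorthand as before):

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    $RPC bdev_raid_create -r raid1 \
        -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
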
00:17:43.682   17:01:36	-- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid
00:17:43.940  [2024-11-19 17:01:36.691176] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:17:43.940  [2024-11-19 17:01:36.691500] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005480 name Existed_Raid, state configuring
00:17:43.940   17:01:36	-- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
00:17:44.198  [2024-11-19 17:01:36.879274] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:17:44.198  [2024-11-19 17:01:36.879600] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:17:44.198  [2024-11-19 17:01:36.879698] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:17:44.199  [2024-11-19 17:01:36.879765] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:17:44.199  [2024-11-19 17:01:36.879797] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:17:44.199  [2024-11-19 17:01:36.879885] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:17:44.199  [2024-11-19 17:01:36.879921] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:17:44.199  [2024-11-19 17:01:36.879972] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:17:44.199   17:01:36	-- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1
00:17:44.459  [2024-11-19 17:01:37.162482] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:17:44.459  BaseBdev1
00:17:44.459   17:01:37	-- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1
00:17:44.459   17:01:37	-- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1
00:17:44.459   17:01:37	-- common/autotest_common.sh@898 -- # local bdev_timeout=
00:17:44.459   17:01:37	-- common/autotest_common.sh@899 -- # local i
00:17:44.459   17:01:37	-- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:17:44.459   17:01:37	-- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:17:44.459   17:01:37	-- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:17:44.721   17:01:37	-- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000
00:17:44.983  [
00:17:44.984    {
00:17:44.984      "name": "BaseBdev1",
00:17:44.984      "aliases": [
00:17:44.984        "79ff81ae-18d5-4c39-98ad-07fefad77956"
00:17:44.984      ],
00:17:44.984      "product_name": "Malloc disk",
00:17:44.984      "block_size": 512,
00:17:44.984      "num_blocks": 65536,
00:17:44.984      "uuid": "79ff81ae-18d5-4c39-98ad-07fefad77956",
00:17:44.984      "assigned_rate_limits": {
00:17:44.984        "rw_ios_per_sec": 0,
00:17:44.984        "rw_mbytes_per_sec": 0,
00:17:44.984        "r_mbytes_per_sec": 0,
00:17:44.984        "w_mbytes_per_sec": 0
00:17:44.984      },
00:17:44.984      "claimed": true,
00:17:44.984      "claim_type": "exclusive_write",
00:17:44.984      "zoned": false,
00:17:44.984      "supported_io_types": {
00:17:44.984        "read": true,
00:17:44.984        "write": true,
00:17:44.984        "unmap": true,
00:17:44.984        "write_zeroes": true,
00:17:44.984        "flush": true,
00:17:44.984        "reset": true,
00:17:44.984        "compare": false,
00:17:44.984        "compare_and_write": false,
00:17:44.984        "abort": true,
00:17:44.984        "nvme_admin": false,
00:17:44.984        "nvme_io": false
00:17:44.984      },
00:17:44.984      "memory_domains": [
00:17:44.984        {
00:17:44.984          "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:17:44.984          "dma_device_type": 2
00:17:44.984        }
00:17:44.984      ],
00:17:44.984      "driver_specific": {}
00:17:44.984    }
00:17:44.984  ]
00:17:44.984   17:01:37	-- common/autotest_common.sh@905 -- # return 0
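The waitforbdev helper traced at @897–@905 reduces to: default the timeout to 2000 ms, let examine-on-open finish, then ask the target for the bdev with that timeout. A minimal re-creation of the traced behavior (a sketch, not the exact common/autotest_common.sh source):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    waitforbdev() {
        local bdev_name=$1
        local bdev_timeout=${2:-2000}   # ms; the default used throughout this run
        # block until all registered examine callbacks have completed
        "$RPC" -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
        # -t makes the target wait up to bdev_timeout ms for the bdev to appear
        "$RPC" -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b "$bdev_name" -t "$bdev_timeout" > /dev/null
    }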
00:17:44.984   17:01:37	-- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4
00:17:44.984   17:01:37	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:17:44.984   17:01:37	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:17:44.984   17:01:37	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:17:44.984   17:01:37	-- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:17:44.984   17:01:37	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:17:44.984   17:01:37	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:17:44.984   17:01:37	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:17:44.984   17:01:37	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:17:44.984   17:01:37	-- bdev/bdev_raid.sh@125 -- # local tmp
00:17:44.984    17:01:37	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:17:44.984    17:01:37	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:17:45.243   17:01:38	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:17:45.243    "name": "Existed_Raid",
00:17:45.243    "uuid": "00000000-0000-0000-0000-000000000000",
00:17:45.243    "strip_size_kb": 0,
00:17:45.243    "state": "configuring",
00:17:45.243    "raid_level": "raid1",
00:17:45.243    "superblock": false,
00:17:45.243    "num_base_bdevs": 4,
00:17:45.243    "num_base_bdevs_discovered": 1,
00:17:45.243    "num_base_bdevs_operational": 4,
00:17:45.243    "base_bdevs_list": [
00:17:45.243      {
00:17:45.243        "name": "BaseBdev1",
00:17:45.243        "uuid": "79ff81ae-18d5-4c39-98ad-07fefad77956",
00:17:45.243        "is_configured": true,
00:17:45.243        "data_offset": 0,
00:17:45.243        "data_size": 65536
00:17:45.243      },
00:17:45.243      {
00:17:45.243        "name": "BaseBdev2",
00:17:45.243        "uuid": "00000000-0000-0000-0000-000000000000",
00:17:45.243        "is_configured": false,
00:17:45.243        "data_offset": 0,
00:17:45.243        "data_size": 0
00:17:45.243      },
00:17:45.243      {
00:17:45.243        "name": "BaseBdev3",
00:17:45.243        "uuid": "00000000-0000-0000-0000-000000000000",
00:17:45.243        "is_configured": false,
00:17:45.243        "data_offset": 0,
00:17:45.243        "data_size": 0
00:17:45.243      },
00:17:45.243      {
00:17:45.243        "name": "BaseBdev4",
00:17:45.243        "uuid": "00000000-0000-0000-0000-000000000000",
00:17:45.243        "is_configured": false,
00:17:45.243        "data_offset": 0,
00:17:45.243        "data_size": 0
00:17:45.243      }
00:17:45.243    ]
00:17:45.243  }'
00:17:45.243   17:01:38	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:17:45.243   17:01:38	-- common/autotest_common.sh@10 -- # set +x
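verify_raid_bdev_state (@117–@129 above) pulls the raid's JSON out of bdev_raid_get_bdevs with a jq select and then compares the fields against the expected values passed in. The jq filter is the one shown at @127; the comparisons below are an illustrative reconstruction of what the helper asserts, not its exact code:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    info=$("$RPC" -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all |
           jq -r '.[] | select(.name == "Existed_Raid")')
    [[ $(jq -r '.state'      <<< "$info") == configuring ]]
    [[ $(jq -r '.raid_level' <<< "$info") == raid1 ]]
    [[ $(jq -r '.num_base_bdevs_operational' <<< "$info") == 4 ]]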
00:17:45.811   17:01:38	-- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid
00:17:46.071  [2024-11-19 17:01:38.875476] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:17:46.071  [2024-11-19 17:01:38.875751] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005780 name Existed_Raid, state configuring
00:17:46.071   17:01:38	-- bdev/bdev_raid.sh@244 -- # '[' false = true ']'
00:17:46.071   17:01:38	-- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
00:17:46.330  [2024-11-19 17:01:39.095603] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:17:46.330  [2024-11-19 17:01:39.098122] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:17:46.330  [2024-11-19 17:01:39.098337] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:17:46.330  [2024-11-19 17:01:39.098431] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:17:46.330  [2024-11-19 17:01:39.098490] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:17:46.330  [2024-11-19 17:01:39.098520] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:17:46.330  [2024-11-19 17:01:39.098558] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:17:46.330   17:01:39	-- bdev/bdev_raid.sh@254 -- # (( i = 1 ))
00:17:46.330   17:01:39	-- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs ))
00:17:46.330   17:01:39	-- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4
00:17:46.330   17:01:39	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:17:46.330   17:01:39	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:17:46.330   17:01:39	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:17:46.330   17:01:39	-- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:17:46.330   17:01:39	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:17:46.330   17:01:39	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:17:46.330   17:01:39	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:17:46.330   17:01:39	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:17:46.330   17:01:39	-- bdev/bdev_raid.sh@125 -- # local tmp
00:17:46.330    17:01:39	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:17:46.330    17:01:39	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:17:46.588   17:01:39	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:17:46.588    "name": "Existed_Raid",
00:17:46.588    "uuid": "00000000-0000-0000-0000-000000000000",
00:17:46.588    "strip_size_kb": 0,
00:17:46.588    "state": "configuring",
00:17:46.588    "raid_level": "raid1",
00:17:46.588    "superblock": false,
00:17:46.588    "num_base_bdevs": 4,
00:17:46.588    "num_base_bdevs_discovered": 1,
00:17:46.588    "num_base_bdevs_operational": 4,
00:17:46.588    "base_bdevs_list": [
00:17:46.588      {
00:17:46.588        "name": "BaseBdev1",
00:17:46.588        "uuid": "79ff81ae-18d5-4c39-98ad-07fefad77956",
00:17:46.588        "is_configured": true,
00:17:46.588        "data_offset": 0,
00:17:46.588        "data_size": 65536
00:17:46.588      },
00:17:46.588      {
00:17:46.588        "name": "BaseBdev2",
00:17:46.588        "uuid": "00000000-0000-0000-0000-000000000000",
00:17:46.588        "is_configured": false,
00:17:46.588        "data_offset": 0,
00:17:46.588        "data_size": 0
00:17:46.588      },
00:17:46.588      {
00:17:46.588        "name": "BaseBdev3",
00:17:46.588        "uuid": "00000000-0000-0000-0000-000000000000",
00:17:46.588        "is_configured": false,
00:17:46.588        "data_offset": 0,
00:17:46.588        "data_size": 0
00:17:46.588      },
00:17:46.588      {
00:17:46.588        "name": "BaseBdev4",
00:17:46.588        "uuid": "00000000-0000-0000-0000-000000000000",
00:17:46.588        "is_configured": false,
00:17:46.588        "data_offset": 0,
00:17:46.588        "data_size": 0
00:17:46.588      }
00:17:46.588    ]
00:17:46.588  }'
00:17:46.588   17:01:39	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:17:46.588   17:01:39	-- common/autotest_common.sh@10 -- # set +x
00:17:47.523   17:01:40	-- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2
00:17:47.523  [2024-11-19 17:01:40.274841] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:17:47.523  BaseBdev2
00:17:47.523   17:01:40	-- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2
00:17:47.523   17:01:40	-- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2
00:17:47.523   17:01:40	-- common/autotest_common.sh@898 -- # local bdev_timeout=
00:17:47.523   17:01:40	-- common/autotest_common.sh@899 -- # local i
00:17:47.523   17:01:40	-- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:17:47.523   17:01:40	-- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:17:47.523   17:01:40	-- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:17:47.781   17:01:40	-- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000
00:17:48.039  [
00:17:48.039    {
00:17:48.039      "name": "BaseBdev2",
00:17:48.039      "aliases": [
00:17:48.039        "406585e4-7c77-43fa-8f7f-66ac57fbddbf"
00:17:48.039      ],
00:17:48.039      "product_name": "Malloc disk",
00:17:48.039      "block_size": 512,
00:17:48.039      "num_blocks": 65536,
00:17:48.039      "uuid": "406585e4-7c77-43fa-8f7f-66ac57fbddbf",
00:17:48.039      "assigned_rate_limits": {
00:17:48.039        "rw_ios_per_sec": 0,
00:17:48.039        "rw_mbytes_per_sec": 0,
00:17:48.039        "r_mbytes_per_sec": 0,
00:17:48.039        "w_mbytes_per_sec": 0
00:17:48.039      },
00:17:48.039      "claimed": true,
00:17:48.039      "claim_type": "exclusive_write",
00:17:48.039      "zoned": false,
00:17:48.039      "supported_io_types": {
00:17:48.039        "read": true,
00:17:48.039        "write": true,
00:17:48.039        "unmap": true,
00:17:48.039        "write_zeroes": true,
00:17:48.039        "flush": true,
00:17:48.039        "reset": true,
00:17:48.039        "compare": false,
00:17:48.039        "compare_and_write": false,
00:17:48.039        "abort": true,
00:17:48.039        "nvme_admin": false,
00:17:48.039        "nvme_io": false
00:17:48.039      },
00:17:48.039      "memory_domains": [
00:17:48.039        {
00:17:48.039          "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:17:48.039          "dma_device_type": 2
00:17:48.039        }
00:17:48.039      ],
00:17:48.039      "driver_specific": {}
00:17:48.039    }
00:17:48.039  ]
00:17:48.039   17:01:40	-- common/autotest_common.sh@905 -- # return 0
00:17:48.039   17:01:40	-- bdev/bdev_raid.sh@254 -- # (( i++ ))
00:17:48.039   17:01:40	-- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs ))
00:17:48.039   17:01:40	-- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4
00:17:48.039   17:01:40	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:17:48.039   17:01:40	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:17:48.039   17:01:40	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:17:48.039   17:01:40	-- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:17:48.039   17:01:40	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:17:48.039   17:01:40	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:17:48.039   17:01:40	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:17:48.039   17:01:40	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:17:48.039   17:01:40	-- bdev/bdev_raid.sh@125 -- # local tmp
00:17:48.039    17:01:40	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:17:48.039    17:01:40	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:17:48.298   17:01:41	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:17:48.298    "name": "Existed_Raid",
00:17:48.298    "uuid": "00000000-0000-0000-0000-000000000000",
00:17:48.298    "strip_size_kb": 0,
00:17:48.298    "state": "configuring",
00:17:48.298    "raid_level": "raid1",
00:17:48.298    "superblock": false,
00:17:48.298    "num_base_bdevs": 4,
00:17:48.298    "num_base_bdevs_discovered": 2,
00:17:48.298    "num_base_bdevs_operational": 4,
00:17:48.298    "base_bdevs_list": [
00:17:48.298      {
00:17:48.298        "name": "BaseBdev1",
00:17:48.298        "uuid": "79ff81ae-18d5-4c39-98ad-07fefad77956",
00:17:48.298        "is_configured": true,
00:17:48.298        "data_offset": 0,
00:17:48.298        "data_size": 65536
00:17:48.298      },
00:17:48.298      {
00:17:48.298        "name": "BaseBdev2",
00:17:48.298        "uuid": "406585e4-7c77-43fa-8f7f-66ac57fbddbf",
00:17:48.298        "is_configured": true,
00:17:48.298        "data_offset": 0,
00:17:48.298        "data_size": 65536
00:17:48.298      },
00:17:48.298      {
00:17:48.298        "name": "BaseBdev3",
00:17:48.298        "uuid": "00000000-0000-0000-0000-000000000000",
00:17:48.298        "is_configured": false,
00:17:48.298        "data_offset": 0,
00:17:48.298        "data_size": 0
00:17:48.298      },
00:17:48.298      {
00:17:48.298        "name": "BaseBdev4",
00:17:48.298        "uuid": "00000000-0000-0000-0000-000000000000",
00:17:48.298        "is_configured": false,
00:17:48.298        "data_offset": 0,
00:17:48.298        "data_size": 0
00:17:48.298      }
00:17:48.298    ]
00:17:48.298  }'
00:17:48.298   17:01:41	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:17:48.298   17:01:41	-- common/autotest_common.sh@10 -- # set +x
00:17:48.865   17:01:41	-- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3
00:17:49.124  [2024-11-19 17:01:41.930689] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:17:49.124  BaseBdev3
00:17:49.124   17:01:41	-- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3
00:17:49.124   17:01:41	-- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3
00:17:49.124   17:01:41	-- common/autotest_common.sh@898 -- # local bdev_timeout=
00:17:49.124   17:01:41	-- common/autotest_common.sh@899 -- # local i
00:17:49.124   17:01:41	-- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:17:49.124   17:01:41	-- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:17:49.124   17:01:41	-- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:17:49.382   17:01:42	-- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000
00:17:49.950  [
00:17:49.950    {
00:17:49.950      "name": "BaseBdev3",
00:17:49.950      "aliases": [
00:17:49.950        "42f952f0-2a6a-45e6-a170-d8a206d99798"
00:17:49.950      ],
00:17:49.950      "product_name": "Malloc disk",
00:17:49.950      "block_size": 512,
00:17:49.950      "num_blocks": 65536,
00:17:49.950      "uuid": "42f952f0-2a6a-45e6-a170-d8a206d99798",
00:17:49.950      "assigned_rate_limits": {
00:17:49.950        "rw_ios_per_sec": 0,
00:17:49.950        "rw_mbytes_per_sec": 0,
00:17:49.950        "r_mbytes_per_sec": 0,
00:17:49.950        "w_mbytes_per_sec": 0
00:17:49.950      },
00:17:49.950      "claimed": true,
00:17:49.950      "claim_type": "exclusive_write",
00:17:49.950      "zoned": false,
00:17:49.950      "supported_io_types": {
00:17:49.950        "read": true,
00:17:49.950        "write": true,
00:17:49.950        "unmap": true,
00:17:49.950        "write_zeroes": true,
00:17:49.950        "flush": true,
00:17:49.950        "reset": true,
00:17:49.950        "compare": false,
00:17:49.950        "compare_and_write": false,
00:17:49.950        "abort": true,
00:17:49.950        "nvme_admin": false,
00:17:49.950        "nvme_io": false
00:17:49.950      },
00:17:49.950      "memory_domains": [
00:17:49.950        {
00:17:49.950          "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:17:49.950          "dma_device_type": 2
00:17:49.950        }
00:17:49.950      ],
00:17:49.950      "driver_specific": {}
00:17:49.950    }
00:17:49.950  ]
00:17:49.950   17:01:42	-- common/autotest_common.sh@905 -- # return 0
00:17:49.950   17:01:42	-- bdev/bdev_raid.sh@254 -- # (( i++ ))
00:17:49.950   17:01:42	-- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs ))
00:17:49.950   17:01:42	-- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4
00:17:49.950   17:01:42	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:17:49.950   17:01:42	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:17:49.950   17:01:42	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:17:49.950   17:01:42	-- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:17:49.950   17:01:42	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:17:49.950   17:01:42	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:17:49.950   17:01:42	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:17:49.950   17:01:42	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:17:49.950   17:01:42	-- bdev/bdev_raid.sh@125 -- # local tmp
00:17:49.950    17:01:42	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:17:49.950    17:01:42	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:17:49.950   17:01:42	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:17:49.950    "name": "Existed_Raid",
00:17:49.950    "uuid": "00000000-0000-0000-0000-000000000000",
00:17:49.950    "strip_size_kb": 0,
00:17:49.950    "state": "configuring",
00:17:49.950    "raid_level": "raid1",
00:17:49.950    "superblock": false,
00:17:49.950    "num_base_bdevs": 4,
00:17:49.950    "num_base_bdevs_discovered": 3,
00:17:49.950    "num_base_bdevs_operational": 4,
00:17:49.950    "base_bdevs_list": [
00:17:49.950      {
00:17:49.950        "name": "BaseBdev1",
00:17:49.950        "uuid": "79ff81ae-18d5-4c39-98ad-07fefad77956",
00:17:49.950        "is_configured": true,
00:17:49.950        "data_offset": 0,
00:17:49.950        "data_size": 65536
00:17:49.950      },
00:17:49.950      {
00:17:49.950        "name": "BaseBdev2",
00:17:49.950        "uuid": "406585e4-7c77-43fa-8f7f-66ac57fbddbf",
00:17:49.950        "is_configured": true,
00:17:49.950        "data_offset": 0,
00:17:49.950        "data_size": 65536
00:17:49.950      },
00:17:49.950      {
00:17:49.950        "name": "BaseBdev3",
00:17:49.950        "uuid": "42f952f0-2a6a-45e6-a170-d8a206d99798",
00:17:49.950        "is_configured": true,
00:17:49.950        "data_offset": 0,
00:17:49.950        "data_size": 65536
00:17:49.950      },
00:17:49.950      {
00:17:49.950        "name": "BaseBdev4",
00:17:49.950        "uuid": "00000000-0000-0000-0000-000000000000",
00:17:49.950        "is_configured": false,
00:17:49.950        "data_offset": 0,
00:17:49.950        "data_size": 0
00:17:49.950      }
00:17:49.950    ]
00:17:49.950  }'
00:17:49.950   17:01:42	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:17:49.950   17:01:42	-- common/autotest_common.sh@10 -- # set +x
00:17:50.885   17:01:43	-- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4
00:17:50.885  [2024-11-19 17:01:43.666940] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed
00:17:50.885  [2024-11-19 17:01:43.667256] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006080
00:17:50.885  [2024-11-19 17:01:43.667303] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512
00:17:50.885  [2024-11-19 17:01:43.667573] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002120
00:17:50.885  [2024-11-19 17:01:43.668108] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006080
00:17:50.885  [2024-11-19 17:01:43.668227] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006080
00:17:50.885  [2024-11-19 17:01:43.668573] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:17:50.885  BaseBdev4
00:17:50.885   17:01:43	-- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4
00:17:50.885   17:01:43	-- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4
00:17:50.885   17:01:43	-- common/autotest_common.sh@898 -- # local bdev_timeout=
00:17:50.885   17:01:43	-- common/autotest_common.sh@899 -- # local i
00:17:50.885   17:01:43	-- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:17:50.885   17:01:43	-- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:17:50.885   17:01:43	-- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:17:51.144   17:01:43	-- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000
00:17:51.402  [
00:17:51.402    {
00:17:51.402      "name": "BaseBdev4",
00:17:51.402      "aliases": [
00:17:51.402        "50fa9831-eb2d-41fc-90bd-9ff523c33399"
00:17:51.402      ],
00:17:51.402      "product_name": "Malloc disk",
00:17:51.402      "block_size": 512,
00:17:51.402      "num_blocks": 65536,
00:17:51.402      "uuid": "50fa9831-eb2d-41fc-90bd-9ff523c33399",
00:17:51.402      "assigned_rate_limits": {
00:17:51.402        "rw_ios_per_sec": 0,
00:17:51.402        "rw_mbytes_per_sec": 0,
00:17:51.402        "r_mbytes_per_sec": 0,
00:17:51.402        "w_mbytes_per_sec": 0
00:17:51.402      },
00:17:51.402      "claimed": true,
00:17:51.402      "claim_type": "exclusive_write",
00:17:51.402      "zoned": false,
00:17:51.402      "supported_io_types": {
00:17:51.402        "read": true,
00:17:51.402        "write": true,
00:17:51.402        "unmap": true,
00:17:51.402        "write_zeroes": true,
00:17:51.402        "flush": true,
00:17:51.402        "reset": true,
00:17:51.402        "compare": false,
00:17:51.402        "compare_and_write": false,
00:17:51.402        "abort": true,
00:17:51.402        "nvme_admin": false,
00:17:51.402        "nvme_io": false
00:17:51.402      },
00:17:51.402      "memory_domains": [
00:17:51.402        {
00:17:51.402          "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:17:51.402          "dma_device_type": 2
00:17:51.402        }
00:17:51.402      ],
00:17:51.402      "driver_specific": {}
00:17:51.402    }
00:17:51.402  ]
00:17:51.402   17:01:44	-- common/autotest_common.sh@905 -- # return 0
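Claiming BaseBdev4 completed the set, so the raid_bdev_configure_cont DEBUG lines above show the io device being registered and Existed_Raid leaving "configuring"; the verify call that follows now expects "online". One way to observe the same transition, using the category argument bdev_raid_get_bdevs already accepts:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # lists only raids in the online state; prints Existed_Raid at this point in the run
    "$RPC" -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online | jq -r '.[].name'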
00:17:51.402   17:01:44	-- bdev/bdev_raid.sh@254 -- # (( i++ ))
00:17:51.402   17:01:44	-- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs ))
00:17:51.402   17:01:44	-- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4
00:17:51.402   17:01:44	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:17:51.402   17:01:44	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:17:51.402   17:01:44	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:17:51.402   17:01:44	-- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:17:51.402   17:01:44	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:17:51.402   17:01:44	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:17:51.402   17:01:44	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:17:51.402   17:01:44	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:17:51.402   17:01:44	-- bdev/bdev_raid.sh@125 -- # local tmp
00:17:51.402    17:01:44	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:17:51.402    17:01:44	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:17:51.970   17:01:44	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:17:51.970    "name": "Existed_Raid",
00:17:51.970    "uuid": "cec76beb-b39e-4a24-b031-b512817d41c2",
00:17:51.970    "strip_size_kb": 0,
00:17:51.970    "state": "online",
00:17:51.970    "raid_level": "raid1",
00:17:51.970    "superblock": false,
00:17:51.970    "num_base_bdevs": 4,
00:17:51.970    "num_base_bdevs_discovered": 4,
00:17:51.970    "num_base_bdevs_operational": 4,
00:17:51.970    "base_bdevs_list": [
00:17:51.970      {
00:17:51.970        "name": "BaseBdev1",
00:17:51.970        "uuid": "79ff81ae-18d5-4c39-98ad-07fefad77956",
00:17:51.970        "is_configured": true,
00:17:51.970        "data_offset": 0,
00:17:51.970        "data_size": 65536
00:17:51.970      },
00:17:51.970      {
00:17:51.970        "name": "BaseBdev2",
00:17:51.970        "uuid": "406585e4-7c77-43fa-8f7f-66ac57fbddbf",
00:17:51.970        "is_configured": true,
00:17:51.970        "data_offset": 0,
00:17:51.970        "data_size": 65536
00:17:51.970      },
00:17:51.970      {
00:17:51.970        "name": "BaseBdev3",
00:17:51.970        "uuid": "42f952f0-2a6a-45e6-a170-d8a206d99798",
00:17:51.970        "is_configured": true,
00:17:51.970        "data_offset": 0,
00:17:51.970        "data_size": 65536
00:17:51.970      },
00:17:51.970      {
00:17:51.970        "name": "BaseBdev4",
00:17:51.970        "uuid": "50fa9831-eb2d-41fc-90bd-9ff523c33399",
00:17:51.970        "is_configured": true,
00:17:51.970        "data_offset": 0,
00:17:51.970        "data_size": 65536
00:17:51.970      }
00:17:51.970    ]
00:17:51.970  }'
00:17:51.970   17:01:44	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:17:51.970   17:01:44	-- common/autotest_common.sh@10 -- # set +x
00:17:52.536   17:01:45	-- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1
00:17:52.795  [2024-11-19 17:01:45.395590] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:17:52.795   17:01:45	-- bdev/bdev_raid.sh@263 -- # local expected_state
00:17:52.795   17:01:45	-- bdev/bdev_raid.sh@264 -- # has_redundancy raid1
00:17:52.795   17:01:45	-- bdev/bdev_raid.sh@195 -- # case $1 in
00:17:52.795   17:01:45	-- bdev/bdev_raid.sh@196 -- # return 0
00:17:52.795   17:01:45	-- bdev/bdev_raid.sh@267 -- # expected_state=online
00:17:52.795   17:01:45	-- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3
00:17:52.795   17:01:45	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:17:52.795   17:01:45	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:17:52.795   17:01:45	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:17:52.795   17:01:45	-- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:17:52.795   17:01:45	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:17:52.795   17:01:45	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:17:52.795   17:01:45	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:17:52.795   17:01:45	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:17:52.795   17:01:45	-- bdev/bdev_raid.sh@125 -- # local tmp
00:17:52.795    17:01:45	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:17:52.795    17:01:45	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:17:53.054   17:01:45	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:17:53.054    "name": "Existed_Raid",
00:17:53.054    "uuid": "cec76beb-b39e-4a24-b031-b512817d41c2",
00:17:53.054    "strip_size_kb": 0,
00:17:53.054    "state": "online",
00:17:53.054    "raid_level": "raid1",
00:17:53.054    "superblock": false,
00:17:53.054    "num_base_bdevs": 4,
00:17:53.054    "num_base_bdevs_discovered": 3,
00:17:53.054    "num_base_bdevs_operational": 3,
00:17:53.054    "base_bdevs_list": [
00:17:53.054      {
00:17:53.054        "name": null,
00:17:53.054        "uuid": "00000000-0000-0000-0000-000000000000",
00:17:53.054        "is_configured": false,
00:17:53.054        "data_offset": 0,
00:17:53.054        "data_size": 65536
00:17:53.054      },
00:17:53.054      {
00:17:53.054        "name": "BaseBdev2",
00:17:53.054        "uuid": "406585e4-7c77-43fa-8f7f-66ac57fbddbf",
00:17:53.054        "is_configured": true,
00:17:53.054        "data_offset": 0,
00:17:53.054        "data_size": 65536
00:17:53.054      },
00:17:53.054      {
00:17:53.054        "name": "BaseBdev3",
00:17:53.054        "uuid": "42f952f0-2a6a-45e6-a170-d8a206d99798",
00:17:53.054        "is_configured": true,
00:17:53.054        "data_offset": 0,
00:17:53.054        "data_size": 65536
00:17:53.054      },
00:17:53.054      {
00:17:53.054        "name": "BaseBdev4",
00:17:53.054        "uuid": "50fa9831-eb2d-41fc-90bd-9ff523c33399",
00:17:53.054        "is_configured": true,
00:17:53.054        "data_offset": 0,
00:17:53.054        "data_size": 65536
00:17:53.054      }
00:17:53.054    ]
00:17:53.054  }'
00:17:53.054   17:01:45	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:17:53.054   17:01:45	-- common/autotest_common.sh@10 -- # set +x
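This is the redundancy check: deleting BaseBdev1 out from under an online raid1 leaves the array online with a null-named slot and 3 of 4 bases operational, because has_redundancy (@195–@196 above) returns 0 for raid1. A sketch of that helper as traced (only the raid1 arm is visible in this log; the fall-through arm is assumed):

    has_redundancy() {
        case $1 in
            raid1) return 0 ;;   # mirrored: survives losing a base bdev
            *)     return 1 ;;   # assumed: a non-redundant level would go offline instead
        esac
    }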
00:17:53.621   17:01:46	-- bdev/bdev_raid.sh@273 -- # (( i = 1 ))
00:17:53.621   17:01:46	-- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs ))
00:17:53.621    17:01:46	-- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:17:53.621    17:01:46	-- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]'
00:17:53.880   17:01:46	-- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid
00:17:53.880   17:01:46	-- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:17:53.880   17:01:46	-- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2
00:17:54.138  [2024-11-19 17:01:46.827374] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:17:54.138   17:01:46	-- bdev/bdev_raid.sh@273 -- # (( i++ ))
00:17:54.138   17:01:46	-- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs ))
00:17:54.138    17:01:46	-- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:17:54.138    17:01:46	-- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]'
00:17:54.396   17:01:47	-- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid
00:17:54.396   17:01:47	-- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:17:54.396   17:01:47	-- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3
00:17:54.396  [2024-11-19 17:01:47.240154] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:17:54.655   17:01:47	-- bdev/bdev_raid.sh@273 -- # (( i++ ))
00:17:54.655   17:01:47	-- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs ))
00:17:54.655    17:01:47	-- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:17:54.655    17:01:47	-- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]'
00:17:54.655   17:01:47	-- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid
00:17:54.655   17:01:47	-- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:17:54.655   17:01:47	-- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4
00:17:54.913  [2024-11-19 17:01:47.700980] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4
00:17:54.913  [2024-11-19 17:01:47.701236] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:17:54.913  [2024-11-19 17:01:47.701409] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:17:54.913  [2024-11-19 17:01:47.714100] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:17:54.913  [2024-11-19 17:01:47.714283] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006080 name Existed_Raid, state offline
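Deleting BaseBdev4, the last remaining base after BaseBdev2 and BaseBdev3 were removed above, finally drives the raid offline and through destruct/cleanup (the DEBUG lines just above). With no raid bdevs left, the probe at @281 comes back empty, which is what the test uses to confirm teardown:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # select(.) filters out the null from an empty list, so this prints nothing
    # once Existed_Raid is gone and raid_bdev ends up as the empty string
    raid_bdev=$("$RPC" -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all | jq -r '.[0]["name"] | select(.)')
    [[ -z $raid_bdev ]]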
00:17:54.913   17:01:47	-- bdev/bdev_raid.sh@273 -- # (( i++ ))
00:17:54.913   17:01:47	-- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs ))
00:17:54.913    17:01:47	-- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:17:54.913    17:01:47	-- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)'
00:17:55.171   17:01:47	-- bdev/bdev_raid.sh@281 -- # raid_bdev=
00:17:55.171   17:01:47	-- bdev/bdev_raid.sh@282 -- # '[' -n '' ']'
00:17:55.171   17:01:47	-- bdev/bdev_raid.sh@287 -- # killprocess 131123
00:17:55.171   17:01:47	-- common/autotest_common.sh@936 -- # '[' -z 131123 ']'
00:17:55.171   17:01:47	-- common/autotest_common.sh@940 -- # kill -0 131123
00:17:55.171    17:01:47	-- common/autotest_common.sh@941 -- # uname
00:17:55.171   17:01:47	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:17:55.171    17:01:47	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 131123
00:17:55.171   17:01:47	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:17:55.171   17:01:47	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:17:55.171   17:01:47	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 131123'
00:17:55.171  killing process with pid 131123
00:17:55.171   17:01:47	-- common/autotest_common.sh@955 -- # kill 131123
00:17:55.171   17:01:47	-- common/autotest_common.sh@960 -- # wait 131123
00:17:55.171  [2024-11-19 17:01:47.981413] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:17:55.171  [2024-11-19 17:01:47.981554] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit
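killprocess (@936–@960 above) guards the pid, refuses to signal anything running as sudo, then sends SIGTERM and reaps the child. Condensed from the traced steps (a sketch, not the exact autotest_common.sh source):

    killprocess() {
        local pid=$1
        [[ -n $pid ]] || return 1
        kill -0 "$pid"                                      # fail early if already gone
        if [[ $(uname) == Linux ]]; then
            # the trace also checks the process name, refusing to kill "sudo"
            [[ $(ps --no-headers -o comm= "$pid") != sudo ]] || return 1
        fi
        echo "killing process with pid $pid"
        kill "$pid" && wait "$pid"                          # SIGTERM, then reap
    }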
00:17:55.737   17:01:48	-- bdev/bdev_raid.sh@289 -- # return 0
00:17:55.737  
00:17:55.737  real	0m13.995s
00:17:55.737  user	0m25.121s
00:17:55.737  sys	0m2.317s
00:17:55.737   17:01:48	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:17:55.737   17:01:48	-- common/autotest_common.sh@10 -- # set +x
00:17:55.737  ************************************
00:17:55.737  END TEST raid_state_function_test
00:17:55.737  ************************************
00:17:55.737   17:01:48	-- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true
00:17:55.737   17:01:48	-- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']'
00:17:55.737   17:01:48	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:17:55.737   17:01:48	-- common/autotest_common.sh@10 -- # set +x
00:17:55.737  ************************************
00:17:55.737  START TEST raid_state_function_test_sb
00:17:55.737  ************************************
00:17:55.737   17:01:48	-- common/autotest_common.sh@1114 -- # raid_state_function_test raid1 4 true
00:17:55.737   17:01:48	-- bdev/bdev_raid.sh@202 -- # local raid_level=raid1
00:17:55.737   17:01:48	-- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4
00:17:55.737   17:01:48	-- bdev/bdev_raid.sh@204 -- # local superblock=true
00:17:55.737   17:01:48	-- bdev/bdev_raid.sh@205 -- # local raid_bdev
00:17:55.737    17:01:48	-- bdev/bdev_raid.sh@206 -- # (( i = 1 ))
00:17:55.737    17:01:48	-- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:17:55.737    17:01:48	-- bdev/bdev_raid.sh@206 -- # echo BaseBdev1
00:17:55.737    17:01:48	-- bdev/bdev_raid.sh@206 -- # (( i++ ))
00:17:55.737    17:01:48	-- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:17:55.737    17:01:48	-- bdev/bdev_raid.sh@206 -- # echo BaseBdev2
00:17:55.737    17:01:48	-- bdev/bdev_raid.sh@206 -- # (( i++ ))
00:17:55.737    17:01:48	-- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:17:55.737    17:01:48	-- bdev/bdev_raid.sh@206 -- # echo BaseBdev3
00:17:55.737    17:01:48	-- bdev/bdev_raid.sh@206 -- # (( i++ ))
00:17:55.737    17:01:48	-- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:17:55.737    17:01:48	-- bdev/bdev_raid.sh@206 -- # echo BaseBdev4
00:17:55.737    17:01:48	-- bdev/bdev_raid.sh@206 -- # (( i++ ))
00:17:55.737    17:01:48	-- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:17:55.737   17:01:48	-- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4')
00:17:55.737   17:01:48	-- bdev/bdev_raid.sh@206 -- # local base_bdevs
00:17:55.737   17:01:48	-- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid
00:17:55.737   17:01:48	-- bdev/bdev_raid.sh@208 -- # local strip_size
00:17:55.737   17:01:48	-- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg
00:17:55.737   17:01:48	-- bdev/bdev_raid.sh@210 -- # local superblock_create_arg
00:17:55.737   17:01:48	-- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']'
00:17:55.737   17:01:48	-- bdev/bdev_raid.sh@216 -- # strip_size=0
00:17:55.737   17:01:48	-- bdev/bdev_raid.sh@219 -- # '[' true = true ']'
00:17:55.737   17:01:48	-- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s
00:17:55.737   17:01:48	-- bdev/bdev_raid.sh@226 -- # raid_pid=131557
00:17:55.737   17:01:48	-- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 131557'
00:17:55.737  Process raid pid: 131557
00:17:55.737   17:01:48	-- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid
00:17:55.737   17:01:48	-- bdev/bdev_raid.sh@228 -- # waitforlisten 131557 /var/tmp/spdk-raid.sock
00:17:55.737   17:01:48	-- common/autotest_common.sh@829 -- # '[' -z 131557 ']'
00:17:55.737   17:01:48	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock
00:17:55.737   17:01:48	-- common/autotest_common.sh@834 -- # local max_retries=100
00:17:55.737   17:01:48	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...'
00:17:55.737  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...
00:17:55.737   17:01:48	-- common/autotest_common.sh@838 -- # xtrace_disable
00:17:55.737   17:01:48	-- common/autotest_common.sh@10 -- # set +x
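waitforlisten (@829–@838 above) parks the test until the freshly started bdev_svc answers on its UNIX-domain RPC socket, giving up after max_retries. A hedged reconstruction of that polling loop (rpc_get_methods is a cheap RPC that any live target answers; the real helper may probe the socket differently):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk-raid.sock}
        local max_retries=100 i=0
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        while (( i++ < max_retries )); do
            kill -0 "$pid" 2> /dev/null || return 1                      # target died
            "$RPC" -s "$rpc_addr" rpc_get_methods &> /dev/null && return 0
            sleep 0.5
        done
        return 1
    }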
00:17:55.737  [2024-11-19 17:01:48.548643] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:17:55.737  [2024-11-19 17:01:48.549080] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:17:55.996  [2024-11-19 17:01:48.712138] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:17:55.996  [2024-11-19 17:01:48.800911] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:17:56.254  [2024-11-19 17:01:48.886358] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:17:56.820   17:01:49	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:17:56.820   17:01:49	-- common/autotest_common.sh@862 -- # return 0
00:17:56.820   17:01:49	-- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
00:17:56.820  [2024-11-19 17:01:49.633975] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:17:56.820  [2024-11-19 17:01:49.634224] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:17:56.820  [2024-11-19 17:01:49.634309] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:17:56.820  [2024-11-19 17:01:49.634362] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:17:56.820  [2024-11-19 17:01:49.634387] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:17:56.820  [2024-11-19 17:01:49.634453] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:17:56.820  [2024-11-19 17:01:49.634687] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:17:56.820  [2024-11-19 17:01:49.634748] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:17:56.820   17:01:49	-- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4
00:17:56.820   17:01:49	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:17:56.820   17:01:49	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:17:56.820   17:01:49	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:17:56.820   17:01:49	-- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:17:56.820   17:01:49	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:17:56.821   17:01:49	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:17:56.821   17:01:49	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:17:56.821   17:01:49	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:17:56.821   17:01:49	-- bdev/bdev_raid.sh@125 -- # local tmp
00:17:56.821    17:01:49	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:17:56.821    17:01:49	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:17:57.408   17:01:49	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:17:57.408    "name": "Existed_Raid",
00:17:57.408    "uuid": "974591b2-0b9e-446a-bc73-0aeee5b7705d",
00:17:57.408    "strip_size_kb": 0,
00:17:57.408    "state": "configuring",
00:17:57.408    "raid_level": "raid1",
00:17:57.408    "superblock": true,
00:17:57.408    "num_base_bdevs": 4,
00:17:57.408    "num_base_bdevs_discovered": 0,
00:17:57.408    "num_base_bdevs_operational": 4,
00:17:57.408    "base_bdevs_list": [
00:17:57.408      {
00:17:57.408        "name": "BaseBdev1",
00:17:57.408        "uuid": "00000000-0000-0000-0000-000000000000",
00:17:57.408        "is_configured": false,
00:17:57.408        "data_offset": 0,
00:17:57.408        "data_size": 0
00:17:57.408      },
00:17:57.408      {
00:17:57.408        "name": "BaseBdev2",
00:17:57.408        "uuid": "00000000-0000-0000-0000-000000000000",
00:17:57.408        "is_configured": false,
00:17:57.408        "data_offset": 0,
00:17:57.408        "data_size": 0
00:17:57.408      },
00:17:57.408      {
00:17:57.408        "name": "BaseBdev3",
00:17:57.408        "uuid": "00000000-0000-0000-0000-000000000000",
00:17:57.408        "is_configured": false,
00:17:57.408        "data_offset": 0,
00:17:57.408        "data_size": 0
00:17:57.408      },
00:17:57.408      {
00:17:57.408        "name": "BaseBdev4",
00:17:57.408        "uuid": "00000000-0000-0000-0000-000000000000",
00:17:57.408        "is_configured": false,
00:17:57.408        "data_offset": 0,
00:17:57.408        "data_size": 0
00:17:57.408      }
00:17:57.408    ]
00:17:57.408  }'
00:17:57.408   17:01:49	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:17:57.408   17:01:49	-- common/autotest_common.sh@10 -- # set +x
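The _sb variant repeats the same state machine with -s on every bdev_raid_create, so this still-configuring raid already reports "superblock": true and a generated uuid (974591b2-...) instead of the all-zero uuid seen in the non-superblock run. A quick jq probe for that difference:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$RPC" -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all |
        jq -r '.[] | select(.name == "Existed_Raid") | "superblock=\(.superblock) uuid=\(.uuid)"'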
00:17:57.989   17:01:50	-- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid
00:17:57.989  [2024-11-19 17:01:50.770016] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:17:57.989  [2024-11-19 17:01:50.770247] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005480 name Existed_Raid, state configuring
00:17:57.989   17:01:50	-- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
00:17:58.247  [2024-11-19 17:01:51.042163] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:17:58.247  [2024-11-19 17:01:51.042476] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:17:58.247  [2024-11-19 17:01:51.042570] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:17:58.247  [2024-11-19 17:01:51.042631] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:17:58.247  [2024-11-19 17:01:51.042658] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:17:58.247  [2024-11-19 17:01:51.042695] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:17:58.247  [2024-11-19 17:01:51.042776] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:17:58.247  [2024-11-19 17:01:51.042832] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:17:58.247   17:01:51	-- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1
00:17:58.506  [2024-11-19 17:01:51.262126] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:17:58.506  BaseBdev1
00:17:58.506   17:01:51	-- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1
00:17:58.506   17:01:51	-- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1
00:17:58.506   17:01:51	-- common/autotest_common.sh@898 -- # local bdev_timeout=
00:17:58.506   17:01:51	-- common/autotest_common.sh@899 -- # local i
00:17:58.506   17:01:51	-- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:17:58.506   17:01:51	-- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:17:58.506   17:01:51	-- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:17:58.764   17:01:51	-- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000
00:17:59.022  [
00:17:59.022    {
00:17:59.022      "name": "BaseBdev1",
00:17:59.022      "aliases": [
00:17:59.022        "070474bc-3b3f-4f8f-9c3b-5f07f974a4bb"
00:17:59.022      ],
00:17:59.022      "product_name": "Malloc disk",
00:17:59.022      "block_size": 512,
00:17:59.022      "num_blocks": 65536,
00:17:59.022      "uuid": "070474bc-3b3f-4f8f-9c3b-5f07f974a4bb",
00:17:59.022      "assigned_rate_limits": {
00:17:59.022        "rw_ios_per_sec": 0,
00:17:59.022        "rw_mbytes_per_sec": 0,
00:17:59.022        "r_mbytes_per_sec": 0,
00:17:59.022        "w_mbytes_per_sec": 0
00:17:59.022      },
00:17:59.022      "claimed": true,
00:17:59.022      "claim_type": "exclusive_write",
00:17:59.022      "zoned": false,
00:17:59.022      "supported_io_types": {
00:17:59.022        "read": true,
00:17:59.022        "write": true,
00:17:59.022        "unmap": true,
00:17:59.022        "write_zeroes": true,
00:17:59.022        "flush": true,
00:17:59.022        "reset": true,
00:17:59.022        "compare": false,
00:17:59.022        "compare_and_write": false,
00:17:59.022        "abort": true,
00:17:59.022        "nvme_admin": false,
00:17:59.022        "nvme_io": false
00:17:59.022      },
00:17:59.022      "memory_domains": [
00:17:59.022        {
00:17:59.022          "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:17:59.022          "dma_device_type": 2
00:17:59.022        }
00:17:59.022      ],
00:17:59.022      "driver_specific": {}
00:17:59.022    }
00:17:59.022  ]
00:17:59.022   17:01:51	-- common/autotest_common.sh@905 -- # return 0
00:17:59.022   17:01:51	-- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4
00:17:59.022   17:01:51	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:17:59.022   17:01:51	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:17:59.022   17:01:51	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:17:59.022   17:01:51	-- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:17:59.022   17:01:51	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:17:59.022   17:01:51	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:17:59.022   17:01:51	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:17:59.022   17:01:51	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:17:59.022   17:01:51	-- bdev/bdev_raid.sh@125 -- # local tmp
00:17:59.022    17:01:51	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:17:59.022    17:01:51	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:17:59.280   17:01:51	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:17:59.280    "name": "Existed_Raid",
00:17:59.280    "uuid": "d1353a70-3fd2-40a4-aa11-c72123235b87",
00:17:59.280    "strip_size_kb": 0,
00:17:59.280    "state": "configuring",
00:17:59.280    "raid_level": "raid1",
00:17:59.280    "superblock": true,
00:17:59.280    "num_base_bdevs": 4,
00:17:59.280    "num_base_bdevs_discovered": 1,
00:17:59.280    "num_base_bdevs_operational": 4,
00:17:59.280    "base_bdevs_list": [
00:17:59.280      {
00:17:59.280        "name": "BaseBdev1",
00:17:59.280        "uuid": "070474bc-3b3f-4f8f-9c3b-5f07f974a4bb",
00:17:59.280        "is_configured": true,
00:17:59.280        "data_offset": 2048,
00:17:59.280        "data_size": 63488
00:17:59.280      },
00:17:59.280      {
00:17:59.280        "name": "BaseBdev2",
00:17:59.280        "uuid": "00000000-0000-0000-0000-000000000000",
00:17:59.280        "is_configured": false,
00:17:59.280        "data_offset": 0,
00:17:59.280        "data_size": 0
00:17:59.280      },
00:17:59.280      {
00:17:59.280        "name": "BaseBdev3",
00:17:59.280        "uuid": "00000000-0000-0000-0000-000000000000",
00:17:59.280        "is_configured": false,
00:17:59.280        "data_offset": 0,
00:17:59.280        "data_size": 0
00:17:59.280      },
00:17:59.280      {
00:17:59.280        "name": "BaseBdev4",
00:17:59.280        "uuid": "00000000-0000-0000-0000-000000000000",
00:17:59.280        "is_configured": false,
00:17:59.280        "data_offset": 0,
00:17:59.280        "data_size": 0
00:17:59.280      }
00:17:59.280    ]
00:17:59.280  }'
00:17:59.280   17:01:51	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:17:59.280   17:01:51	-- common/autotest_common.sh@10 -- # set +x
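Worth noting in the JSON above: with a superblock, BaseBdev1's slot reports data_offset 2048 and data_size 63488, because the raid reserves the first 2048 of the malloc bdev's 65536 blocks (1 MiB at 512-byte blocks) for its on-disk metadata; the non-superblock run kept data_offset 0 and the full 65536. The arithmetic, spelled out:

    blocks=65536 offset=2048 block_size=512
    echo $(( blocks - offset ))             # 63488 -> the data_size reported above
    echo $(( offset * block_size / 1024 ))  # 1024 KiB reserved per base bdev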
00:17:59.846   17:01:52	-- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid
00:18:00.104  [2024-11-19 17:01:52.882480] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:18:00.104  [2024-11-19 17:01:52.882810] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005780 name Existed_Raid, state configuring
00:18:00.104   17:01:52	-- bdev/bdev_raid.sh@244 -- # '[' true = true ']'
00:18:00.104   17:01:52	-- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1
00:18:00.362   17:01:53	-- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1
00:18:00.620  BaseBdev1
00:18:00.620   17:01:53	-- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1
00:18:00.620   17:01:53	-- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1
00:18:00.620   17:01:53	-- common/autotest_common.sh@898 -- # local bdev_timeout=
00:18:00.620   17:01:53	-- common/autotest_common.sh@899 -- # local i
00:18:00.620   17:01:53	-- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:18:00.620   17:01:53	-- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:18:00.620   17:01:53	-- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:18:00.878   17:01:53	-- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000
00:18:01.137  [
00:18:01.137    {
00:18:01.137      "name": "BaseBdev1",
00:18:01.137      "aliases": [
00:18:01.137        "d98a6734-92cf-45db-915f-8379f5a26760"
00:18:01.137      ],
00:18:01.137      "product_name": "Malloc disk",
00:18:01.137      "block_size": 512,
00:18:01.137      "num_blocks": 65536,
00:18:01.137      "uuid": "d98a6734-92cf-45db-915f-8379f5a26760",
00:18:01.137      "assigned_rate_limits": {
00:18:01.137        "rw_ios_per_sec": 0,
00:18:01.137        "rw_mbytes_per_sec": 0,
00:18:01.137        "r_mbytes_per_sec": 0,
00:18:01.137        "w_mbytes_per_sec": 0
00:18:01.137      },
00:18:01.137      "claimed": false,
00:18:01.137      "zoned": false,
00:18:01.137      "supported_io_types": {
00:18:01.137        "read": true,
00:18:01.137        "write": true,
00:18:01.137        "unmap": true,
00:18:01.137        "write_zeroes": true,
00:18:01.137        "flush": true,
00:18:01.137        "reset": true,
00:18:01.137        "compare": false,
00:18:01.137        "compare_and_write": false,
00:18:01.137        "abort": true,
00:18:01.137        "nvme_admin": false,
00:18:01.137        "nvme_io": false
00:18:01.137      },
00:18:01.137      "memory_domains": [
00:18:01.137        {
00:18:01.137          "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:18:01.137          "dma_device_type": 2
00:18:01.137        }
00:18:01.137      ],
00:18:01.137      "driver_specific": {}
00:18:01.137    }
00:18:01.137  ]
00:18:01.137   17:01:53	-- common/autotest_common.sh@905 -- # return 0
00:18:01.137   17:01:53	-- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
00:18:01.396  [2024-11-19 17:01:54.161607] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:18:01.396  [2024-11-19 17:01:54.164312] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:18:01.396  [2024-11-19 17:01:54.164593] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:18:01.396  [2024-11-19 17:01:54.164693] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:18:01.396  [2024-11-19 17:01:54.164813] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:18:01.396  [2024-11-19 17:01:54.164904] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:18:01.396  [2024-11-19 17:01:54.164956] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:18:01.396   17:01:54	-- bdev/bdev_raid.sh@254 -- # (( i = 1 ))
00:18:01.396   17:01:54	-- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs ))
00:18:01.396   17:01:54	-- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4
00:18:01.396   17:01:54	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:18:01.396   17:01:54	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:18:01.396   17:01:54	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:18:01.396   17:01:54	-- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:18:01.396   17:01:54	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:18:01.396   17:01:54	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:18:01.396   17:01:54	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:18:01.396   17:01:54	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:18:01.396   17:01:54	-- bdev/bdev_raid.sh@125 -- # local tmp
00:18:01.396    17:01:54	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:18:01.396    17:01:54	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:18:01.654   17:01:54	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:18:01.654    "name": "Existed_Raid",
00:18:01.654    "uuid": "f1e22a8c-9c74-4c88-95e8-7452ccb0a945",
00:18:01.654    "strip_size_kb": 0,
00:18:01.654    "state": "configuring",
00:18:01.654    "raid_level": "raid1",
00:18:01.654    "superblock": true,
00:18:01.654    "num_base_bdevs": 4,
00:18:01.654    "num_base_bdevs_discovered": 1,
00:18:01.654    "num_base_bdevs_operational": 4,
00:18:01.654    "base_bdevs_list": [
00:18:01.654      {
00:18:01.654        "name": "BaseBdev1",
00:18:01.654        "uuid": "d98a6734-92cf-45db-915f-8379f5a26760",
00:18:01.654        "is_configured": true,
00:18:01.654        "data_offset": 2048,
00:18:01.654        "data_size": 63488
00:18:01.654      },
00:18:01.654      {
00:18:01.654        "name": "BaseBdev2",
00:18:01.654        "uuid": "00000000-0000-0000-0000-000000000000",
00:18:01.654        "is_configured": false,
00:18:01.654        "data_offset": 0,
00:18:01.654        "data_size": 0
00:18:01.654      },
00:18:01.654      {
00:18:01.654        "name": "BaseBdev3",
00:18:01.654        "uuid": "00000000-0000-0000-0000-000000000000",
00:18:01.654        "is_configured": false,
00:18:01.654        "data_offset": 0,
00:18:01.654        "data_size": 0
00:18:01.654      },
00:18:01.654      {
00:18:01.654        "name": "BaseBdev4",
00:18:01.654        "uuid": "00000000-0000-0000-0000-000000000000",
00:18:01.654        "is_configured": false,
00:18:01.654        "data_offset": 0,
00:18:01.654        "data_size": 0
00:18:01.654      }
00:18:01.654    ]
00:18:01.654  }'
00:18:01.654   17:01:54	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:18:01.654   17:01:54	-- common/autotest_common.sh@10 -- # set +x
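Annotation: the bdev_raid_create call above succeeds even though only BaseBdev1 exists yet ("doesn't exist now" NOTICEs for BaseBdev2..4); the array is registered in "configuring" state and claims the remaining members as they appear. The verify_raid_bdev_state helper traced above boils down to the following check — a simplified stand-in, with the rpc.py path and socket taken from this log:

    # Simplified stand-in for verify_raid_bdev_state (paths/socket as in this log).
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    info=$("$rpc" -s "$sock" bdev_raid_get_bdevs all |
           jq -r '.[] | select(.name == "Existed_Raid")')
    state=$(jq -r '.state' <<<"$info")                        # expect "configuring"
    found=$(jq -r '.num_base_bdevs_discovered' <<<"$info")    # expect 1 of 4
    [[ $state == configuring && $found -eq 1 ]] || echo "unexpected raid state" >&2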
00:18:02.220   17:01:54	-- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2
00:18:02.478  [2024-11-19 17:01:55.146519] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:18:02.478  BaseBdev2
00:18:02.478   17:01:55	-- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2
00:18:02.478   17:01:55	-- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2
00:18:02.478   17:01:55	-- common/autotest_common.sh@898 -- # local bdev_timeout=
00:18:02.478   17:01:55	-- common/autotest_common.sh@899 -- # local i
00:18:02.478   17:01:55	-- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:18:02.478   17:01:55	-- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:18:02.478   17:01:55	-- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:18:02.736   17:01:55	-- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000
00:18:02.994  [
00:18:02.994    {
00:18:02.994      "name": "BaseBdev2",
00:18:02.994      "aliases": [
00:18:02.994        "f450aed7-4f4b-461e-b2ff-aa04d39b287d"
00:18:02.994      ],
00:18:02.994      "product_name": "Malloc disk",
00:18:02.994      "block_size": 512,
00:18:02.994      "num_blocks": 65536,
00:18:02.994      "uuid": "f450aed7-4f4b-461e-b2ff-aa04d39b287d",
00:18:02.994      "assigned_rate_limits": {
00:18:02.994        "rw_ios_per_sec": 0,
00:18:02.994        "rw_mbytes_per_sec": 0,
00:18:02.994        "r_mbytes_per_sec": 0,
00:18:02.994        "w_mbytes_per_sec": 0
00:18:02.994      },
00:18:02.994      "claimed": true,
00:18:02.994      "claim_type": "exclusive_write",
00:18:02.994      "zoned": false,
00:18:02.994      "supported_io_types": {
00:18:02.994        "read": true,
00:18:02.994        "write": true,
00:18:02.994        "unmap": true,
00:18:02.994        "write_zeroes": true,
00:18:02.994        "flush": true,
00:18:02.994        "reset": true,
00:18:02.994        "compare": false,
00:18:02.994        "compare_and_write": false,
00:18:02.994        "abort": true,
00:18:02.994        "nvme_admin": false,
00:18:02.994        "nvme_io": false
00:18:02.994      },
00:18:02.994      "memory_domains": [
00:18:02.994        {
00:18:02.994          "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:18:02.994          "dma_device_type": 2
00:18:02.994        }
00:18:02.994      ],
00:18:02.994      "driver_specific": {}
00:18:02.994    }
00:18:02.994  ]
00:18:02.994   17:01:55	-- common/autotest_common.sh@905 -- # return 0
00:18:02.994   17:01:55	-- bdev/bdev_raid.sh@254 -- # (( i++ ))
00:18:02.994   17:01:55	-- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs ))
00:18:02.994   17:01:55	-- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4
00:18:02.994   17:01:55	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:18:02.994   17:01:55	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:18:02.994   17:01:55	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:18:02.994   17:01:55	-- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:18:02.994   17:01:55	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:18:02.994   17:01:55	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:18:02.994   17:01:55	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:18:02.994   17:01:55	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:18:02.994   17:01:55	-- bdev/bdev_raid.sh@125 -- # local tmp
00:18:02.994    17:01:55	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:18:02.994    17:01:55	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:18:03.252   17:01:55	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:18:03.252    "name": "Existed_Raid",
00:18:03.252    "uuid": "f1e22a8c-9c74-4c88-95e8-7452ccb0a945",
00:18:03.252    "strip_size_kb": 0,
00:18:03.252    "state": "configuring",
00:18:03.252    "raid_level": "raid1",
00:18:03.252    "superblock": true,
00:18:03.252    "num_base_bdevs": 4,
00:18:03.252    "num_base_bdevs_discovered": 2,
00:18:03.252    "num_base_bdevs_operational": 4,
00:18:03.252    "base_bdevs_list": [
00:18:03.252      {
00:18:03.252        "name": "BaseBdev1",
00:18:03.252        "uuid": "d98a6734-92cf-45db-915f-8379f5a26760",
00:18:03.252        "is_configured": true,
00:18:03.252        "data_offset": 2048,
00:18:03.252        "data_size": 63488
00:18:03.252      },
00:18:03.252      {
00:18:03.252        "name": "BaseBdev2",
00:18:03.252        "uuid": "f450aed7-4f4b-461e-b2ff-aa04d39b287d",
00:18:03.252        "is_configured": true,
00:18:03.252        "data_offset": 2048,
00:18:03.252        "data_size": 63488
00:18:03.252      },
00:18:03.252      {
00:18:03.252        "name": "BaseBdev3",
00:18:03.252        "uuid": "00000000-0000-0000-0000-000000000000",
00:18:03.252        "is_configured": false,
00:18:03.252        "data_offset": 0,
00:18:03.252        "data_size": 0
00:18:03.252      },
00:18:03.252      {
00:18:03.252        "name": "BaseBdev4",
00:18:03.252        "uuid": "00000000-0000-0000-0000-000000000000",
00:18:03.252        "is_configured": false,
00:18:03.252        "data_offset": 0,
00:18:03.252        "data_size": 0
00:18:03.252      }
00:18:03.252    ]
00:18:03.252  }'
00:18:03.252   17:01:55	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:18:03.252   17:01:55	-- common/autotest_common.sh@10 -- # set +x
00:18:03.819   17:01:56	-- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3
00:18:04.077  [2024-11-19 17:01:56.775041] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:18:04.077  BaseBdev3
00:18:04.077   17:01:56	-- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3
00:18:04.077   17:01:56	-- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3
00:18:04.077   17:01:56	-- common/autotest_common.sh@898 -- # local bdev_timeout=
00:18:04.077   17:01:56	-- common/autotest_common.sh@899 -- # local i
00:18:04.077   17:01:56	-- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:18:04.077   17:01:56	-- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:18:04.077   17:01:56	-- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:18:04.335   17:01:57	-- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000
00:18:04.593  [
00:18:04.593    {
00:18:04.593      "name": "BaseBdev3",
00:18:04.593      "aliases": [
00:18:04.593        "637971ab-6413-4fa3-b046-633c418c5e8b"
00:18:04.593      ],
00:18:04.593      "product_name": "Malloc disk",
00:18:04.593      "block_size": 512,
00:18:04.593      "num_blocks": 65536,
00:18:04.593      "uuid": "637971ab-6413-4fa3-b046-633c418c5e8b",
00:18:04.593      "assigned_rate_limits": {
00:18:04.593        "rw_ios_per_sec": 0,
00:18:04.593        "rw_mbytes_per_sec": 0,
00:18:04.593        "r_mbytes_per_sec": 0,
00:18:04.593        "w_mbytes_per_sec": 0
00:18:04.593      },
00:18:04.593      "claimed": true,
00:18:04.593      "claim_type": "exclusive_write",
00:18:04.593      "zoned": false,
00:18:04.593      "supported_io_types": {
00:18:04.593        "read": true,
00:18:04.593        "write": true,
00:18:04.593        "unmap": true,
00:18:04.593        "write_zeroes": true,
00:18:04.593        "flush": true,
00:18:04.594        "reset": true,
00:18:04.594        "compare": false,
00:18:04.594        "compare_and_write": false,
00:18:04.594        "abort": true,
00:18:04.594        "nvme_admin": false,
00:18:04.594        "nvme_io": false
00:18:04.594      },
00:18:04.594      "memory_domains": [
00:18:04.594        {
00:18:04.594          "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:18:04.594          "dma_device_type": 2
00:18:04.594        }
00:18:04.594      ],
00:18:04.594      "driver_specific": {}
00:18:04.594    }
00:18:04.594  ]
00:18:04.594   17:01:57	-- common/autotest_common.sh@905 -- # return 0
00:18:04.594   17:01:57	-- bdev/bdev_raid.sh@254 -- # (( i++ ))
00:18:04.594   17:01:57	-- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs ))
00:18:04.594   17:01:57	-- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4
00:18:04.594   17:01:57	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:18:04.594   17:01:57	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:18:04.594   17:01:57	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:18:04.594   17:01:57	-- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:18:04.594   17:01:57	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:18:04.594   17:01:57	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:18:04.594   17:01:57	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:18:04.594   17:01:57	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:18:04.594   17:01:57	-- bdev/bdev_raid.sh@125 -- # local tmp
00:18:04.594    17:01:57	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:18:04.594    17:01:57	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:18:04.852   17:01:57	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:18:04.852    "name": "Existed_Raid",
00:18:04.852    "uuid": "f1e22a8c-9c74-4c88-95e8-7452ccb0a945",
00:18:04.852    "strip_size_kb": 0,
00:18:04.852    "state": "configuring",
00:18:04.852    "raid_level": "raid1",
00:18:04.852    "superblock": true,
00:18:04.852    "num_base_bdevs": 4,
00:18:04.852    "num_base_bdevs_discovered": 3,
00:18:04.852    "num_base_bdevs_operational": 4,
00:18:04.852    "base_bdevs_list": [
00:18:04.852      {
00:18:04.852        "name": "BaseBdev1",
00:18:04.852        "uuid": "d98a6734-92cf-45db-915f-8379f5a26760",
00:18:04.852        "is_configured": true,
00:18:04.852        "data_offset": 2048,
00:18:04.852        "data_size": 63488
00:18:04.852      },
00:18:04.852      {
00:18:04.852        "name": "BaseBdev2",
00:18:04.852        "uuid": "f450aed7-4f4b-461e-b2ff-aa04d39b287d",
00:18:04.852        "is_configured": true,
00:18:04.852        "data_offset": 2048,
00:18:04.852        "data_size": 63488
00:18:04.852      },
00:18:04.852      {
00:18:04.852        "name": "BaseBdev3",
00:18:04.852        "uuid": "637971ab-6413-4fa3-b046-633c418c5e8b",
00:18:04.852        "is_configured": true,
00:18:04.852        "data_offset": 2048,
00:18:04.852        "data_size": 63488
00:18:04.852      },
00:18:04.852      {
00:18:04.852        "name": "BaseBdev4",
00:18:04.852        "uuid": "00000000-0000-0000-0000-000000000000",
00:18:04.852        "is_configured": false,
00:18:04.852        "data_offset": 0,
00:18:04.852        "data_size": 0
00:18:04.852      }
00:18:04.852    ]
00:18:04.852  }'
00:18:04.852   17:01:57	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:18:04.852   17:01:57	-- common/autotest_common.sh@10 -- # set +x
00:18:05.418   17:01:58	-- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4
00:18:05.675  [2024-11-19 17:01:58.288784] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed
00:18:05.675  [2024-11-19 17:01:58.289328] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006680
00:18:05.675  [2024-11-19 17:01:58.289484] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:18:05.675  [2024-11-19 17:01:58.289696] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000021f0
00:18:05.675  [2024-11-19 17:01:58.290249] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006680
00:18:05.676  [2024-11-19 17:01:58.290361] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006680
00:18:05.676  [2024-11-19 17:01:58.290618] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:18:05.676  BaseBdev4
00:18:05.676   17:01:58	-- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4
00:18:05.676   17:01:58	-- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4
00:18:05.676   17:01:58	-- common/autotest_common.sh@898 -- # local bdev_timeout=
00:18:05.676   17:01:58	-- common/autotest_common.sh@899 -- # local i
00:18:05.676   17:01:58	-- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:18:05.676   17:01:58	-- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:18:05.676   17:01:58	-- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:18:05.934   17:01:58	-- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000
00:18:05.934  [
00:18:05.934    {
00:18:05.934      "name": "BaseBdev4",
00:18:05.934      "aliases": [
00:18:05.934        "a4c2d45b-36c3-4d6b-ad08-0a80f32087f4"
00:18:05.934      ],
00:18:05.934      "product_name": "Malloc disk",
00:18:05.934      "block_size": 512,
00:18:05.934      "num_blocks": 65536,
00:18:05.934      "uuid": "a4c2d45b-36c3-4d6b-ad08-0a80f32087f4",
00:18:05.934      "assigned_rate_limits": {
00:18:05.934        "rw_ios_per_sec": 0,
00:18:05.934        "rw_mbytes_per_sec": 0,
00:18:05.934        "r_mbytes_per_sec": 0,
00:18:05.934        "w_mbytes_per_sec": 0
00:18:05.934      },
00:18:05.934      "claimed": true,
00:18:05.934      "claim_type": "exclusive_write",
00:18:05.934      "zoned": false,
00:18:05.934      "supported_io_types": {
00:18:05.934        "read": true,
00:18:05.934        "write": true,
00:18:05.934        "unmap": true,
00:18:05.934        "write_zeroes": true,
00:18:05.934        "flush": true,
00:18:05.934        "reset": true,
00:18:05.934        "compare": false,
00:18:05.934        "compare_and_write": false,
00:18:05.934        "abort": true,
00:18:05.934        "nvme_admin": false,
00:18:05.934        "nvme_io": false
00:18:05.934      },
00:18:05.934      "memory_domains": [
00:18:05.934        {
00:18:05.934          "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:18:05.934          "dma_device_type": 2
00:18:05.934        }
00:18:05.934      ],
00:18:05.934      "driver_specific": {}
00:18:05.934    }
00:18:05.934  ]
00:18:05.934   17:01:58	-- common/autotest_common.sh@905 -- # return 0
00:18:05.934   17:01:58	-- bdev/bdev_raid.sh@254 -- # (( i++ ))
00:18:05.934   17:01:58	-- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs ))
00:18:05.934   17:01:58	-- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4
00:18:05.934   17:01:58	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:18:05.934   17:01:58	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:18:05.934   17:01:58	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:18:05.934   17:01:58	-- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:18:05.934   17:01:58	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:18:05.934   17:01:58	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:18:05.934   17:01:58	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:18:05.934   17:01:58	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:18:05.934   17:01:58	-- bdev/bdev_raid.sh@125 -- # local tmp
00:18:05.934    17:01:58	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:18:05.934    17:01:58	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:18:06.191   17:01:58	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:18:06.192    "name": "Existed_Raid",
00:18:06.192    "uuid": "f1e22a8c-9c74-4c88-95e8-7452ccb0a945",
00:18:06.192    "strip_size_kb": 0,
00:18:06.192    "state": "online",
00:18:06.192    "raid_level": "raid1",
00:18:06.192    "superblock": true,
00:18:06.192    "num_base_bdevs": 4,
00:18:06.192    "num_base_bdevs_discovered": 4,
00:18:06.192    "num_base_bdevs_operational": 4,
00:18:06.192    "base_bdevs_list": [
00:18:06.192      {
00:18:06.192        "name": "BaseBdev1",
00:18:06.192        "uuid": "d98a6734-92cf-45db-915f-8379f5a26760",
00:18:06.192        "is_configured": true,
00:18:06.192        "data_offset": 2048,
00:18:06.192        "data_size": 63488
00:18:06.192      },
00:18:06.192      {
00:18:06.192        "name": "BaseBdev2",
00:18:06.192        "uuid": "f450aed7-4f4b-461e-b2ff-aa04d39b287d",
00:18:06.192        "is_configured": true,
00:18:06.192        "data_offset": 2048,
00:18:06.192        "data_size": 63488
00:18:06.192      },
00:18:06.192      {
00:18:06.192        "name": "BaseBdev3",
00:18:06.192        "uuid": "637971ab-6413-4fa3-b046-633c418c5e8b",
00:18:06.192        "is_configured": true,
00:18:06.192        "data_offset": 2048,
00:18:06.192        "data_size": 63488
00:18:06.192      },
00:18:06.192      {
00:18:06.192        "name": "BaseBdev4",
00:18:06.192        "uuid": "a4c2d45b-36c3-4d6b-ad08-0a80f32087f4",
00:18:06.192        "is_configured": true,
00:18:06.192        "data_offset": 2048,
00:18:06.192        "data_size": 63488
00:18:06.192      }
00:18:06.192    ]
00:18:06.192  }'
00:18:06.192   17:01:58	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:18:06.192   17:01:58	-- common/autotest_common.sh@10 -- # set +x
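Annotation: once BaseBdev4 is claimed, the raid module registers the io device and the state flips from "configuring" to "online" (4 of 4 discovered and operational in the dump above). A hypothetical polling helper for that transition — not part of the suite, just a sketch over the same RPC the harness uses:

    # Hypothetical helper: poll until the named raid bdev reaches the wanted
    # state, as in the configuring -> online flip traced above.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    wait_for_raid_state() {
        local name=$1 want=$2 i state
        for ((i = 0; i < 20; i++)); do
            state=$("$rpc" -s "$sock" bdev_raid_get_bdevs all |
                    jq -r --arg n "$name" '.[] | select(.name == $n) | .state')
            [[ $state == "$want" ]] && return 0
            sleep 0.5
        done
        return 1
    }
    wait_for_raid_state Existed_Raid online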
00:18:06.758   17:01:59	-- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1
00:18:07.015  [2024-11-19 17:01:59.837340] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:18:07.273   17:01:59	-- bdev/bdev_raid.sh@263 -- # local expected_state
00:18:07.273   17:01:59	-- bdev/bdev_raid.sh@264 -- # has_redundancy raid1
00:18:07.273   17:01:59	-- bdev/bdev_raid.sh@195 -- # case $1 in
00:18:07.273   17:01:59	-- bdev/bdev_raid.sh@196 -- # return 0
00:18:07.273   17:01:59	-- bdev/bdev_raid.sh@267 -- # expected_state=online
00:18:07.273   17:01:59	-- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3
00:18:07.273   17:01:59	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:18:07.273   17:01:59	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:18:07.273   17:01:59	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:18:07.273   17:01:59	-- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:18:07.273   17:01:59	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:18:07.273   17:01:59	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:18:07.273   17:01:59	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:18:07.273   17:01:59	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:18:07.273   17:01:59	-- bdev/bdev_raid.sh@125 -- # local tmp
00:18:07.273    17:01:59	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:18:07.273    17:01:59	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:18:07.273   17:02:00	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:18:07.273    "name": "Existed_Raid",
00:18:07.273    "uuid": "f1e22a8c-9c74-4c88-95e8-7452ccb0a945",
00:18:07.273    "strip_size_kb": 0,
00:18:07.273    "state": "online",
00:18:07.273    "raid_level": "raid1",
00:18:07.273    "superblock": true,
00:18:07.273    "num_base_bdevs": 4,
00:18:07.273    "num_base_bdevs_discovered": 3,
00:18:07.273    "num_base_bdevs_operational": 3,
00:18:07.273    "base_bdevs_list": [
00:18:07.273      {
00:18:07.273        "name": null,
00:18:07.273        "uuid": "00000000-0000-0000-0000-000000000000",
00:18:07.273        "is_configured": false,
00:18:07.273        "data_offset": 2048,
00:18:07.273        "data_size": 63488
00:18:07.273      },
00:18:07.273      {
00:18:07.273        "name": "BaseBdev2",
00:18:07.273        "uuid": "f450aed7-4f4b-461e-b2ff-aa04d39b287d",
00:18:07.273        "is_configured": true,
00:18:07.273        "data_offset": 2048,
00:18:07.273        "data_size": 63488
00:18:07.273      },
00:18:07.273      {
00:18:07.273        "name": "BaseBdev3",
00:18:07.273        "uuid": "637971ab-6413-4fa3-b046-633c418c5e8b",
00:18:07.273        "is_configured": true,
00:18:07.273        "data_offset": 2048,
00:18:07.274        "data_size": 63488
00:18:07.274      },
00:18:07.274      {
00:18:07.274        "name": "BaseBdev4",
00:18:07.274        "uuid": "a4c2d45b-36c3-4d6b-ad08-0a80f32087f4",
00:18:07.274        "is_configured": true,
00:18:07.274        "data_offset": 2048,
00:18:07.274        "data_size": 63488
00:18:07.274      }
00:18:07.274    ]
00:18:07.274  }'
00:18:07.274   17:02:00	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:18:07.274   17:02:00	-- common/autotest_common.sh@10 -- # set +x
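Annotation: bdev_malloc_delete removed a live member, but raid1 has redundancy (has_redundancy returned 0 above), so the expected state stays "online": the array degrades to 3 discovered / 3 operational and the vacated slot is reported with a null name and all-zero uuid. An equivalent standalone check, with names and socket as in this log:

    # raid1 survives the loss of one member: after deleting BaseBdev1 the array
    # must still be online with 3 of 4 slots configured.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    "$rpc" -s "$sock" bdev_malloc_delete BaseBdev1
    "$rpc" -s "$sock" bdev_raid_get_bdevs all |
        jq -e '.[] | select(.name == "Existed_Raid")
               | .state == "online" and .num_base_bdevs_discovered == 3' >/dev/null ||
        echo "raid1 did not stay online after losing one member" >&2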
00:18:08.233   17:02:00	-- bdev/bdev_raid.sh@273 -- # (( i = 1 ))
00:18:08.233   17:02:00	-- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs ))
00:18:08.233    17:02:00	-- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:18:08.233    17:02:00	-- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]'
00:18:08.233   17:02:00	-- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid
00:18:08.233   17:02:00	-- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:18:08.233   17:02:00	-- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2
00:18:08.491  [2024-11-19 17:02:01.239441] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:18:08.491   17:02:01	-- bdev/bdev_raid.sh@273 -- # (( i++ ))
00:18:08.491   17:02:01	-- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs ))
00:18:08.491    17:02:01	-- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:18:08.491    17:02:01	-- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]'
00:18:08.749   17:02:01	-- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid
00:18:08.749   17:02:01	-- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:18:08.749   17:02:01	-- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3
00:18:09.007  [2024-11-19 17:02:01.688417] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:18:09.007   17:02:01	-- bdev/bdev_raid.sh@273 -- # (( i++ ))
00:18:09.007   17:02:01	-- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs ))
00:18:09.008    17:02:01	-- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:18:09.008    17:02:01	-- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]'
00:18:09.266   17:02:01	-- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid
00:18:09.266   17:02:01	-- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:18:09.266   17:02:01	-- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4
00:18:09.523  [2024-11-19 17:02:02.177238] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4
00:18:09.523  [2024-11-19 17:02:02.177495] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:18:09.523  [2024-11-19 17:02:02.177674] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:18:09.523  [2024-11-19 17:02:02.190638] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:18:09.523  [2024-11-19 17:02:02.190908] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state offline
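Annotation: deleting the last member drives the deconfigure path: state goes online -> offline, the io device is unregistered ("base bdevs is 0, going to free all in destruct"), and the raid bdev frees itself. The empty-list check the script performs next is equivalent to this sketch (same jq filter the harness uses):

    # Confirm no raid bdev is left once all members are gone.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    left=$("$rpc" -s "$sock" bdev_raid_get_bdevs all | jq -r '.[0]["name"] | select(.)')
    [[ -z $left ]] && echo "all raid bdevs cleaned up"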
00:18:09.523   17:02:02	-- bdev/bdev_raid.sh@273 -- # (( i++ ))
00:18:09.523   17:02:02	-- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs ))
00:18:09.524    17:02:02	-- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:18:09.524    17:02:02	-- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)'
00:18:09.781   17:02:02	-- bdev/bdev_raid.sh@281 -- # raid_bdev=
00:18:09.781   17:02:02	-- bdev/bdev_raid.sh@282 -- # '[' -n '' ']'
00:18:09.781   17:02:02	-- bdev/bdev_raid.sh@287 -- # killprocess 131557
00:18:09.781   17:02:02	-- common/autotest_common.sh@936 -- # '[' -z 131557 ']'
00:18:09.781   17:02:02	-- common/autotest_common.sh@940 -- # kill -0 131557
00:18:09.781    17:02:02	-- common/autotest_common.sh@941 -- # uname
00:18:09.781   17:02:02	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:18:09.781    17:02:02	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 131557
00:18:09.781   17:02:02	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:18:09.781   17:02:02	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:18:09.781   17:02:02	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 131557'
00:18:09.781  killing process with pid 131557
00:18:09.781   17:02:02	-- common/autotest_common.sh@955 -- # kill 131557
00:18:09.781   17:02:02	-- common/autotest_common.sh@960 -- # wait 131557
00:18:09.781  [2024-11-19 17:02:02.582200] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:18:09.781  [2024-11-19 17:02:02.582289] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:18:10.039   17:02:02	-- bdev/bdev_raid.sh@289 -- # return 0
00:18:10.039  
00:18:10.039  real	0m14.384s
00:18:10.039  user	0m25.730s
00:18:10.039  sys	0m2.470s
00:18:10.039   17:02:02	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:18:10.039   17:02:02	-- common/autotest_common.sh@10 -- # set +x
00:18:10.039  ************************************
00:18:10.039  END TEST raid_state_function_test_sb
00:18:10.039  ************************************
00:18:10.297   17:02:02	-- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid1 4
00:18:10.297   17:02:02	-- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']'
00:18:10.297   17:02:02	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:18:10.297   17:02:02	-- common/autotest_common.sh@10 -- # set +x
00:18:10.297  ************************************
00:18:10.297  START TEST raid_superblock_test
00:18:10.297  ************************************
00:18:10.297   17:02:02	-- common/autotest_common.sh@1114 -- # raid_superblock_test raid1 4
00:18:10.297   17:02:02	-- bdev/bdev_raid.sh@338 -- # local raid_level=raid1
00:18:10.297   17:02:02	-- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=4
00:18:10.297   17:02:02	-- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=()
00:18:10.297   17:02:02	-- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc
00:18:10.297   17:02:02	-- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=()
00:18:10.297   17:02:02	-- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt
00:18:10.297   17:02:02	-- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=()
00:18:10.297   17:02:02	-- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid
00:18:10.297   17:02:02	-- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1
00:18:10.297   17:02:02	-- bdev/bdev_raid.sh@344 -- # local strip_size
00:18:10.297   17:02:02	-- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg
00:18:10.297   17:02:02	-- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid
00:18:10.297   17:02:02	-- bdev/bdev_raid.sh@347 -- # local raid_bdev
00:18:10.297   17:02:02	-- bdev/bdev_raid.sh@349 -- # '[' raid1 '!=' raid1 ']'
00:18:10.297   17:02:02	-- bdev/bdev_raid.sh@353 -- # strip_size=0
00:18:10.297   17:02:02	-- bdev/bdev_raid.sh@357 -- # raid_pid=132006
00:18:10.297   17:02:02	-- bdev/bdev_raid.sh@358 -- # waitforlisten 132006 /var/tmp/spdk-raid.sock
00:18:10.297   17:02:02	-- common/autotest_common.sh@829 -- # '[' -z 132006 ']'
00:18:10.297   17:02:02	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock
00:18:10.297   17:02:02	-- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid
00:18:10.297   17:02:02	-- common/autotest_common.sh@834 -- # local max_retries=100
00:18:10.297   17:02:02	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...'
00:18:10.297  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...
00:18:10.297   17:02:02	-- common/autotest_common.sh@838 -- # xtrace_disable
00:18:10.297   17:02:02	-- common/autotest_common.sh@10 -- # set +x
00:18:10.297  [2024-11-19 17:02:02.973297] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:18:10.297  [2024-11-19 17:02:02.973683] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid132006 ]
00:18:10.297  [2024-11-19 17:02:03.120665] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:10.556  [2024-11-19 17:02:03.174668] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:18:10.556  [2024-11-19 17:02:03.223262] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:18:11.122   17:02:03	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:18:11.122   17:02:03	-- common/autotest_common.sh@862 -- # return 0
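Annotation: raid_superblock_test starts a bare bdev_svc application with '-L bdev_raid', which is why the raid module's *DEBUG* lines appear throughout this test, then blocks until the RPC socket answers. A simplified bring-up sketch — a stand-in for waitforlisten, assuming the same binaries and socket as this log; rpc_get_methods is used here only as a cheap probe RPC:

    app=/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    "$app" -r "$sock" -L bdev_raid &    # enable debug logging for the raid module
    raid_pid=$!
    # Stand-in for waitforlisten: retry a trivial RPC until the socket is up.
    until "$rpc" -s "$sock" rpc_get_methods >/dev/null 2>&1; do sleep 0.2; done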
00:18:11.122   17:02:03	-- bdev/bdev_raid.sh@361 -- # (( i = 1 ))
00:18:11.122   17:02:03	-- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs ))
00:18:11.122   17:02:03	-- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1
00:18:11.122   17:02:03	-- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1
00:18:11.122   17:02:03	-- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001
00:18:11.122   17:02:03	-- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc)
00:18:11.122   17:02:03	-- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt)
00:18:11.122   17:02:03	-- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:18:11.122   17:02:03	-- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1
00:18:11.380  malloc1
00:18:11.380   17:02:04	-- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:18:11.638  [2024-11-19 17:02:04.285687] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:18:11.638  [2024-11-19 17:02:04.286047] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:18:11.638  [2024-11-19 17:02:04.286153] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000005a80
00:18:11.638  [2024-11-19 17:02:04.286306] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:18:11.638  [2024-11-19 17:02:04.289394] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:18:11.638  [2024-11-19 17:02:04.289644] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:18:11.638  pt1
00:18:11.638   17:02:04	-- bdev/bdev_raid.sh@361 -- # (( i++ ))
00:18:11.638   17:02:04	-- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs ))
00:18:11.638   17:02:04	-- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2
00:18:11.638   17:02:04	-- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2
00:18:11.638   17:02:04	-- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002
00:18:11.638   17:02:04	-- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc)
00:18:11.638   17:02:04	-- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt)
00:18:11.638   17:02:04	-- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:18:11.638   17:02:04	-- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2
00:18:11.896  malloc2
00:18:11.896   17:02:04	-- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:18:11.896  [2024-11-19 17:02:04.739290] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:18:11.896  [2024-11-19 17:02:04.739749] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:18:11.896  [2024-11-19 17:02:04.739867] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680
00:18:11.896  [2024-11-19 17:02:04.740061] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:18:11.896  [2024-11-19 17:02:04.744006] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:18:11.896  [2024-11-19 17:02:04.744256] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:18:11.896  pt2
00:18:12.155   17:02:04	-- bdev/bdev_raid.sh@361 -- # (( i++ ))
00:18:12.155   17:02:04	-- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs ))
00:18:12.155   17:02:04	-- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3
00:18:12.155   17:02:04	-- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3
00:18:12.155   17:02:04	-- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003
00:18:12.155   17:02:04	-- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc)
00:18:12.155   17:02:04	-- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt)
00:18:12.155   17:02:04	-- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:18:12.155   17:02:04	-- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3
00:18:12.155  malloc3
00:18:12.155   17:02:04	-- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:18:12.413  [2024-11-19 17:02:05.224795] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:18:12.413  [2024-11-19 17:02:05.225167] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:18:12.413  [2024-11-19 17:02:05.225258] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
00:18:12.413  [2024-11-19 17:02:05.225388] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:18:12.413  [2024-11-19 17:02:05.228336] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:18:12.413  [2024-11-19 17:02:05.228510] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:18:12.413  pt3
00:18:12.413   17:02:05	-- bdev/bdev_raid.sh@361 -- # (( i++ ))
00:18:12.413   17:02:05	-- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs ))
00:18:12.413   17:02:05	-- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc4
00:18:12.413   17:02:05	-- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt4
00:18:12.413   17:02:05	-- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004
00:18:12.413   17:02:05	-- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc)
00:18:12.413   17:02:05	-- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt)
00:18:12.413   17:02:05	-- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:18:12.413   17:02:05	-- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4
00:18:12.671  malloc4
00:18:12.671   17:02:05	-- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004
00:18:12.930  [2024-11-19 17:02:05.713523] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4
00:18:12.930  [2024-11-19 17:02:05.713949] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:18:12.930  [2024-11-19 17:02:05.714039] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80
00:18:12.930  [2024-11-19 17:02:05.714189] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:18:12.930  [2024-11-19 17:02:05.717673] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:18:12.930  [2024-11-19 17:02:05.717892] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4
00:18:12.930  pt4
00:18:12.930   17:02:05	-- bdev/bdev_raid.sh@361 -- # (( i++ ))
00:18:12.930   17:02:05	-- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs ))
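Annotation: each member is built as a malloc bdev wrapped in a passthru bdev with a fixed UUID (pt1..pt4), so the superblock written in the next step lands on the passthru layer and, through it, on the backing malloc disk. The four traced iterations above condense to one loop (sizes, names and UUIDs exactly as in this log):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    for i in 1 2 3 4; do
        "$rpc" -s "$sock" bdev_malloc_create 32 512 -b "malloc$i"
        "$rpc" -s "$sock" bdev_passthru_create -b "malloc$i" -p "pt$i" \
               -u "00000000-0000-0000-0000-00000000000$i"
    done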
00:18:12.930   17:02:05	-- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s
00:18:13.189  [2024-11-19 17:02:05.918411] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:18:13.189  [2024-11-19 17:02:05.921169] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:18:13.189  [2024-11-19 17:02:05.921388] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:18:13.189  [2024-11-19 17:02:05.921462] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed
00:18:13.189  [2024-11-19 17:02:05.921788] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008480
00:18:13.189  [2024-11-19 17:02:05.921887] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:18:13.189  [2024-11-19 17:02:05.922105] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460
00:18:13.189  [2024-11-19 17:02:05.922646] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008480
00:18:13.189  [2024-11-19 17:02:05.922759] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008480
00:18:13.189  [2024-11-19 17:02:05.923056] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
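Annotation: bdev_raid_create is invoked with -s here, so an on-disk superblock is written to every member; that is why the dump below reports data_offset 2048 and data_size 63488 per base bdev (65536 blocks total minus the 2048 blocks reserved at the front). The command, as issued in this log:

    # -s writes a superblock to each member, reserving 2048 of its 65536 blocks.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    "$rpc" -s "$sock" bdev_raid_create -s -r raid1 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1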
00:18:13.189   17:02:05	-- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4
00:18:13.189   17:02:05	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:18:13.189   17:02:05	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:18:13.189   17:02:05	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:18:13.189   17:02:05	-- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:18:13.189   17:02:05	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:18:13.189   17:02:05	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:18:13.189   17:02:05	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:18:13.189   17:02:05	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:18:13.189   17:02:05	-- bdev/bdev_raid.sh@125 -- # local tmp
00:18:13.189    17:02:05	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:18:13.189    17:02:05	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:18:13.448   17:02:06	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:18:13.448    "name": "raid_bdev1",
00:18:13.448    "uuid": "04c55518-e151-48c7-9ff4-89deffcf6b53",
00:18:13.448    "strip_size_kb": 0,
00:18:13.448    "state": "online",
00:18:13.448    "raid_level": "raid1",
00:18:13.448    "superblock": true,
00:18:13.448    "num_base_bdevs": 4,
00:18:13.448    "num_base_bdevs_discovered": 4,
00:18:13.448    "num_base_bdevs_operational": 4,
00:18:13.448    "base_bdevs_list": [
00:18:13.448      {
00:18:13.448        "name": "pt1",
00:18:13.448        "uuid": "1f6fa19d-f40c-5cb4-af8e-117d209f6153",
00:18:13.448        "is_configured": true,
00:18:13.448        "data_offset": 2048,
00:18:13.448        "data_size": 63488
00:18:13.448      },
00:18:13.448      {
00:18:13.448        "name": "pt2",
00:18:13.448        "uuid": "dbd139a2-f219-5140-aeec-bc8038a07318",
00:18:13.448        "is_configured": true,
00:18:13.448        "data_offset": 2048,
00:18:13.448        "data_size": 63488
00:18:13.448      },
00:18:13.448      {
00:18:13.448        "name": "pt3",
00:18:13.448        "uuid": "655c1801-2b22-53ea-a892-93ff167fa9cb",
00:18:13.448        "is_configured": true,
00:18:13.448        "data_offset": 2048,
00:18:13.448        "data_size": 63488
00:18:13.448      },
00:18:13.448      {
00:18:13.448        "name": "pt4",
00:18:13.448        "uuid": "8bbf4cc1-2b87-516e-86b5-e0c1ced11e83",
00:18:13.448        "is_configured": true,
00:18:13.448        "data_offset": 2048,
00:18:13.448        "data_size": 63488
00:18:13.448      }
00:18:13.448    ]
00:18:13.448  }'
00:18:13.448   17:02:06	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:18:13.448   17:02:06	-- common/autotest_common.sh@10 -- # set +x
00:18:14.014    17:02:06	-- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1
00:18:14.014    17:02:06	-- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid'
00:18:14.272  [2024-11-19 17:02:07.083554] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:18:14.272   17:02:07	-- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=04c55518-e151-48c7-9ff4-89deffcf6b53
00:18:14.272   17:02:07	-- bdev/bdev_raid.sh@380 -- # '[' -z 04c55518-e151-48c7-9ff4-89deffcf6b53 ']'
00:18:14.272   17:02:07	-- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1
00:18:14.530  [2024-11-19 17:02:07.291376] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:18:14.530  [2024-11-19 17:02:07.291696] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:18:14.530  [2024-11-19 17:02:07.291969] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:18:14.530  [2024-11-19 17:02:07.292202] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:18:14.530  [2024-11-19 17:02:07.292297] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008480 name raid_bdev1, state offline
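Annotation: bdev_raid_delete tears the array down (online -> offline, then destruct) but does not erase the superblocks already written through pt1..pt4; the remainder of the test depends on those surviving on disk. A standalone sketch of this step:

    # Delete the array and confirm nothing is listed anymore; the superblocks
    # on the members remain intact.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    "$rpc" -s "$sock" bdev_raid_delete raid_bdev1
    "$rpc" -s "$sock" bdev_raid_get_bdevs all | jq -r '.[]'   # expect no output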
00:18:14.530    17:02:07	-- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:18:14.530    17:02:07	-- bdev/bdev_raid.sh@386 -- # jq -r '.[]'
00:18:14.788   17:02:07	-- bdev/bdev_raid.sh@386 -- # raid_bdev=
00:18:14.788   17:02:07	-- bdev/bdev_raid.sh@387 -- # '[' -n '' ']'
00:18:14.788   17:02:07	-- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}"
00:18:14.788   17:02:07	-- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1
00:18:15.045   17:02:07	-- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}"
00:18:15.045   17:02:07	-- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2
00:18:15.303   17:02:07	-- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}"
00:18:15.303   17:02:07	-- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3
00:18:15.303   17:02:08	-- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}"
00:18:15.303   17:02:08	-- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4
00:18:15.561    17:02:08	-- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any'
00:18:15.561    17:02:08	-- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs
00:18:15.819   17:02:08	-- bdev/bdev_raid.sh@395 -- # '[' false == true ']'
00:18:16.077   17:02:08	-- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1
00:18:16.077   17:02:08	-- common/autotest_common.sh@650 -- # local es=0
00:18:16.077   17:02:08	-- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1
00:18:16.077   17:02:08	-- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:18:16.077   17:02:08	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:18:16.077    17:02:08	-- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:18:16.077   17:02:08	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:18:16.077    17:02:08	-- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:18:16.077   17:02:08	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:18:16.077   17:02:08	-- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:18:16.078   17:02:08	-- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]]
00:18:16.078   17:02:08	-- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1
00:18:16.078  [2024-11-19 17:02:08.855589] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed
00:18:16.078  [2024-11-19 17:02:08.858437] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
00:18:16.078  [2024-11-19 17:02:08.858645] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed
00:18:16.078  [2024-11-19 17:02:08.858711] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed
00:18:16.078  [2024-11-19 17:02:08.858890] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1
00:18:16.078  [2024-11-19 17:02:08.859077] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2
00:18:16.078  [2024-11-19 17:02:08.859143] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3
00:18:16.078  [2024-11-19 17:02:08.859355] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc4
00:18:16.078  [2024-11-19 17:02:08.859436] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:18:16.078  [2024-11-19 17:02:08.859468] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008a80 name raid_bdev1, state configuring
00:18:16.078  request:
00:18:16.078  {
00:18:16.078    "name": "raid_bdev1",
00:18:16.078    "raid_level": "raid1",
00:18:16.078    "base_bdevs": [
00:18:16.078      "malloc1",
00:18:16.078      "malloc2",
00:18:16.078      "malloc3",
00:18:16.078      "malloc4"
00:18:16.078    ],
00:18:16.078    "superblock": false,
00:18:16.078    "method": "bdev_raid_create",
00:18:16.078    "req_id": 1
00:18:16.078  }
00:18:16.078  Got JSON-RPC error response
00:18:16.078  response:
00:18:16.078  {
00:18:16.078    "code": -17,
00:18:16.078    "message": "Failed to create RAID bdev raid_bdev1: File exists"
00:18:16.078  }
00:18:16.078   17:02:08	-- common/autotest_common.sh@653 -- # es=1
00:18:16.078   17:02:08	-- common/autotest_common.sh@661 -- # (( es > 128 ))
00:18:16.078   17:02:08	-- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:18:16.078   17:02:08	-- common/autotest_common.sh@677 -- # (( !es == 0 ))
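Annotation: the NOT wrapper asserts that the RPC fails. With the passthru bdevs deleted, examine found the old superblocks directly on malloc1..malloc4 ("Existing raid superblock found on bdev malloc1" and so on above), so creating a fresh array over them is refused with -17 / "File exists"; es=1 is the expected outcome. An equivalent check, with paths as in this log:

    # The create must fail while stale superblocks sit on the malloc bdevs.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    if "$rpc" -s "$sock" bdev_raid_create -r raid1 \
           -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1; then
        echo "expected failure: superblocks still present" >&2
    fi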
00:18:16.078    17:02:08	-- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:18:16.078    17:02:08	-- bdev/bdev_raid.sh@403 -- # jq -r '.[]'
00:18:16.336   17:02:09	-- bdev/bdev_raid.sh@403 -- # raid_bdev=
00:18:16.336   17:02:09	-- bdev/bdev_raid.sh@404 -- # '[' -n '' ']'
00:18:16.336   17:02:09	-- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:18:16.594  [2024-11-19 17:02:09.315868] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:18:16.594  [2024-11-19 17:02:09.316220] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:18:16.594  [2024-11-19 17:02:09.316296] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080
00:18:16.594  [2024-11-19 17:02:09.316413] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:18:16.594  [2024-11-19 17:02:09.319222] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:18:16.594  [2024-11-19 17:02:09.319424] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:18:16.594  [2024-11-19 17:02:09.319616] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1
00:18:16.594  [2024-11-19 17:02:09.319809] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:18:16.594  pt1
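Annotation: re-creating pt1 over malloc1 triggers bdev examine; the superblock is found on pt1 ("raid superblock found on bdev pt1" above) and the bdev is claimed back into raid_bdev1, which begins re-assembling incrementally — the dump below shows state "configuring" with 1 of 4 members discovered. A standalone sketch of this re-add:

    # Re-add one member and watch the array re-assemble from its superblock.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    "$rpc" -s "$sock" bdev_passthru_create -b malloc1 -p pt1 \
           -u 00000000-0000-0000-0000-000000000001
    "$rpc" -s "$sock" bdev_raid_get_bdevs all |
        jq -r '.[] | select(.name == "raid_bdev1") | .state'   # -> configuring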
00:18:16.594   17:02:09	-- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4
00:18:16.594   17:02:09	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:18:16.594   17:02:09	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:18:16.594   17:02:09	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:18:16.594   17:02:09	-- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:18:16.594   17:02:09	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:18:16.594   17:02:09	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:18:16.594   17:02:09	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:18:16.595   17:02:09	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:18:16.595   17:02:09	-- bdev/bdev_raid.sh@125 -- # local tmp
00:18:16.595    17:02:09	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:18:16.595    17:02:09	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:18:16.853   17:02:09	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:18:16.853    "name": "raid_bdev1",
00:18:16.853    "uuid": "04c55518-e151-48c7-9ff4-89deffcf6b53",
00:18:16.853    "strip_size_kb": 0,
00:18:16.853    "state": "configuring",
00:18:16.853    "raid_level": "raid1",
00:18:16.853    "superblock": true,
00:18:16.853    "num_base_bdevs": 4,
00:18:16.853    "num_base_bdevs_discovered": 1,
00:18:16.853    "num_base_bdevs_operational": 4,
00:18:16.853    "base_bdevs_list": [
00:18:16.853      {
00:18:16.853        "name": "pt1",
00:18:16.853        "uuid": "1f6fa19d-f40c-5cb4-af8e-117d209f6153",
00:18:16.853        "is_configured": true,
00:18:16.853        "data_offset": 2048,
00:18:16.853        "data_size": 63488
00:18:16.853      },
00:18:16.853      {
00:18:16.853        "name": null,
00:18:16.853        "uuid": "dbd139a2-f219-5140-aeec-bc8038a07318",
00:18:16.853        "is_configured": false,
00:18:16.853        "data_offset": 2048,
00:18:16.853        "data_size": 63488
00:18:16.853      },
00:18:16.853      {
00:18:16.853        "name": null,
00:18:16.853        "uuid": "655c1801-2b22-53ea-a892-93ff167fa9cb",
00:18:16.853        "is_configured": false,
00:18:16.853        "data_offset": 2048,
00:18:16.853        "data_size": 63488
00:18:16.853      },
00:18:16.853      {
00:18:16.853        "name": null,
00:18:16.853        "uuid": "8bbf4cc1-2b87-516e-86b5-e0c1ced11e83",
00:18:16.853        "is_configured": false,
00:18:16.853        "data_offset": 2048,
00:18:16.853        "data_size": 63488
00:18:16.853      }
00:18:16.853    ]
00:18:16.853  }'
00:18:16.853   17:02:09	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:18:16.853   17:02:09	-- common/autotest_common.sh@10 -- # set +x
00:18:17.419   17:02:10	-- bdev/bdev_raid.sh@414 -- # '[' 4 -gt 2 ']'
00:18:17.419   17:02:10	-- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:18:17.677  [2024-11-19 17:02:10.499470] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:18:17.677  [2024-11-19 17:02:10.499951] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:18:17.677  [2024-11-19 17:02:10.500087] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980
00:18:17.677  [2024-11-19 17:02:10.500380] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:18:17.677  [2024-11-19 17:02:10.501002] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:18:17.677  [2024-11-19 17:02:10.501142] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:18:17.677  [2024-11-19 17:02:10.501312] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2
00:18:17.677  [2024-11-19 17:02:10.501373] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:18:17.677  pt2
00:18:17.677   17:02:10	-- bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2
00:18:17.936  [2024-11-19 17:02:10.767516] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2
00:18:17.936   17:02:10	-- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4
00:18:17.936   17:02:10	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:18:17.936   17:02:10	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:18:17.936   17:02:10	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:18:17.936   17:02:10	-- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:18:17.936   17:02:10	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:18:17.936   17:02:10	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:18:17.936   17:02:10	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:18:17.936   17:02:10	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:18:17.936   17:02:10	-- bdev/bdev_raid.sh@125 -- # local tmp
00:18:18.195    17:02:10	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:18:18.195    17:02:10	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:18:18.195   17:02:10	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:18:18.195    "name": "raid_bdev1",
00:18:18.195    "uuid": "04c55518-e151-48c7-9ff4-89deffcf6b53",
00:18:18.195    "strip_size_kb": 0,
00:18:18.195    "state": "configuring",
00:18:18.195    "raid_level": "raid1",
00:18:18.195    "superblock": true,
00:18:18.195    "num_base_bdevs": 4,
00:18:18.195    "num_base_bdevs_discovered": 1,
00:18:18.195    "num_base_bdevs_operational": 4,
00:18:18.195    "base_bdevs_list": [
00:18:18.195      {
00:18:18.195        "name": "pt1",
00:18:18.195        "uuid": "1f6fa19d-f40c-5cb4-af8e-117d209f6153",
00:18:18.195        "is_configured": true,
00:18:18.195        "data_offset": 2048,
00:18:18.195        "data_size": 63488
00:18:18.195      },
00:18:18.195      {
00:18:18.195        "name": null,
00:18:18.195        "uuid": "dbd139a2-f219-5140-aeec-bc8038a07318",
00:18:18.195        "is_configured": false,
00:18:18.195        "data_offset": 2048,
00:18:18.195        "data_size": 63488
00:18:18.195      },
00:18:18.195      {
00:18:18.195        "name": null,
00:18:18.195        "uuid": "655c1801-2b22-53ea-a892-93ff167fa9cb",
00:18:18.195        "is_configured": false,
00:18:18.195        "data_offset": 2048,
00:18:18.195        "data_size": 63488
00:18:18.195      },
00:18:18.195      {
00:18:18.195        "name": null,
00:18:18.195        "uuid": "8bbf4cc1-2b87-516e-86b5-e0c1ced11e83",
00:18:18.195        "is_configured": false,
00:18:18.195        "data_offset": 2048,
00:18:18.195        "data_size": 63488
00:18:18.195      }
00:18:18.195    ]
00:18:18.195  }'
00:18:18.195   17:02:10	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:18:18.195   17:02:10	-- common/autotest_common.sh@10 -- # set +x
00:18:19.128   17:02:11	-- bdev/bdev_raid.sh@422 -- # (( i = 1 ))
00:18:19.128   17:02:11	-- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs ))
00:18:19.128   17:02:11	-- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:18:19.128  [2024-11-19 17:02:11.803656] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:18:19.128  [2024-11-19 17:02:11.803986] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:18:19.128  [2024-11-19 17:02:11.804065] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80
00:18:19.128  [2024-11-19 17:02:11.804179] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:18:19.128  [2024-11-19 17:02:11.804651] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:18:19.128  [2024-11-19 17:02:11.804799] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:18:19.128  [2024-11-19 17:02:11.804912] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2
00:18:19.128  [2024-11-19 17:02:11.804957] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:18:19.128  pt2
00:18:19.128   17:02:11	-- bdev/bdev_raid.sh@422 -- # (( i++ ))
00:18:19.128   17:02:11	-- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs ))
00:18:19.128   17:02:11	-- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:18:19.128  [2024-11-19 17:02:11.983737] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:18:19.128  [2024-11-19 17:02:11.984062] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:18:19.128  [2024-11-19 17:02:11.984136] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80
00:18:19.128  [2024-11-19 17:02:11.984259] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:18:19.128  [2024-11-19 17:02:11.984724] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:18:19.128  [2024-11-19 17:02:11.984898] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:18:19.128  [2024-11-19 17:02:11.985069] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3
00:18:19.128  [2024-11-19 17:02:11.985175] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:18:19.387  pt3
00:18:19.387   17:02:11	-- bdev/bdev_raid.sh@422 -- # (( i++ ))
00:18:19.387   17:02:11	-- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs ))
00:18:19.387   17:02:11	-- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004
00:18:19.387  [2024-11-19 17:02:12.163738] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4
00:18:19.387  [2024-11-19 17:02:12.164030] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:18:19.387  [2024-11-19 17:02:12.164139] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280
00:18:19.387  [2024-11-19 17:02:12.164230] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:18:19.387  [2024-11-19 17:02:12.164677] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:18:19.387  [2024-11-19 17:02:12.164861] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4
00:18:19.387  [2024-11-19 17:02:12.165050] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4
00:18:19.387  [2024-11-19 17:02:12.165166] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed
00:18:19.387  [2024-11-19 17:02:12.165349] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009680
00:18:19.387  [2024-11-19 17:02:12.165452] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:18:19.387  [2024-11-19 17:02:12.165574] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002940
00:18:19.387  [2024-11-19 17:02:12.166022] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009680
00:18:19.387  [2024-11-19 17:02:12.166123] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009680
00:18:19.387  [2024-11-19 17:02:12.166282] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:18:19.387  pt4
00:18:19.387   17:02:12	-- bdev/bdev_raid.sh@422 -- # (( i++ ))
00:18:19.387   17:02:12	-- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs ))
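The loop above re-registers pt2 through pt4; each passthru carries a raid superblock, so the examine path claims it on sight, and registering the last member assembles the array and brings it online (the "raid bdev is created with name raid_bdev1" line). A sketch of that loop, reconstructed from the trace with num_base_bdevs=4 as in this test:

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    num_base_bdevs=4
    for ((i = 1; i < num_base_bdevs; i++)); do
        n=$((i + 1))   # members pt2..pt4; pt1 is already registered
        $rpc bdev_passthru_create -b "malloc$n" -p "pt$n" \
            -u "00000000-0000-0000-0000-00000000000$n"
    done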
00:18:19.387   17:02:12	-- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4
00:18:19.387   17:02:12	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:18:19.387   17:02:12	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:18:19.387   17:02:12	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:18:19.387   17:02:12	-- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:18:19.387   17:02:12	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:18:19.387   17:02:12	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:18:19.387   17:02:12	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:18:19.387   17:02:12	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:18:19.387   17:02:12	-- bdev/bdev_raid.sh@125 -- # local tmp
00:18:19.387    17:02:12	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:18:19.387    17:02:12	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:18:19.646   17:02:12	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:18:19.646    "name": "raid_bdev1",
00:18:19.646    "uuid": "04c55518-e151-48c7-9ff4-89deffcf6b53",
00:18:19.646    "strip_size_kb": 0,
00:18:19.646    "state": "online",
00:18:19.646    "raid_level": "raid1",
00:18:19.646    "superblock": true,
00:18:19.646    "num_base_bdevs": 4,
00:18:19.646    "num_base_bdevs_discovered": 4,
00:18:19.646    "num_base_bdevs_operational": 4,
00:18:19.646    "base_bdevs_list": [
00:18:19.646      {
00:18:19.646        "name": "pt1",
00:18:19.646        "uuid": "1f6fa19d-f40c-5cb4-af8e-117d209f6153",
00:18:19.646        "is_configured": true,
00:18:19.646        "data_offset": 2048,
00:18:19.646        "data_size": 63488
00:18:19.646      },
00:18:19.646      {
00:18:19.646        "name": "pt2",
00:18:19.646        "uuid": "dbd139a2-f219-5140-aeec-bc8038a07318",
00:18:19.646        "is_configured": true,
00:18:19.646        "data_offset": 2048,
00:18:19.646        "data_size": 63488
00:18:19.646      },
00:18:19.646      {
00:18:19.646        "name": "pt3",
00:18:19.647        "uuid": "655c1801-2b22-53ea-a892-93ff167fa9cb",
00:18:19.647        "is_configured": true,
00:18:19.647        "data_offset": 2048,
00:18:19.647        "data_size": 63488
00:18:19.647      },
00:18:19.647      {
00:18:19.647        "name": "pt4",
00:18:19.647        "uuid": "8bbf4cc1-2b87-516e-86b5-e0c1ced11e83",
00:18:19.647        "is_configured": true,
00:18:19.647        "data_offset": 2048,
00:18:19.647        "data_size": 63488
00:18:19.647      }
00:18:19.647    ]
00:18:19.647  }'
00:18:19.647   17:02:12	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:18:19.647   17:02:12	-- common/autotest_common.sh@10 -- # set +x
00:18:20.582    17:02:13	-- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1
00:18:20.582    17:02:13	-- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid'
00:18:20.582  [2024-11-19 17:02:13.291528] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:18:20.582   17:02:13	-- bdev/bdev_raid.sh@430 -- # '[' 04c55518-e151-48c7-9ff4-89deffcf6b53 '!=' 04c55518-e151-48c7-9ff4-89deffcf6b53 ']'
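This one-liner asserts UUID stability: the uuid reported for the assembled raid_bdev1 must equal the one recorded when the array was first created. The same check standalone, with the uuid from the trace:

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    got=$($rpc bdev_get_bdevs -b raid_bdev1 | jq -r '.[] | .uuid')
    [[ $got == 04c55518-e151-48c7-9ff4-89deffcf6b53 ]] || echo "raid uuid changed"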
00:18:20.582   17:02:13	-- bdev/bdev_raid.sh@434 -- # has_redundancy raid1
00:18:20.582   17:02:13	-- bdev/bdev_raid.sh@195 -- # case $1 in
00:18:20.582   17:02:13	-- bdev/bdev_raid.sh@196 -- # return 0
00:18:20.582   17:02:13	-- bdev/bdev_raid.sh@436 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1
00:18:20.840  [2024-11-19 17:02:13.491388] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt1
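Because raid1 has redundancy (has_redundancy returned 0 above), the suite removes pt1 from the online array and expects it to stay online with 3 of 4 members, as the dump below confirms. A hypothetical jq one-liner, not part of the test script, that counts the still-configured members after the removal:

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    $rpc bdev_raid_get_bdevs all | jq -r \
        '.[] | select(.name == "raid_bdev1")
             | [.base_bdevs_list[] | select(.is_configured)] | length'   # expect 3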
00:18:20.840   17:02:13	-- bdev/bdev_raid.sh@439 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3
00:18:20.840   17:02:13	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:18:20.840   17:02:13	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:18:20.840   17:02:13	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:18:20.840   17:02:13	-- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:18:20.840   17:02:13	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:18:20.840   17:02:13	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:18:20.840   17:02:13	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:18:20.840   17:02:13	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:18:20.840   17:02:13	-- bdev/bdev_raid.sh@125 -- # local tmp
00:18:20.840    17:02:13	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:18:20.840    17:02:13	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:18:21.098   17:02:13	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:18:21.098    "name": "raid_bdev1",
00:18:21.098    "uuid": "04c55518-e151-48c7-9ff4-89deffcf6b53",
00:18:21.098    "strip_size_kb": 0,
00:18:21.098    "state": "online",
00:18:21.098    "raid_level": "raid1",
00:18:21.098    "superblock": true,
00:18:21.098    "num_base_bdevs": 4,
00:18:21.098    "num_base_bdevs_discovered": 3,
00:18:21.098    "num_base_bdevs_operational": 3,
00:18:21.098    "base_bdevs_list": [
00:18:21.098      {
00:18:21.098        "name": null,
00:18:21.098        "uuid": "00000000-0000-0000-0000-000000000000",
00:18:21.098        "is_configured": false,
00:18:21.098        "data_offset": 2048,
00:18:21.098        "data_size": 63488
00:18:21.098      },
00:18:21.098      {
00:18:21.098        "name": "pt2",
00:18:21.098        "uuid": "dbd139a2-f219-5140-aeec-bc8038a07318",
00:18:21.098        "is_configured": true,
00:18:21.098        "data_offset": 2048,
00:18:21.098        "data_size": 63488
00:18:21.098      },
00:18:21.098      {
00:18:21.098        "name": "pt3",
00:18:21.098        "uuid": "655c1801-2b22-53ea-a892-93ff167fa9cb",
00:18:21.098        "is_configured": true,
00:18:21.098        "data_offset": 2048,
00:18:21.098        "data_size": 63488
00:18:21.098      },
00:18:21.098      {
00:18:21.098        "name": "pt4",
00:18:21.098        "uuid": "8bbf4cc1-2b87-516e-86b5-e0c1ced11e83",
00:18:21.098        "is_configured": true,
00:18:21.098        "data_offset": 2048,
00:18:21.098        "data_size": 63488
00:18:21.098      }
00:18:21.098    ]
00:18:21.098  }'
00:18:21.098   17:02:13	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:18:21.098   17:02:13	-- common/autotest_common.sh@10 -- # set +x
00:18:21.665   17:02:14	-- bdev/bdev_raid.sh@442 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1
00:18:21.924  [2024-11-19 17:02:14.643572] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:18:21.924  [2024-11-19 17:02:14.643848] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:18:21.924  [2024-11-19 17:02:14.644013] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:18:21.924  [2024-11-19 17:02:14.644127] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:18:21.924  [2024-11-19 17:02:14.644445] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009680 name raid_bdev1, state offline
00:18:21.924    17:02:14	-- bdev/bdev_raid.sh@443 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:18:21.924    17:02:14	-- bdev/bdev_raid.sh@443 -- # jq -r '.[]'
00:18:22.237   17:02:14	-- bdev/bdev_raid.sh@443 -- # raid_bdev=
00:18:22.237   17:02:14	-- bdev/bdev_raid.sh@444 -- # '[' -n '' ']'
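bdev_raid_delete tears the array down (the state moves online to offline and the base bdevs are freed), after which the follow-up query must come back empty. The same check standalone:

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    $rpc bdev_raid_delete raid_bdev1
    left=$($rpc bdev_raid_get_bdevs all | jq -r '.[]')
    [[ -z $left ]] || echo "raid bdev still present after delete"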
00:18:22.237   17:02:14	-- bdev/bdev_raid.sh@449 -- # (( i = 1 ))
00:18:22.237   17:02:14	-- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs ))
00:18:22.237   17:02:14	-- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2
00:18:22.496   17:02:15	-- bdev/bdev_raid.sh@449 -- # (( i++ ))
00:18:22.496   17:02:15	-- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs ))
00:18:22.496   17:02:15	-- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3
00:18:22.754   17:02:15	-- bdev/bdev_raid.sh@449 -- # (( i++ ))
00:18:22.754   17:02:15	-- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs ))
00:18:22.754   17:02:15	-- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4
00:18:23.012   17:02:15	-- bdev/bdev_raid.sh@449 -- # (( i++ ))
00:18:23.012   17:02:15	-- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs ))
00:18:23.012   17:02:15	-- bdev/bdev_raid.sh@454 -- # (( i = 1 ))
00:18:23.012   17:02:15	-- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 ))
00:18:23.012   17:02:15	-- bdev/bdev_raid.sh@455 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:18:23.270  [2024-11-19 17:02:15.933194] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:18:23.270  [2024-11-19 17:02:15.933645] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:18:23.270  [2024-11-19 17:02:15.933729] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580
00:18:23.270  [2024-11-19 17:02:15.933867] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:18:23.270  [2024-11-19 17:02:15.936725] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:18:23.270  [2024-11-19 17:02:15.936968] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:18:23.270  [2024-11-19 17:02:15.937224] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2
00:18:23.270  [2024-11-19 17:02:15.937365] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:18:23.270  pt2
00:18:23.270   17:02:15	-- bdev/bdev_raid.sh@458 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3
00:18:23.270   17:02:15	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:18:23.270   17:02:15	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:18:23.270   17:02:15	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:18:23.270   17:02:15	-- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:18:23.270   17:02:15	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:18:23.270   17:02:15	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:18:23.270   17:02:15	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:18:23.270   17:02:15	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:18:23.270   17:02:15	-- bdev/bdev_raid.sh@125 -- # local tmp
00:18:23.270    17:02:15	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:18:23.270    17:02:15	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:18:23.528   17:02:16	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:18:23.528    "name": "raid_bdev1",
00:18:23.528    "uuid": "04c55518-e151-48c7-9ff4-89deffcf6b53",
00:18:23.528    "strip_size_kb": 0,
00:18:23.528    "state": "configuring",
00:18:23.528    "raid_level": "raid1",
00:18:23.528    "superblock": true,
00:18:23.528    "num_base_bdevs": 4,
00:18:23.528    "num_base_bdevs_discovered": 1,
00:18:23.528    "num_base_bdevs_operational": 3,
00:18:23.528    "base_bdevs_list": [
00:18:23.528      {
00:18:23.528        "name": null,
00:18:23.528        "uuid": "00000000-0000-0000-0000-000000000000",
00:18:23.528        "is_configured": false,
00:18:23.528        "data_offset": 2048,
00:18:23.528        "data_size": 63488
00:18:23.528      },
00:18:23.528      {
00:18:23.528        "name": "pt2",
00:18:23.528        "uuid": "dbd139a2-f219-5140-aeec-bc8038a07318",
00:18:23.528        "is_configured": true,
00:18:23.528        "data_offset": 2048,
00:18:23.528        "data_size": 63488
00:18:23.528      },
00:18:23.528      {
00:18:23.528        "name": null,
00:18:23.528        "uuid": "655c1801-2b22-53ea-a892-93ff167fa9cb",
00:18:23.528        "is_configured": false,
00:18:23.528        "data_offset": 2048,
00:18:23.528        "data_size": 63488
00:18:23.528      },
00:18:23.528      {
00:18:23.528        "name": null,
00:18:23.528        "uuid": "8bbf4cc1-2b87-516e-86b5-e0c1ced11e83",
00:18:23.528        "is_configured": false,
00:18:23.528        "data_offset": 2048,
00:18:23.528        "data_size": 63488
00:18:23.528      }
00:18:23.528    ]
00:18:23.528  }'
00:18:23.528   17:02:16	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:18:23.528   17:02:16	-- common/autotest_common.sh@10 -- # set +x
00:18:24.095   17:02:16	-- bdev/bdev_raid.sh@454 -- # (( i++ ))
00:18:24.095   17:02:16	-- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 ))
00:18:24.095   17:02:16	-- bdev/bdev_raid.sh@455 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:18:24.354  [2024-11-19 17:02:16.977514] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:18:24.354  [2024-11-19 17:02:16.977921] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:18:24.354  [2024-11-19 17:02:16.978033] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80
00:18:24.354  [2024-11-19 17:02:16.978153] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:18:24.354  [2024-11-19 17:02:16.978740] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:18:24.354  [2024-11-19 17:02:16.978950] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:18:24.354  [2024-11-19 17:02:16.979188] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3
00:18:24.354  [2024-11-19 17:02:16.979315] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:18:24.354  pt3
00:18:24.354   17:02:16	-- bdev/bdev_raid.sh@458 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3
00:18:24.354   17:02:16	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:18:24.354   17:02:16	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:18:24.354   17:02:16	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:18:24.354   17:02:16	-- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:18:24.354   17:02:16	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:18:24.354   17:02:16	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:18:24.354   17:02:16	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:18:24.354   17:02:17	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:18:24.354   17:02:17	-- bdev/bdev_raid.sh@125 -- # local tmp
00:18:24.354    17:02:17	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:18:24.354    17:02:17	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:18:24.613   17:02:17	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:18:24.613    "name": "raid_bdev1",
00:18:24.613    "uuid": "04c55518-e151-48c7-9ff4-89deffcf6b53",
00:18:24.613    "strip_size_kb": 0,
00:18:24.613    "state": "configuring",
00:18:24.613    "raid_level": "raid1",
00:18:24.613    "superblock": true,
00:18:24.613    "num_base_bdevs": 4,
00:18:24.613    "num_base_bdevs_discovered": 2,
00:18:24.613    "num_base_bdevs_operational": 3,
00:18:24.613    "base_bdevs_list": [
00:18:24.613      {
00:18:24.613        "name": null,
00:18:24.613        "uuid": "00000000-0000-0000-0000-000000000000",
00:18:24.613        "is_configured": false,
00:18:24.613        "data_offset": 2048,
00:18:24.613        "data_size": 63488
00:18:24.613      },
00:18:24.613      {
00:18:24.613        "name": "pt2",
00:18:24.613        "uuid": "dbd139a2-f219-5140-aeec-bc8038a07318",
00:18:24.613        "is_configured": true,
00:18:24.613        "data_offset": 2048,
00:18:24.613        "data_size": 63488
00:18:24.613      },
00:18:24.613      {
00:18:24.613        "name": "pt3",
00:18:24.613        "uuid": "655c1801-2b22-53ea-a892-93ff167fa9cb",
00:18:24.613        "is_configured": true,
00:18:24.613        "data_offset": 2048,
00:18:24.613        "data_size": 63488
00:18:24.613      },
00:18:24.613      {
00:18:24.613        "name": null,
00:18:24.613        "uuid": "8bbf4cc1-2b87-516e-86b5-e0c1ced11e83",
00:18:24.613        "is_configured": false,
00:18:24.613        "data_offset": 2048,
00:18:24.613        "data_size": 63488
00:18:24.613      }
00:18:24.613    ]
00:18:24.613  }'
00:18:24.613   17:02:17	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:18:24.613   17:02:17	-- common/autotest_common.sh@10 -- # set +x
00:18:25.180   17:02:17	-- bdev/bdev_raid.sh@454 -- # (( i++ ))
00:18:25.180   17:02:17	-- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 ))
00:18:25.180   17:02:17	-- bdev/bdev_raid.sh@462 -- # i=3
00:18:25.180   17:02:17	-- bdev/bdev_raid.sh@463 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004
00:18:25.180  [2024-11-19 17:02:17.977714] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4
00:18:25.180  [2024-11-19 17:02:17.978105] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:18:25.180  [2024-11-19 17:02:17.978215] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180
00:18:25.180  [2024-11-19 17:02:17.978339] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:18:25.180  [2024-11-19 17:02:17.978945] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:18:25.180  [2024-11-19 17:02:17.979146] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4
00:18:25.180  [2024-11-19 17:02:17.979364] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4
00:18:25.180  [2024-11-19 17:02:17.979483] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed
00:18:25.180  [2024-11-19 17:02:17.979771] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000ab80
00:18:25.180  [2024-11-19 17:02:17.979891] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:18:25.180  [2024-11-19 17:02:17.980028] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002c80
00:18:25.180  [2024-11-19 17:02:17.980558] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000ab80
00:18:25.180  [2024-11-19 17:02:17.980683] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000ab80
00:18:25.180  [2024-11-19 17:02:17.980910] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:18:25.180  pt4
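This pass registers only pt2, pt3 and pt4 - pt1 is deliberately left out - and the array still assembles and goes online, since raid1 can operate degraded. A quick status probe (illustrative, not from the script):

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    $rpc bdev_raid_get_bdevs all | jq -r \
        '.[] | select(.name == "raid_bdev1")
             | "\(.state) \(.num_base_bdevs_discovered)/\(.num_base_bdevs)"'
    # expected here: "online 3/4"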
00:18:25.180   17:02:17	-- bdev/bdev_raid.sh@466 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3
00:18:25.180   17:02:17	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:18:25.180   17:02:17	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:18:25.180   17:02:17	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:18:25.180   17:02:17	-- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:18:25.180   17:02:17	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:18:25.180   17:02:17	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:18:25.180   17:02:17	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:18:25.180   17:02:17	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:18:25.180   17:02:17	-- bdev/bdev_raid.sh@125 -- # local tmp
00:18:25.180    17:02:17	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:18:25.180    17:02:17	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:18:25.439   17:02:18	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:18:25.439    "name": "raid_bdev1",
00:18:25.439    "uuid": "04c55518-e151-48c7-9ff4-89deffcf6b53",
00:18:25.439    "strip_size_kb": 0,
00:18:25.439    "state": "online",
00:18:25.439    "raid_level": "raid1",
00:18:25.439    "superblock": true,
00:18:25.439    "num_base_bdevs": 4,
00:18:25.439    "num_base_bdevs_discovered": 3,
00:18:25.439    "num_base_bdevs_operational": 3,
00:18:25.439    "base_bdevs_list": [
00:18:25.439      {
00:18:25.439        "name": null,
00:18:25.439        "uuid": "00000000-0000-0000-0000-000000000000",
00:18:25.439        "is_configured": false,
00:18:25.439        "data_offset": 2048,
00:18:25.439        "data_size": 63488
00:18:25.439      },
00:18:25.439      {
00:18:25.439        "name": "pt2",
00:18:25.439        "uuid": "dbd139a2-f219-5140-aeec-bc8038a07318",
00:18:25.439        "is_configured": true,
00:18:25.439        "data_offset": 2048,
00:18:25.439        "data_size": 63488
00:18:25.439      },
00:18:25.439      {
00:18:25.439        "name": "pt3",
00:18:25.439        "uuid": "655c1801-2b22-53ea-a892-93ff167fa9cb",
00:18:25.439        "is_configured": true,
00:18:25.439        "data_offset": 2048,
00:18:25.439        "data_size": 63488
00:18:25.439      },
00:18:25.439      {
00:18:25.439        "name": "pt4",
00:18:25.439        "uuid": "8bbf4cc1-2b87-516e-86b5-e0c1ced11e83",
00:18:25.439        "is_configured": true,
00:18:25.439        "data_offset": 2048,
00:18:25.439        "data_size": 63488
00:18:25.439      }
00:18:25.439    ]
00:18:25.439  }'
00:18:25.439   17:02:18	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:18:25.439   17:02:18	-- common/autotest_common.sh@10 -- # set +x
00:18:26.003   17:02:18	-- bdev/bdev_raid.sh@468 -- # '[' 4 -gt 2 ']'
00:18:26.003   17:02:18	-- bdev/bdev_raid.sh@470 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1
00:18:26.274  [2024-11-19 17:02:18.997923] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:18:26.274  [2024-11-19 17:02:18.998242] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:18:26.274  [2024-11-19 17:02:18.998445] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:18:26.274  [2024-11-19 17:02:18.998577] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:18:26.275  [2024-11-19 17:02:18.998774] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ab80 name raid_bdev1, state offline
00:18:26.275    17:02:19	-- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:18:26.275    17:02:19	-- bdev/bdev_raid.sh@471 -- # jq -r '.[]'
00:18:26.540   17:02:19	-- bdev/bdev_raid.sh@471 -- # raid_bdev=
00:18:26.540   17:02:19	-- bdev/bdev_raid.sh@472 -- # '[' -n '' ']'
00:18:26.540   17:02:19	-- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:18:26.808  [2024-11-19 17:02:19.542033] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:18:26.809  [2024-11-19 17:02:19.542426] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:18:26.809  [2024-11-19 17:02:19.542533] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480
00:18:26.809  [2024-11-19 17:02:19.542664] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:18:26.809  [2024-11-19 17:02:19.545801] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:18:26.809  [2024-11-19 17:02:19.546036] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:18:26.809  [2024-11-19 17:02:19.546247] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1
00:18:26.809  [2024-11-19 17:02:19.546394] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:18:26.809  pt1
00:18:26.809   17:02:19	-- bdev/bdev_raid.sh@481 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4
00:18:26.809   17:02:19	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:18:26.809   17:02:19	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:18:26.809   17:02:19	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:18:26.809   17:02:19	-- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:18:26.809   17:02:19	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:18:26.809   17:02:19	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:18:26.809   17:02:19	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:18:26.809   17:02:19	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:18:26.809   17:02:19	-- bdev/bdev_raid.sh@125 -- # local tmp
00:18:26.809    17:02:19	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:18:26.809    17:02:19	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:18:27.067   17:02:19	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:18:27.067    "name": "raid_bdev1",
00:18:27.067    "uuid": "04c55518-e151-48c7-9ff4-89deffcf6b53",
00:18:27.067    "strip_size_kb": 0,
00:18:27.067    "state": "configuring",
00:18:27.067    "raid_level": "raid1",
00:18:27.067    "superblock": true,
00:18:27.067    "num_base_bdevs": 4,
00:18:27.067    "num_base_bdevs_discovered": 1,
00:18:27.067    "num_base_bdevs_operational": 4,
00:18:27.067    "base_bdevs_list": [
00:18:27.067      {
00:18:27.067        "name": "pt1",
00:18:27.067        "uuid": "1f6fa19d-f40c-5cb4-af8e-117d209f6153",
00:18:27.067        "is_configured": true,
00:18:27.067        "data_offset": 2048,
00:18:27.067        "data_size": 63488
00:18:27.067      },
00:18:27.067      {
00:18:27.067        "name": null,
00:18:27.067        "uuid": "dbd139a2-f219-5140-aeec-bc8038a07318",
00:18:27.067        "is_configured": false,
00:18:27.067        "data_offset": 2048,
00:18:27.067        "data_size": 63488
00:18:27.067      },
00:18:27.067      {
00:18:27.067        "name": null,
00:18:27.067        "uuid": "655c1801-2b22-53ea-a892-93ff167fa9cb",
00:18:27.067        "is_configured": false,
00:18:27.067        "data_offset": 2048,
00:18:27.067        "data_size": 63488
00:18:27.067      },
00:18:27.067      {
00:18:27.067        "name": null,
00:18:27.067        "uuid": "8bbf4cc1-2b87-516e-86b5-e0c1ced11e83",
00:18:27.067        "is_configured": false,
00:18:27.067        "data_offset": 2048,
00:18:27.067        "data_size": 63488
00:18:27.067      }
00:18:27.067    ]
00:18:27.067  }'
00:18:27.067   17:02:19	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:18:27.067   17:02:19	-- common/autotest_common.sh@10 -- # set +x
00:18:27.634   17:02:20	-- bdev/bdev_raid.sh@484 -- # (( i = 1 ))
00:18:27.634   17:02:20	-- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs ))
00:18:27.634   17:02:20	-- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2
00:18:27.892   17:02:20	-- bdev/bdev_raid.sh@484 -- # (( i++ ))
00:18:27.892   17:02:20	-- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs ))
00:18:27.892   17:02:20	-- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3
00:18:28.151   17:02:20	-- bdev/bdev_raid.sh@484 -- # (( i++ ))
00:18:28.151   17:02:20	-- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs ))
00:18:28.151   17:02:20	-- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4
00:18:28.409   17:02:21	-- bdev/bdev_raid.sh@484 -- # (( i++ ))
00:18:28.409   17:02:21	-- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs ))
00:18:28.409   17:02:21	-- bdev/bdev_raid.sh@489 -- # i=3
00:18:28.409   17:02:21	-- bdev/bdev_raid.sh@490 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004
00:18:28.667  [2024-11-19 17:02:21.338704] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4
00:18:28.667  [2024-11-19 17:02:21.339167] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:18:28.667  [2024-11-19 17:02:21.339279] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80
00:18:28.667  [2024-11-19 17:02:21.339421] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:18:28.667  [2024-11-19 17:02:21.340025] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:18:28.667  [2024-11-19 17:02:21.340227] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4
00:18:28.667  [2024-11-19 17:02:21.340434] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4
00:18:28.667  [2024-11-19 17:02:21.340537] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt4 (4) greater than existing raid bdev raid_bdev1 (2)
00:18:28.667  [2024-11-19 17:02:21.340620] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:18:28.667  [2024-11-19 17:02:21.340693] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ba80 name raid_bdev1, state configuring
00:18:28.667  [2024-11-19 17:02:21.340959] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed
00:18:28.667  pt4
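The interesting line above is the seq_number comparison: pt1 had been re-registered and assembled a stale raid_bdev1 from superblock sequence 2, but pt4's superblock carries sequence 4, so the examine path deletes the stale array and re-configures around the newer metadata. The steps that set this up, condensed from the trace:

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    for n in 2 3 4; do $rpc bdev_passthru_delete "pt$n"; done   # leave only pt1
    # pt4's superblock (seq 4) outranks the raid assembled from pt1 (seq 2):
    $rpc bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004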
00:18:28.667   17:02:21	-- bdev/bdev_raid.sh@494 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3
00:18:28.667   17:02:21	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:18:28.667   17:02:21	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:18:28.667   17:02:21	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:18:28.667   17:02:21	-- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:18:28.667   17:02:21	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:18:28.667   17:02:21	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:18:28.667   17:02:21	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:18:28.667   17:02:21	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:18:28.667   17:02:21	-- bdev/bdev_raid.sh@125 -- # local tmp
00:18:28.667    17:02:21	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:18:28.667    17:02:21	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:18:28.925   17:02:21	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:18:28.925    "name": "raid_bdev1",
00:18:28.925    "uuid": "04c55518-e151-48c7-9ff4-89deffcf6b53",
00:18:28.925    "strip_size_kb": 0,
00:18:28.925    "state": "configuring",
00:18:28.925    "raid_level": "raid1",
00:18:28.925    "superblock": true,
00:18:28.925    "num_base_bdevs": 4,
00:18:28.925    "num_base_bdevs_discovered": 1,
00:18:28.925    "num_base_bdevs_operational": 3,
00:18:28.925    "base_bdevs_list": [
00:18:28.925      {
00:18:28.925        "name": null,
00:18:28.925        "uuid": "00000000-0000-0000-0000-000000000000",
00:18:28.925        "is_configured": false,
00:18:28.925        "data_offset": 2048,
00:18:28.925        "data_size": 63488
00:18:28.925      },
00:18:28.925      {
00:18:28.925        "name": null,
00:18:28.925        "uuid": "dbd139a2-f219-5140-aeec-bc8038a07318",
00:18:28.925        "is_configured": false,
00:18:28.925        "data_offset": 2048,
00:18:28.925        "data_size": 63488
00:18:28.925      },
00:18:28.925      {
00:18:28.925        "name": null,
00:18:28.925        "uuid": "655c1801-2b22-53ea-a892-93ff167fa9cb",
00:18:28.925        "is_configured": false,
00:18:28.925        "data_offset": 2048,
00:18:28.925        "data_size": 63488
00:18:28.925      },
00:18:28.925      {
00:18:28.925        "name": "pt4",
00:18:28.925        "uuid": "8bbf4cc1-2b87-516e-86b5-e0c1ced11e83",
00:18:28.925        "is_configured": true,
00:18:28.925        "data_offset": 2048,
00:18:28.925        "data_size": 63488
00:18:28.925      }
00:18:28.925    ]
00:18:28.925  }'
00:18:28.925   17:02:21	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:18:28.925   17:02:21	-- common/autotest_common.sh@10 -- # set +x
00:18:29.491   17:02:22	-- bdev/bdev_raid.sh@497 -- # (( i = 1 ))
00:18:29.491   17:02:22	-- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 ))
00:18:29.491   17:02:22	-- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:18:29.750  [2024-11-19 17:02:22.599058] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:18:29.750  [2024-11-19 17:02:22.599494] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:18:29.750  [2024-11-19 17:02:22.599682] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380
00:18:29.750  [2024-11-19 17:02:22.599831] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:18:29.750  [2024-11-19 17:02:22.600416] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:18:29.750  [2024-11-19 17:02:22.600605] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:18:29.750  [2024-11-19 17:02:22.600782] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2
00:18:29.750  [2024-11-19 17:02:22.600892] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:18:29.750  pt2
00:18:30.009   17:02:22	-- bdev/bdev_raid.sh@497 -- # (( i++ ))
00:18:30.009   17:02:22	-- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 ))
00:18:30.009   17:02:22	-- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:18:30.267  [2024-11-19 17:02:22.911148] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:18:30.267  [2024-11-19 17:02:22.911545] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:18:30.267  [2024-11-19 17:02:22.911628] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680
00:18:30.268  [2024-11-19 17:02:22.911911] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:18:30.268  [2024-11-19 17:02:22.912427] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:18:30.268  [2024-11-19 17:02:22.912616] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:18:30.268  [2024-11-19 17:02:22.912813] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3
00:18:30.268  [2024-11-19 17:02:22.912920] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:18:30.268  [2024-11-19 17:02:22.913098] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000c080
00:18:30.268  [2024-11-19 17:02:22.913238] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:18:30.268  [2024-11-19 17:02:22.913364] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000003090
00:18:30.268  [2024-11-19 17:02:22.913880] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000c080
00:18:30.268  [2024-11-19 17:02:22.914003] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000c080
00:18:30.268  [2024-11-19 17:02:22.914218] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:18:30.268  pt3
00:18:30.268   17:02:22	-- bdev/bdev_raid.sh@497 -- # (( i++ ))
00:18:30.268   17:02:22	-- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 ))
00:18:30.268   17:02:22	-- bdev/bdev_raid.sh@502 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3
00:18:30.268   17:02:22	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:18:30.268   17:02:22	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:18:30.268   17:02:22	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:18:30.268   17:02:22	-- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:18:30.268   17:02:22	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:18:30.268   17:02:22	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:18:30.268   17:02:22	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:18:30.268   17:02:22	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:18:30.268   17:02:22	-- bdev/bdev_raid.sh@125 -- # local tmp
00:18:30.268    17:02:22	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:18:30.268    17:02:22	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:18:30.527   17:02:23	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:18:30.527    "name": "raid_bdev1",
00:18:30.527    "uuid": "04c55518-e151-48c7-9ff4-89deffcf6b53",
00:18:30.527    "strip_size_kb": 0,
00:18:30.527    "state": "online",
00:18:30.527    "raid_level": "raid1",
00:18:30.527    "superblock": true,
00:18:30.527    "num_base_bdevs": 4,
00:18:30.527    "num_base_bdevs_discovered": 3,
00:18:30.527    "num_base_bdevs_operational": 3,
00:18:30.527    "base_bdevs_list": [
00:18:30.527      {
00:18:30.527        "name": null,
00:18:30.527        "uuid": "00000000-0000-0000-0000-000000000000",
00:18:30.527        "is_configured": false,
00:18:30.527        "data_offset": 2048,
00:18:30.527        "data_size": 63488
00:18:30.527      },
00:18:30.527      {
00:18:30.527        "name": "pt2",
00:18:30.527        "uuid": "dbd139a2-f219-5140-aeec-bc8038a07318",
00:18:30.527        "is_configured": true,
00:18:30.527        "data_offset": 2048,
00:18:30.527        "data_size": 63488
00:18:30.527      },
00:18:30.527      {
00:18:30.527        "name": "pt3",
00:18:30.527        "uuid": "655c1801-2b22-53ea-a892-93ff167fa9cb",
00:18:30.527        "is_configured": true,
00:18:30.527        "data_offset": 2048,
00:18:30.527        "data_size": 63488
00:18:30.527      },
00:18:30.527      {
00:18:30.527        "name": "pt4",
00:18:30.527        "uuid": "8bbf4cc1-2b87-516e-86b5-e0c1ced11e83",
00:18:30.527        "is_configured": true,
00:18:30.527        "data_offset": 2048,
00:18:30.527        "data_size": 63488
00:18:30.527      }
00:18:30.527    ]
00:18:30.527  }'
00:18:30.527   17:02:23	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:18:30.527   17:02:23	-- common/autotest_common.sh@10 -- # set +x
00:18:31.093    17:02:23	-- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1
00:18:31.093    17:02:23	-- bdev/bdev_raid.sh@506 -- # jq -r '.[] | .uuid'
00:18:31.351  [2024-11-19 17:02:24.049604] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:18:31.351   17:02:24	-- bdev/bdev_raid.sh@506 -- # '[' 04c55518-e151-48c7-9ff4-89deffcf6b53 '!=' 04c55518-e151-48c7-9ff4-89deffcf6b53 ']'
00:18:31.351   17:02:24	-- bdev/bdev_raid.sh@511 -- # killprocess 132006
00:18:31.351   17:02:24	-- common/autotest_common.sh@936 -- # '[' -z 132006 ']'
00:18:31.351   17:02:24	-- common/autotest_common.sh@940 -- # kill -0 132006
00:18:31.351    17:02:24	-- common/autotest_common.sh@941 -- # uname
00:18:31.351   17:02:24	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:18:31.351    17:02:24	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 132006
00:18:31.351  killing process with pid 132006
00:18:31.351   17:02:24	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:18:31.351   17:02:24	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:18:31.351   17:02:24	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 132006'
00:18:31.351   17:02:24	-- common/autotest_common.sh@955 -- # kill 132006
00:18:31.351   17:02:24	-- common/autotest_common.sh@960 -- # wait 132006
00:18:31.351  [2024-11-19 17:02:24.105038] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:18:31.351  [2024-11-19 17:02:24.105158] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:18:31.351  [2024-11-19 17:02:24.105253] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:18:31.351  [2024-11-19 17:02:24.105389] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000c080 name raid_bdev1, state offline
00:18:31.351  [2024-11-19 17:02:24.190352] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:18:31.920  ************************************
00:18:31.920  END TEST raid_superblock_test
00:18:31.920  ************************************
00:18:31.920   17:02:24	-- bdev/bdev_raid.sh@513 -- # return 0
00:18:31.920  
00:18:31.920  real	0m21.663s
00:18:31.920  user	0m39.468s
00:18:31.920  sys	0m3.529s
00:18:31.920   17:02:24	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:18:31.920   17:02:24	-- common/autotest_common.sh@10 -- # set +x
00:18:31.920   17:02:24	-- bdev/bdev_raid.sh@733 -- # '[' true = true ']'
00:18:31.920   17:02:24	-- bdev/bdev_raid.sh@734 -- # for n in 2 4
00:18:31.920   17:02:24	-- bdev/bdev_raid.sh@735 -- # run_test raid_rebuild_test raid_rebuild_test raid1 2 false false
00:18:31.920   17:02:24	-- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']'
00:18:31.920   17:02:24	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:18:31.920   17:02:24	-- common/autotest_common.sh@10 -- # set +x
00:18:31.920  ************************************
00:18:31.920  START TEST raid_rebuild_test
00:18:31.920  ************************************
00:18:31.920   17:02:24	-- common/autotest_common.sh@1114 -- # raid_rebuild_test raid1 2 false false
00:18:31.920   17:02:24	-- bdev/bdev_raid.sh@517 -- # local raid_level=raid1
00:18:31.920   17:02:24	-- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=2
00:18:31.920   17:02:24	-- bdev/bdev_raid.sh@519 -- # local superblock=false
00:18:31.920   17:02:24	-- bdev/bdev_raid.sh@520 -- # local background_io=false
00:18:31.920    17:02:24	-- bdev/bdev_raid.sh@521 -- # (( i = 1 ))
00:18:31.920    17:02:24	-- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs ))
00:18:31.920    17:02:24	-- bdev/bdev_raid.sh@521 -- # echo BaseBdev1
00:18:31.920    17:02:24	-- bdev/bdev_raid.sh@521 -- # (( i++ ))
00:18:31.920    17:02:24	-- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs ))
00:18:31.920    17:02:24	-- bdev/bdev_raid.sh@521 -- # echo BaseBdev2
00:18:31.920    17:02:24	-- bdev/bdev_raid.sh@521 -- # (( i++ ))
00:18:31.920    17:02:24	-- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs ))
00:18:31.920   17:02:24	-- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2')
00:18:31.920   17:02:24	-- bdev/bdev_raid.sh@521 -- # local base_bdevs
00:18:31.920   17:02:24	-- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1
00:18:31.920   17:02:24	-- bdev/bdev_raid.sh@523 -- # local strip_size
00:18:31.920   17:02:24	-- bdev/bdev_raid.sh@524 -- # local create_arg
00:18:31.920   17:02:24	-- bdev/bdev_raid.sh@525 -- # local raid_bdev_size
00:18:31.920   17:02:24	-- bdev/bdev_raid.sh@526 -- # local data_offset
00:18:31.920   17:02:24	-- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']'
00:18:31.920   17:02:24	-- bdev/bdev_raid.sh@536 -- # strip_size=0
00:18:31.920   17:02:24	-- bdev/bdev_raid.sh@539 -- # '[' false = true ']'
00:18:31.920   17:02:24	-- bdev/bdev_raid.sh@544 -- # raid_pid=132673
00:18:31.920   17:02:24	-- bdev/bdev_raid.sh@545 -- # waitforlisten 132673 /var/tmp/spdk-raid.sock
00:18:31.920   17:02:24	-- common/autotest_common.sh@829 -- # '[' -z 132673 ']'
00:18:31.920   17:02:24	-- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid
00:18:31.920   17:02:24	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock
00:18:31.920   17:02:24	-- common/autotest_common.sh@834 -- # local max_retries=100
00:18:31.920  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...
00:18:31.920   17:02:24	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...'
00:18:31.920   17:02:24	-- common/autotest_common.sh@838 -- # xtrace_disable
00:18:31.920   17:02:24	-- common/autotest_common.sh@10 -- # set +x
00:18:31.920  [2024-11-19 17:02:24.743493] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:18:31.920  [2024-11-19 17:02:24.744035] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid132673 ]
00:18:31.920  I/O size of 3145728 is greater than zero copy threshold (65536).
00:18:31.920  Zero copy mechanism will not be used.
00:18:32.178  [2024-11-19 17:02:24.903033] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:32.178  [2024-11-19 17:02:24.998732] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:18:32.436  [2024-11-19 17:02:25.090404] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
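raid_rebuild_test runs against bdevperf rather than a bare RPC target, so rebuild can later be exercised under live I/O; the 3 MiB request size is what triggers the zero-copy notice above (it exceeds the 65536-byte threshold). The launch line from the trace, with only the flags I am confident about annotated and the rest left exactly as traced:

    # -t 60: run for 60 s, -w randrw -M 50: 50/50 mixed I/O,
    # -o 3M -q 2: 3 MiB requests at queue depth 2.
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 \
        -o 3M -q 2 -U -z -L bdev_raid &
    raid_pid=$!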
00:18:33.003   17:02:25	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:18:33.003   17:02:25	-- common/autotest_common.sh@862 -- # return 0
00:18:33.003   17:02:25	-- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}"
00:18:33.003   17:02:25	-- bdev/bdev_raid.sh@549 -- # '[' false = true ']'
00:18:33.003   17:02:25	-- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1
00:18:33.261  BaseBdev1
00:18:33.261   17:02:26	-- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}"
00:18:33.261   17:02:26	-- bdev/bdev_raid.sh@549 -- # '[' false = true ']'
00:18:33.261   17:02:26	-- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2
00:18:33.519  BaseBdev2
00:18:33.519   17:02:26	-- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc
00:18:33.778  spare_malloc
00:18:33.778   17:02:26	-- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000
00:18:34.345  spare_delay
00:18:34.345   17:02:26	-- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare
00:18:34.345  [2024-11-19 17:02:27.166800] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay
00:18:34.345  [2024-11-19 17:02:27.167230] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:18:34.345  [2024-11-19 17:02:27.167362] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80
00:18:34.345  [2024-11-19 17:02:27.167744] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:18:34.345  [2024-11-19 17:02:27.170755] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:18:34.345  [2024-11-19 17:02:27.170991] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare
00:18:34.345  spare
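The "spare" device built above is a three-layer stack: a 32 MiB malloc bdev (65536 blocks at 512 B, matching the data_size reported later), a delay bdev over it with the latency arguments from the trace, and a passthru named spare on top. Condensed:

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    $rpc bdev_malloc_create 32 512 -b spare_malloc        # 32 MiB, 512 B blocks
    $rpc bdev_delay_create -b spare_malloc -d spare_delay \
        -r 0 -t 0 -w 100000 -n 100000                     # latency args as traced
    $rpc bdev_passthru_create -b spare_delay -p spare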
00:18:34.345   17:02:27	-- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1
00:18:34.612  [2024-11-19 17:02:27.399516] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:18:34.612  [2024-11-19 17:02:27.402232] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:18:34.612  [2024-11-19 17:02:27.402534] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280
00:18:34.612  [2024-11-19 17:02:27.402648] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512
00:18:34.612  [2024-11-19 17:02:27.402881] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002120
00:18:34.612  [2024-11-19 17:02:27.403448] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280
00:18:34.612  [2024-11-19 17:02:27.403577] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000007280
00:18:34.612  [2024-11-19 17:02:27.403922] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:18:34.612   17:02:27	-- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:18:34.612   17:02:27	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:18:34.612   17:02:27	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:18:34.612   17:02:27	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:18:34.612   17:02:27	-- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:18:34.612   17:02:27	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2
00:18:34.612   17:02:27	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:18:34.612   17:02:27	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:18:34.612   17:02:27	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:18:34.612   17:02:27	-- bdev/bdev_raid.sh@125 -- # local tmp
00:18:34.612    17:02:27	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:18:34.612    17:02:27	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:18:34.885   17:02:27	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:18:34.885    "name": "raid_bdev1",
00:18:34.885    "uuid": "98a32244-bad0-4b7a-8169-ebba9d8c25a4",
00:18:34.885    "strip_size_kb": 0,
00:18:34.885    "state": "online",
00:18:34.885    "raid_level": "raid1",
00:18:34.885    "superblock": false,
00:18:34.885    "num_base_bdevs": 2,
00:18:34.885    "num_base_bdevs_discovered": 2,
00:18:34.885    "num_base_bdevs_operational": 2,
00:18:34.885    "base_bdevs_list": [
00:18:34.885      {
00:18:34.885        "name": "BaseBdev1",
00:18:34.885        "uuid": "f467454e-fb73-4d99-9b2f-ab04e21abc8c",
00:18:34.885        "is_configured": true,
00:18:34.885        "data_offset": 0,
00:18:34.885        "data_size": 65536
00:18:34.885      },
00:18:34.885      {
00:18:34.885        "name": "BaseBdev2",
00:18:34.885        "uuid": "05525e9f-46b4-4f6c-a64f-1ae20125c5d8",
00:18:34.885        "is_configured": true,
00:18:34.885        "data_offset": 0,
00:18:34.885        "data_size": 65536
00:18:34.885      }
00:18:34.885    ]
00:18:34.885  }'
00:18:34.885   17:02:27	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:18:34.885   17:02:27	-- common/autotest_common.sh@10 -- # set +x
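
Note: verify_raid_bdev_state, expanded line by line above, reduces to one RPC and a handful of jq field comparisons. Condensed into the shape it effectively has (names and expected values are this test's own):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    info=$($rpc -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all |
           jq -r '.[] | select(.name == "raid_bdev1")')
    [[ $(jq -r .state      <<< "$info") == online ]]
    [[ $(jq -r .raid_level <<< "$info") == raid1 ]]
    [[ $(jq -r .num_base_bdevs_discovered  <<< "$info") == 2 ]]
    [[ $(jq -r .num_base_bdevs_operational <<< "$info") == 2 ]]
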
00:18:35.821    17:02:28	-- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks'
00:18:35.821    17:02:28	-- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1
00:18:36.081  [2024-11-19 17:02:28.692476] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:18:36.081   17:02:28	-- bdev/bdev_raid.sh@567 -- # raid_bdev_size=65536
00:18:36.081    17:02:28	-- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:18:36.081    17:02:28	-- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset'
00:18:36.339   17:02:28	-- bdev/bdev_raid.sh@570 -- # data_offset=0
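
Note: these two queries pin down the geometry used for the data fill: 65536 blocks of 512 bytes (32 MiB of mirrored capacity) and a data_offset of 0, which is what superblock-less members report. The -s variant later in this log shows the contrast (data_offset 2048, 63488 usable blocks). As one-liners:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    $rpc -s $sock bdev_get_bdevs -b raid_bdev1 | jq -r '.[].num_blocks'    # 65536
    $rpc -s $sock bdev_raid_get_bdevs all \
        | jq -r '.[].base_bdevs_list[0].data_offset'                       # 0
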
00:18:36.339   17:02:28	-- bdev/bdev_raid.sh@572 -- # '[' false = true ']'
00:18:36.339   17:02:28	-- bdev/bdev_raid.sh@576 -- # local write_unit_size
00:18:36.339   17:02:28	-- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0
00:18:36.339   17:02:28	-- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock
00:18:36.339   17:02:28	-- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1')
00:18:36.339   17:02:28	-- bdev/nbd_common.sh@10 -- # local bdev_list
00:18:36.339   17:02:28	-- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0')
00:18:36.339   17:02:28	-- bdev/nbd_common.sh@11 -- # local nbd_list
00:18:36.340   17:02:28	-- bdev/nbd_common.sh@12 -- # local i
00:18:36.340   17:02:28	-- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:18:36.340   17:02:28	-- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:18:36.340   17:02:28	-- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0
00:18:36.598  [2024-11-19 17:02:29.224526] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000022c0
00:18:36.598  /dev/nbd0
00:18:36.598    17:02:29	-- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:18:36.598   17:02:29	-- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:18:36.598   17:02:29	-- common/autotest_common.sh@866 -- # local nbd_name=nbd0
00:18:36.598   17:02:29	-- common/autotest_common.sh@867 -- # local i
00:18:36.598   17:02:29	-- common/autotest_common.sh@869 -- # (( i = 1 ))
00:18:36.598   17:02:29	-- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:18:36.598   17:02:29	-- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions
00:18:36.598   17:02:29	-- common/autotest_common.sh@871 -- # break
00:18:36.598   17:02:29	-- common/autotest_common.sh@882 -- # (( i = 1 ))
00:18:36.598   17:02:29	-- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:18:36.598   17:02:29	-- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:18:36.598  1+0 records in
00:18:36.598  1+0 records out
00:18:36.598  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000869799 s, 4.7 MB/s
00:18:36.598    17:02:29	-- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:18:36.598   17:02:29	-- common/autotest_common.sh@884 -- # size=4096
00:18:36.598   17:02:29	-- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:18:36.598   17:02:29	-- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:18:36.598   17:02:29	-- common/autotest_common.sh@887 -- # return 0
00:18:36.598   17:02:29	-- bdev/nbd_common.sh@14 -- # (( i++ ))
00:18:36.598   17:02:29	-- bdev/nbd_common.sh@14 -- # (( i < 1 ))
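
Note: waitfornbd, traced above, is the harness's readiness gate for a freshly exported NBD device: poll /proc/partitions until the node appears, then prove it is actually serviceable with one O_DIRECT 4 KiB read. The idea as a standalone helper (the pause between polls is this sketch's own addition):

    waitfornbd() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1
        done
        # A single direct read confirms the device answers I/O.
        dd if=/dev/$nbd_name of=/dev/null bs=4096 count=1 iflag=direct
    }
    waitfornbd nbd0
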
00:18:36.598   17:02:29	-- bdev/bdev_raid.sh@580 -- # '[' raid1 = raid5f ']'
00:18:36.598   17:02:29	-- bdev/bdev_raid.sh@584 -- # write_unit_size=1
00:18:36.598   17:02:29	-- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct
00:18:41.869  65536+0 records in
00:18:41.869  65536+0 records out
00:18:41.869  33554432 bytes (34 MB, 32 MiB) copied, 4.52055 s, 7.4 MB/s
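
Note: with the array exported as /dev/nbd0, the test seeds it with 32 MiB of random data in 512-byte O_DIRECT writes (write_unit_size is 1 block for raid1; the raid5f branch above would pad writes to a full stripe instead). The modest ~7 MB/s is expected, since every write lands on both members synchronously. Export and fill, in isolation:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    $rpc -s $sock nbd_start_disk raid_bdev1 /dev/nbd0
    dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct
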
00:18:41.869   17:02:33	-- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0
00:18:41.869   17:02:33	-- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock
00:18:41.869   17:02:33	-- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:18:41.869   17:02:33	-- bdev/nbd_common.sh@50 -- # local nbd_list
00:18:41.869   17:02:33	-- bdev/nbd_common.sh@51 -- # local i
00:18:41.869   17:02:33	-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:18:41.869   17:02:33	-- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0
00:18:41.870  [2024-11-19 17:02:34.136911] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:18:41.870    17:02:34	-- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:18:41.870   17:02:34	-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:18:41.870   17:02:34	-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:18:41.870   17:02:34	-- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:18:41.870   17:02:34	-- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:18:41.870   17:02:34	-- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:18:41.870   17:02:34	-- bdev/nbd_common.sh@41 -- # break
00:18:41.870   17:02:34	-- bdev/nbd_common.sh@45 -- # return 0
00:18:41.870   17:02:34	-- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1
00:18:41.870  [2024-11-19 17:02:34.380629] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
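
Note: with the NBD export stopped, detaching BaseBdev1 degrades the mirror in place. raid1 tolerates the loss, so the state check that follows still expects "online" but with discovered/operational counts of 1, and the vacated slot appears as a null name with the all-zero UUID in the JSON below. The step plus its assertion:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1
    info=$($rpc -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all |
           jq -r '.[] | select(.name == "raid_bdev1")')
    [[ $(jq -r .num_base_bdevs_operational <<< "$info") == 1 ]]
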
00:18:41.870   17:02:34	-- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:18:41.870   17:02:34	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:18:41.870   17:02:34	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:18:41.870   17:02:34	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:18:41.870   17:02:34	-- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:18:41.870   17:02:34	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1
00:18:41.870   17:02:34	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:18:41.870   17:02:34	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:18:41.870   17:02:34	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:18:41.870   17:02:34	-- bdev/bdev_raid.sh@125 -- # local tmp
00:18:41.870    17:02:34	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:18:41.870    17:02:34	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:18:41.870   17:02:34	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:18:41.870    "name": "raid_bdev1",
00:18:41.870    "uuid": "98a32244-bad0-4b7a-8169-ebba9d8c25a4",
00:18:41.870    "strip_size_kb": 0,
00:18:41.870    "state": "online",
00:18:41.870    "raid_level": "raid1",
00:18:41.870    "superblock": false,
00:18:41.870    "num_base_bdevs": 2,
00:18:41.870    "num_base_bdevs_discovered": 1,
00:18:41.870    "num_base_bdevs_operational": 1,
00:18:41.870    "base_bdevs_list": [
00:18:41.870      {
00:18:41.870        "name": null,
00:18:41.870        "uuid": "00000000-0000-0000-0000-000000000000",
00:18:41.870        "is_configured": false,
00:18:41.870        "data_offset": 0,
00:18:41.870        "data_size": 65536
00:18:41.870      },
00:18:41.870      {
00:18:41.870        "name": "BaseBdev2",
00:18:41.870        "uuid": "05525e9f-46b4-4f6c-a64f-1ae20125c5d8",
00:18:41.870        "is_configured": true,
00:18:41.870        "data_offset": 0,
00:18:41.870        "data_size": 65536
00:18:41.870      }
00:18:41.870    ]
00:18:41.870  }'
00:18:41.870   17:02:34	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:18:41.870   17:02:34	-- common/autotest_common.sh@10 -- # set +x
00:18:42.437   17:02:35	-- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare
00:18:42.697  [2024-11-19 17:02:35.372829] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare
00:18:42.697  [2024-11-19 17:02:35.373136] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:18:42.697  [2024-11-19 17:02:35.377655] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d05ee0
00:18:42.697  [2024-11-19 17:02:35.380343] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:18:42.697   17:02:35	-- bdev/bdev_raid.sh@598 -- # sleep 1
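
Note: bdev_raid_add_base_bdev slots the delayed spare into the empty position and the target immediately starts a rebuild (the NOTICE above). The one-second sleep only ensures the next verify_raid_bdev_process call catches the rebuild while it is still running; the 100 ms write delay configured earlier makes finishing that quickly impossible. In isolation:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare
    sleep 1   # let the rebuild show up in the "process" field
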
00:18:43.633   17:02:36	-- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:18:43.633   17:02:36	-- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:18:43.633   17:02:36	-- bdev/bdev_raid.sh@184 -- # local process_type=rebuild
00:18:43.633   17:02:36	-- bdev/bdev_raid.sh@185 -- # local target=spare
00:18:43.633   17:02:36	-- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:18:43.633    17:02:36	-- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:18:43.633    17:02:36	-- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:18:43.892   17:02:36	-- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:18:43.892    "name": "raid_bdev1",
00:18:43.892    "uuid": "98a32244-bad0-4b7a-8169-ebba9d8c25a4",
00:18:43.892    "strip_size_kb": 0,
00:18:43.892    "state": "online",
00:18:43.892    "raid_level": "raid1",
00:18:43.892    "superblock": false,
00:18:43.892    "num_base_bdevs": 2,
00:18:43.892    "num_base_bdevs_discovered": 2,
00:18:43.892    "num_base_bdevs_operational": 2,
00:18:43.892    "process": {
00:18:43.892      "type": "rebuild",
00:18:43.892      "target": "spare",
00:18:43.892      "progress": {
00:18:43.892        "blocks": 24576,
00:18:43.892        "percent": 37
00:18:43.892      }
00:18:43.892    },
00:18:43.892    "base_bdevs_list": [
00:18:43.892      {
00:18:43.892        "name": "spare",
00:18:43.892        "uuid": "2422d600-e9a2-5df1-b5c7-aa0e06ea45bd",
00:18:43.892        "is_configured": true,
00:18:43.892        "data_offset": 0,
00:18:43.892        "data_size": 65536
00:18:43.892      },
00:18:43.892      {
00:18:43.892        "name": "BaseBdev2",
00:18:43.892        "uuid": "05525e9f-46b4-4f6c-a64f-1ae20125c5d8",
00:18:43.892        "is_configured": true,
00:18:43.892        "data_offset": 0,
00:18:43.892        "data_size": 65536
00:18:43.892      }
00:18:43.892    ]
00:18:43.892  }'
00:18:43.892    17:02:36	-- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:18:43.892   17:02:36	-- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:18:43.892    17:02:36	-- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:18:43.892   17:02:36	-- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]]
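
Note: verify_raid_bdev_process inspects the "process" object that bdev_raid_get_bdevs reports while a background operation runs. The jq alternative operator (// "none") is the key detail: once a rebuild completes, the object disappears entirely and the same filters yield "none", which is exactly how the polling loop further down detects completion. Condensed:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    info=$($rpc -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all |
           jq -r '.[] | select(.name == "raid_bdev1")')
    jq -r '.process.type   // "none"' <<< "$info"   # "rebuild" while resyncing
    jq -r '.process.target // "none"' <<< "$info"   # "spare" while resyncing
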
00:18:43.892   17:02:36	-- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare
00:18:44.460  [2024-11-19 17:02:37.022634] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:18:44.460  [2024-11-19 17:02:37.091807] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device
00:18:44.460  [2024-11-19 17:02:37.092122] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
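
Note: this removal is issued deliberately mid-rebuild. Yanking the rebuild target aborts the process, and the WARNING above ("Finished rebuild ...: No such device") is the expected -ENODEV completion path rather than a failure; the array simply drops back to one operational member, as the next state check confirms. The abort, in isolation:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # Removing the target of an in-flight rebuild cancels it with -ENODEV.
    $rpc -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare
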
00:18:44.460   17:02:37	-- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:18:44.460   17:02:37	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:18:44.460   17:02:37	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:18:44.460   17:02:37	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:18:44.460   17:02:37	-- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:18:44.460   17:02:37	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1
00:18:44.460   17:02:37	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:18:44.460   17:02:37	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:18:44.460   17:02:37	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:18:44.460   17:02:37	-- bdev/bdev_raid.sh@125 -- # local tmp
00:18:44.460    17:02:37	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:18:44.460    17:02:37	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:18:44.719   17:02:37	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:18:44.719    "name": "raid_bdev1",
00:18:44.719    "uuid": "98a32244-bad0-4b7a-8169-ebba9d8c25a4",
00:18:44.719    "strip_size_kb": 0,
00:18:44.719    "state": "online",
00:18:44.719    "raid_level": "raid1",
00:18:44.719    "superblock": false,
00:18:44.719    "num_base_bdevs": 2,
00:18:44.719    "num_base_bdevs_discovered": 1,
00:18:44.719    "num_base_bdevs_operational": 1,
00:18:44.719    "base_bdevs_list": [
00:18:44.719      {
00:18:44.719        "name": null,
00:18:44.719        "uuid": "00000000-0000-0000-0000-000000000000",
00:18:44.719        "is_configured": false,
00:18:44.719        "data_offset": 0,
00:18:44.719        "data_size": 65536
00:18:44.719      },
00:18:44.719      {
00:18:44.719        "name": "BaseBdev2",
00:18:44.719        "uuid": "05525e9f-46b4-4f6c-a64f-1ae20125c5d8",
00:18:44.719        "is_configured": true,
00:18:44.719        "data_offset": 0,
00:18:44.719        "data_size": 65536
00:18:44.719      }
00:18:44.719    ]
00:18:44.719  }'
00:18:44.719   17:02:37	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:18:44.719   17:02:37	-- common/autotest_common.sh@10 -- # set +x
00:18:45.286   17:02:37	-- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none
00:18:45.286   17:02:37	-- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:18:45.286   17:02:37	-- bdev/bdev_raid.sh@184 -- # local process_type=none
00:18:45.286   17:02:37	-- bdev/bdev_raid.sh@185 -- # local target=none
00:18:45.286   17:02:37	-- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:18:45.286    17:02:37	-- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:18:45.286    17:02:37	-- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:18:45.603   17:02:38	-- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:18:45.603    "name": "raid_bdev1",
00:18:45.603    "uuid": "98a32244-bad0-4b7a-8169-ebba9d8c25a4",
00:18:45.603    "strip_size_kb": 0,
00:18:45.603    "state": "online",
00:18:45.603    "raid_level": "raid1",
00:18:45.603    "superblock": false,
00:18:45.603    "num_base_bdevs": 2,
00:18:45.603    "num_base_bdevs_discovered": 1,
00:18:45.603    "num_base_bdevs_operational": 1,
00:18:45.603    "base_bdevs_list": [
00:18:45.603      {
00:18:45.603        "name": null,
00:18:45.603        "uuid": "00000000-0000-0000-0000-000000000000",
00:18:45.603        "is_configured": false,
00:18:45.603        "data_offset": 0,
00:18:45.603        "data_size": 65536
00:18:45.603      },
00:18:45.603      {
00:18:45.603        "name": "BaseBdev2",
00:18:45.603        "uuid": "05525e9f-46b4-4f6c-a64f-1ae20125c5d8",
00:18:45.603        "is_configured": true,
00:18:45.603        "data_offset": 0,
00:18:45.603        "data_size": 65536
00:18:45.603      }
00:18:45.603    ]
00:18:45.603  }'
00:18:45.603    17:02:38	-- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:18:45.603   17:02:38	-- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]]
00:18:45.603    17:02:38	-- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:18:45.603   17:02:38	-- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]]
00:18:45.603   17:02:38	-- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare
00:18:45.861  [2024-11-19 17:02:38.465419] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare
00:18:45.861  [2024-11-19 17:02:38.466934] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:18:45.861  [2024-11-19 17:02:38.471449] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d06080
00:18:45.861  [2024-11-19 17:02:38.473888] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:18:45.861   17:02:38	-- bdev/bdev_raid.sh@614 -- # sleep 1
00:18:46.795   17:02:39	-- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:18:46.795   17:02:39	-- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:18:46.795   17:02:39	-- bdev/bdev_raid.sh@184 -- # local process_type=rebuild
00:18:46.795   17:02:39	-- bdev/bdev_raid.sh@185 -- # local target=spare
00:18:46.795   17:02:39	-- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:18:46.795    17:02:39	-- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:18:46.795    17:02:39	-- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:18:47.054   17:02:39	-- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:18:47.054    "name": "raid_bdev1",
00:18:47.054    "uuid": "98a32244-bad0-4b7a-8169-ebba9d8c25a4",
00:18:47.054    "strip_size_kb": 0,
00:18:47.054    "state": "online",
00:18:47.054    "raid_level": "raid1",
00:18:47.054    "superblock": false,
00:18:47.054    "num_base_bdevs": 2,
00:18:47.054    "num_base_bdevs_discovered": 2,
00:18:47.054    "num_base_bdevs_operational": 2,
00:18:47.054    "process": {
00:18:47.054      "type": "rebuild",
00:18:47.054      "target": "spare",
00:18:47.054      "progress": {
00:18:47.054        "blocks": 24576,
00:18:47.054        "percent": 37
00:18:47.054      }
00:18:47.054    },
00:18:47.054    "base_bdevs_list": [
00:18:47.054      {
00:18:47.054        "name": "spare",
00:18:47.054        "uuid": "2422d600-e9a2-5df1-b5c7-aa0e06ea45bd",
00:18:47.054        "is_configured": true,
00:18:47.054        "data_offset": 0,
00:18:47.054        "data_size": 65536
00:18:47.054      },
00:18:47.054      {
00:18:47.054        "name": "BaseBdev2",
00:18:47.054        "uuid": "05525e9f-46b4-4f6c-a64f-1ae20125c5d8",
00:18:47.054        "is_configured": true,
00:18:47.054        "data_offset": 0,
00:18:47.054        "data_size": 65536
00:18:47.054      }
00:18:47.054    ]
00:18:47.054  }'
00:18:47.054    17:02:39	-- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:18:47.054   17:02:39	-- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:18:47.054    17:02:39	-- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:18:47.054   17:02:39	-- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]]
00:18:47.054   17:02:39	-- bdev/bdev_raid.sh@617 -- # '[' false = true ']'
00:18:47.054   17:02:39	-- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=2
00:18:47.054   17:02:39	-- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']'
00:18:47.054   17:02:39	-- bdev/bdev_raid.sh@644 -- # '[' 2 -gt 2 ']'
00:18:47.054   17:02:39	-- bdev/bdev_raid.sh@657 -- # local timeout=360
00:18:47.054   17:02:39	-- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout ))
00:18:47.054   17:02:39	-- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:18:47.054   17:02:39	-- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:18:47.054   17:02:39	-- bdev/bdev_raid.sh@184 -- # local process_type=rebuild
00:18:47.054   17:02:39	-- bdev/bdev_raid.sh@185 -- # local target=spare
00:18:47.054   17:02:39	-- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:18:47.055    17:02:39	-- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:18:47.055    17:02:39	-- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:18:47.314   17:02:40	-- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:18:47.314    "name": "raid_bdev1",
00:18:47.314    "uuid": "98a32244-bad0-4b7a-8169-ebba9d8c25a4",
00:18:47.314    "strip_size_kb": 0,
00:18:47.314    "state": "online",
00:18:47.314    "raid_level": "raid1",
00:18:47.314    "superblock": false,
00:18:47.314    "num_base_bdevs": 2,
00:18:47.314    "num_base_bdevs_discovered": 2,
00:18:47.314    "num_base_bdevs_operational": 2,
00:18:47.314    "process": {
00:18:47.314      "type": "rebuild",
00:18:47.314      "target": "spare",
00:18:47.314      "progress": {
00:18:47.314        "blocks": 30720,
00:18:47.314        "percent": 46
00:18:47.314      }
00:18:47.314    },
00:18:47.314    "base_bdevs_list": [
00:18:47.314      {
00:18:47.314        "name": "spare",
00:18:47.314        "uuid": "2422d600-e9a2-5df1-b5c7-aa0e06ea45bd",
00:18:47.314        "is_configured": true,
00:18:47.314        "data_offset": 0,
00:18:47.314        "data_size": 65536
00:18:47.314      },
00:18:47.314      {
00:18:47.314        "name": "BaseBdev2",
00:18:47.314        "uuid": "05525e9f-46b4-4f6c-a64f-1ae20125c5d8",
00:18:47.314        "is_configured": true,
00:18:47.314        "data_offset": 0,
00:18:47.314        "data_size": 65536
00:18:47.314      }
00:18:47.314    ]
00:18:47.314  }'
00:18:47.314    17:02:40	-- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:18:47.314   17:02:40	-- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:18:47.314    17:02:40	-- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:18:47.314   17:02:40	-- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]]
00:18:47.314   17:02:40	-- bdev/bdev_raid.sh@662 -- # sleep 1
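
Note: from here the second rebuild is left to run to completion, and the repeated verify/sleep pairs form a polling loop: bash's builtin SECONDS (elapsed script runtime) is compared against a 360 s ceiling, progress is re-read once per second, and the loop breaks as soon as .process.type stops reporting "rebuild". The shape of that loop:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    timeout=360
    while (( SECONDS < timeout )); do
        ptype=$($rpc -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all |
                jq -r '.[] | select(.name == "raid_bdev1") | .process.type // "none"')
        [[ $ptype == rebuild ]] || break   # finished (or never started)
        sleep 1
    done
    (( SECONDS < timeout ))   # fail the test if we only stopped on the timeout
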
00:18:48.693   17:02:41	-- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout ))
00:18:48.693   17:02:41	-- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:18:48.693   17:02:41	-- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:18:48.693   17:02:41	-- bdev/bdev_raid.sh@184 -- # local process_type=rebuild
00:18:48.693   17:02:41	-- bdev/bdev_raid.sh@185 -- # local target=spare
00:18:48.693   17:02:41	-- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:18:48.693    17:02:41	-- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:18:48.693    17:02:41	-- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:18:48.693   17:02:41	-- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:18:48.693    "name": "raid_bdev1",
00:18:48.693    "uuid": "98a32244-bad0-4b7a-8169-ebba9d8c25a4",
00:18:48.693    "strip_size_kb": 0,
00:18:48.693    "state": "online",
00:18:48.693    "raid_level": "raid1",
00:18:48.693    "superblock": false,
00:18:48.693    "num_base_bdevs": 2,
00:18:48.693    "num_base_bdevs_discovered": 2,
00:18:48.693    "num_base_bdevs_operational": 2,
00:18:48.693    "process": {
00:18:48.693      "type": "rebuild",
00:18:48.693      "target": "spare",
00:18:48.693      "progress": {
00:18:48.693        "blocks": 57344,
00:18:48.693        "percent": 87
00:18:48.693      }
00:18:48.693    },
00:18:48.693    "base_bdevs_list": [
00:18:48.693      {
00:18:48.693        "name": "spare",
00:18:48.693        "uuid": "2422d600-e9a2-5df1-b5c7-aa0e06ea45bd",
00:18:48.693        "is_configured": true,
00:18:48.693        "data_offset": 0,
00:18:48.693        "data_size": 65536
00:18:48.693      },
00:18:48.693      {
00:18:48.693        "name": "BaseBdev2",
00:18:48.693        "uuid": "05525e9f-46b4-4f6c-a64f-1ae20125c5d8",
00:18:48.693        "is_configured": true,
00:18:48.693        "data_offset": 0,
00:18:48.693        "data_size": 65536
00:18:48.693      }
00:18:48.693    ]
00:18:48.693  }'
00:18:48.693    17:02:41	-- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:18:48.693   17:02:41	-- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:18:48.693    17:02:41	-- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:18:48.693   17:02:41	-- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]]
00:18:48.693   17:02:41	-- bdev/bdev_raid.sh@662 -- # sleep 1
00:18:48.951  [2024-11-19 17:02:41.692381] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1
00:18:48.951  [2024-11-19 17:02:41.692728] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1
00:18:48.951  [2024-11-19 17:02:41.692961] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:18:49.886   17:02:42	-- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout ))
00:18:49.886   17:02:42	-- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:18:49.886   17:02:42	-- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:18:49.886   17:02:42	-- bdev/bdev_raid.sh@184 -- # local process_type=rebuild
00:18:49.886   17:02:42	-- bdev/bdev_raid.sh@185 -- # local target=spare
00:18:49.886   17:02:42	-- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:18:49.886    17:02:42	-- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:18:49.886    17:02:42	-- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:18:50.144   17:02:42	-- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:18:50.144    "name": "raid_bdev1",
00:18:50.144    "uuid": "98a32244-bad0-4b7a-8169-ebba9d8c25a4",
00:18:50.144    "strip_size_kb": 0,
00:18:50.144    "state": "online",
00:18:50.144    "raid_level": "raid1",
00:18:50.144    "superblock": false,
00:18:50.144    "num_base_bdevs": 2,
00:18:50.144    "num_base_bdevs_discovered": 2,
00:18:50.144    "num_base_bdevs_operational": 2,
00:18:50.144    "base_bdevs_list": [
00:18:50.144      {
00:18:50.144        "name": "spare",
00:18:50.144        "uuid": "2422d600-e9a2-5df1-b5c7-aa0e06ea45bd",
00:18:50.144        "is_configured": true,
00:18:50.144        "data_offset": 0,
00:18:50.144        "data_size": 65536
00:18:50.144      },
00:18:50.144      {
00:18:50.144        "name": "BaseBdev2",
00:18:50.144        "uuid": "05525e9f-46b4-4f6c-a64f-1ae20125c5d8",
00:18:50.144        "is_configured": true,
00:18:50.144        "data_offset": 0,
00:18:50.144        "data_size": 65536
00:18:50.144      }
00:18:50.144    ]
00:18:50.144  }'
00:18:50.144    17:02:42	-- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:18:50.144   17:02:42	-- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]]
00:18:50.144    17:02:42	-- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:18:50.144   17:02:42	-- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]]
00:18:50.144   17:02:42	-- bdev/bdev_raid.sh@660 -- # break
00:18:50.144   17:02:42	-- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none
00:18:50.144   17:02:42	-- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:18:50.144   17:02:42	-- bdev/bdev_raid.sh@184 -- # local process_type=none
00:18:50.144   17:02:42	-- bdev/bdev_raid.sh@185 -- # local target=none
00:18:50.144   17:02:42	-- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:18:50.144    17:02:42	-- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:18:50.144    17:02:42	-- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:18:50.402   17:02:43	-- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:18:50.402    "name": "raid_bdev1",
00:18:50.402    "uuid": "98a32244-bad0-4b7a-8169-ebba9d8c25a4",
00:18:50.402    "strip_size_kb": 0,
00:18:50.402    "state": "online",
00:18:50.402    "raid_level": "raid1",
00:18:50.402    "superblock": false,
00:18:50.402    "num_base_bdevs": 2,
00:18:50.402    "num_base_bdevs_discovered": 2,
00:18:50.402    "num_base_bdevs_operational": 2,
00:18:50.402    "base_bdevs_list": [
00:18:50.402      {
00:18:50.402        "name": "spare",
00:18:50.402        "uuid": "2422d600-e9a2-5df1-b5c7-aa0e06ea45bd",
00:18:50.402        "is_configured": true,
00:18:50.402        "data_offset": 0,
00:18:50.402        "data_size": 65536
00:18:50.402      },
00:18:50.402      {
00:18:50.402        "name": "BaseBdev2",
00:18:50.402        "uuid": "05525e9f-46b4-4f6c-a64f-1ae20125c5d8",
00:18:50.402        "is_configured": true,
00:18:50.402        "data_offset": 0,
00:18:50.402        "data_size": 65536
00:18:50.402      }
00:18:50.402    ]
00:18:50.402  }'
00:18:50.402    17:02:43	-- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:18:50.402   17:02:43	-- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]]
00:18:50.402    17:02:43	-- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:18:50.661   17:02:43	-- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]]
00:18:50.661   17:02:43	-- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:18:50.661   17:02:43	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:18:50.661   17:02:43	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:18:50.661   17:02:43	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:18:50.661   17:02:43	-- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:18:50.661   17:02:43	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2
00:18:50.661   17:02:43	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:18:50.661   17:02:43	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:18:50.661   17:02:43	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:18:50.661   17:02:43	-- bdev/bdev_raid.sh@125 -- # local tmp
00:18:50.661    17:02:43	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:18:50.661    17:02:43	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:18:50.920   17:02:43	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:18:50.920    "name": "raid_bdev1",
00:18:50.920    "uuid": "98a32244-bad0-4b7a-8169-ebba9d8c25a4",
00:18:50.920    "strip_size_kb": 0,
00:18:50.920    "state": "online",
00:18:50.920    "raid_level": "raid1",
00:18:50.920    "superblock": false,
00:18:50.920    "num_base_bdevs": 2,
00:18:50.920    "num_base_bdevs_discovered": 2,
00:18:50.920    "num_base_bdevs_operational": 2,
00:18:50.920    "base_bdevs_list": [
00:18:50.920      {
00:18:50.920        "name": "spare",
00:18:50.920        "uuid": "2422d600-e9a2-5df1-b5c7-aa0e06ea45bd",
00:18:50.920        "is_configured": true,
00:18:50.920        "data_offset": 0,
00:18:50.920        "data_size": 65536
00:18:50.920      },
00:18:50.920      {
00:18:50.920        "name": "BaseBdev2",
00:18:50.920        "uuid": "05525e9f-46b4-4f6c-a64f-1ae20125c5d8",
00:18:50.920        "is_configured": true,
00:18:50.920        "data_offset": 0,
00:18:50.920        "data_size": 65536
00:18:50.920      }
00:18:50.920    ]
00:18:50.920  }'
00:18:50.920   17:02:43	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:18:50.920   17:02:43	-- common/autotest_common.sh@10 -- # set +x
00:18:51.487   17:02:44	-- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1
00:18:51.745  [2024-11-19 17:02:44.366494] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:18:51.745  [2024-11-19 17:02:44.366986] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:18:51.745  [2024-11-19 17:02:44.367306] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:18:51.745  [2024-11-19 17:02:44.367598] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:18:51.745  [2024-11-19 17:02:44.367750] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name raid_bdev1, state offline
00:18:51.745    17:02:44	-- bdev/bdev_raid.sh@671 -- # jq length
00:18:51.745    17:02:44	-- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:18:52.003   17:02:44	-- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]]
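
Note: teardown is verified rather than assumed. After bdev_raid_delete, the DEBUG lines above walk the state machine from online to offline and free the members, and the follow-up jq length over bdev_raid_get_bdevs all must come back 0. Together:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    $rpc -s $sock bdev_raid_delete raid_bdev1
    [[ $($rpc -s $sock bdev_raid_get_bdevs all | jq length) == 0 ]]
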
00:18:52.003   17:02:44	-- bdev/bdev_raid.sh@673 -- # '[' false = true ']'
00:18:52.003   17:02:44	-- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1'
00:18:52.003   17:02:44	-- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock
00:18:52.003   17:02:44	-- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare')
00:18:52.003   17:02:44	-- bdev/nbd_common.sh@10 -- # local bdev_list
00:18:52.003   17:02:44	-- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:18:52.003   17:02:44	-- bdev/nbd_common.sh@11 -- # local nbd_list
00:18:52.003   17:02:44	-- bdev/nbd_common.sh@12 -- # local i
00:18:52.003   17:02:44	-- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:18:52.003   17:02:44	-- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:18:52.003   17:02:44	-- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0
00:18:52.261  /dev/nbd0
00:18:52.261    17:02:44	-- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:18:52.261   17:02:44	-- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:18:52.261   17:02:44	-- common/autotest_common.sh@866 -- # local nbd_name=nbd0
00:18:52.261   17:02:44	-- common/autotest_common.sh@867 -- # local i
00:18:52.261   17:02:44	-- common/autotest_common.sh@869 -- # (( i = 1 ))
00:18:52.261   17:02:44	-- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:18:52.261   17:02:44	-- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions
00:18:52.261   17:02:44	-- common/autotest_common.sh@871 -- # break
00:18:52.261   17:02:44	-- common/autotest_common.sh@882 -- # (( i = 1 ))
00:18:52.261   17:02:44	-- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:18:52.261   17:02:44	-- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:18:52.261  1+0 records in
00:18:52.261  1+0 records out
00:18:52.261  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000471993 s, 8.7 MB/s
00:18:52.261    17:02:44	-- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:18:52.261   17:02:44	-- common/autotest_common.sh@884 -- # size=4096
00:18:52.261   17:02:44	-- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:18:52.261   17:02:44	-- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:18:52.261   17:02:44	-- common/autotest_common.sh@887 -- # return 0
00:18:52.261   17:02:44	-- bdev/nbd_common.sh@14 -- # (( i++ ))
00:18:52.261   17:02:44	-- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:18:52.261   17:02:44	-- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1
00:18:52.520  /dev/nbd1
00:18:52.520    17:02:45	-- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:18:52.520   17:02:45	-- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:18:52.520   17:02:45	-- common/autotest_common.sh@866 -- # local nbd_name=nbd1
00:18:52.520   17:02:45	-- common/autotest_common.sh@867 -- # local i
00:18:52.520   17:02:45	-- common/autotest_common.sh@869 -- # (( i = 1 ))
00:18:52.520   17:02:45	-- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:18:52.520   17:02:45	-- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions
00:18:52.520   17:02:45	-- common/autotest_common.sh@871 -- # break
00:18:52.520   17:02:45	-- common/autotest_common.sh@882 -- # (( i = 1 ))
00:18:52.520   17:02:45	-- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:18:52.520   17:02:45	-- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:18:52.520  1+0 records in
00:18:52.520  1+0 records out
00:18:52.520  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000647228 s, 6.3 MB/s
00:18:52.520    17:02:45	-- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:18:52.520   17:02:45	-- common/autotest_common.sh@884 -- # size=4096
00:18:52.520   17:02:45	-- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:18:52.520   17:02:45	-- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:18:52.520   17:02:45	-- common/autotest_common.sh@887 -- # return 0
00:18:52.520   17:02:45	-- bdev/nbd_common.sh@14 -- # (( i++ ))
00:18:52.520   17:02:45	-- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:18:52.520   17:02:45	-- bdev/bdev_raid.sh@688 -- # cmp -i 0 /dev/nbd0 /dev/nbd1
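
Note: this cmp is the test's payoff. BaseBdev1 still holds the random data written before it was detached, while spare only ever received data through the rebuild (sourced from BaseBdev2), so a byte-for-byte comparison of the two exports proves the resync copied everything; cmp exits non-zero at the first difference and fails the test. The -i 0 skips nothing, because this variant has no superblock to step over. In isolation:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    $rpc -s $sock nbd_start_disk BaseBdev1 /dev/nbd0
    $rpc -s $sock nbd_start_disk spare /dev/nbd1
    cmp -i 0 /dev/nbd0 /dev/nbd1   # identical mirrors => exit status 0
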
00:18:52.520   17:02:45	-- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1'
00:18:52.520   17:02:45	-- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock
00:18:52.520   17:02:45	-- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:18:52.520   17:02:45	-- bdev/nbd_common.sh@50 -- # local nbd_list
00:18:52.520   17:02:45	-- bdev/nbd_common.sh@51 -- # local i
00:18:52.520   17:02:45	-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:18:52.520   17:02:45	-- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0
00:18:53.111    17:02:45	-- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:18:53.111   17:02:45	-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:18:53.111   17:02:45	-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:18:53.111   17:02:45	-- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:18:53.111   17:02:45	-- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:18:53.111   17:02:45	-- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:18:53.111   17:02:45	-- bdev/nbd_common.sh@41 -- # break
00:18:53.111   17:02:45	-- bdev/nbd_common.sh@45 -- # return 0
00:18:53.111   17:02:45	-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:18:53.111   17:02:45	-- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1
00:18:53.383    17:02:45	-- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:18:53.383   17:02:46	-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:18:53.383   17:02:46	-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:18:53.383   17:02:46	-- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:18:53.383   17:02:46	-- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:18:53.383   17:02:46	-- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:18:53.383   17:02:46	-- bdev/nbd_common.sh@41 -- # break
00:18:53.383   17:02:46	-- bdev/nbd_common.sh@45 -- # return 0
00:18:53.383   17:02:46	-- bdev/bdev_raid.sh@692 -- # '[' false = true ']'
00:18:53.383   17:02:46	-- bdev/bdev_raid.sh@709 -- # killprocess 132673
00:18:53.383   17:02:46	-- common/autotest_common.sh@936 -- # '[' -z 132673 ']'
00:18:53.383   17:02:46	-- common/autotest_common.sh@940 -- # kill -0 132673
00:18:53.383    17:02:46	-- common/autotest_common.sh@941 -- # uname
00:18:53.383   17:02:46	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:18:53.383    17:02:46	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 132673
00:18:53.383   17:02:46	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:18:53.383   17:02:46	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:18:53.383   17:02:46	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 132673'
00:18:53.383  killing process with pid 132673
00:18:53.383   17:02:46	-- common/autotest_common.sh@955 -- # kill 132673
00:18:53.383  Received shutdown signal, test time was about 60.000000 seconds
00:18:53.383                                                                                                  Latency(us)
[2024-11-19T17:02:46.247Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
[2024-11-19T17:02:46.247Z]  ===================================================================================================================
[2024-11-19T17:02:46.247Z]  Total                       :                  0.00       0.00       0.00     0.00       0.00 18446744073709551616.00       0.00
00:18:53.383   17:02:46	-- common/autotest_common.sh@960 -- # wait 132673
00:18:53.383  [2024-11-19 17:02:46.048615] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:18:53.383  [2024-11-19 17:02:46.107736] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:18:53.950   17:02:46	-- bdev/bdev_raid.sh@711 -- # return 0
00:18:53.950  
00:18:53.950  real	0m21.872s
00:18:53.950  user	0m29.683s
00:18:53.950  sys	0m4.877s
00:18:53.950   17:02:46	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:18:53.950  ************************************
00:18:53.950  END TEST raid_rebuild_test
00:18:53.950   17:02:46	-- common/autotest_common.sh@10 -- # set +x
00:18:53.950  ************************************
00:18:53.950   17:02:46	-- bdev/bdev_raid.sh@736 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 2 true false
00:18:53.950   17:02:46	-- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']'
00:18:53.950   17:02:46	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:18:53.950   17:02:46	-- common/autotest_common.sh@10 -- # set +x
00:18:53.950  ************************************
00:18:53.950  START TEST raid_rebuild_test_sb
00:18:53.950  ************************************
00:18:53.950   17:02:46	-- common/autotest_common.sh@1114 -- # raid_rebuild_test raid1 2 true false
00:18:53.950   17:02:46	-- bdev/bdev_raid.sh@517 -- # local raid_level=raid1
00:18:53.950   17:02:46	-- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=2
00:18:53.950   17:02:46	-- bdev/bdev_raid.sh@519 -- # local superblock=true
00:18:53.950   17:02:46	-- bdev/bdev_raid.sh@520 -- # local background_io=false
00:18:53.950    17:02:46	-- bdev/bdev_raid.sh@521 -- # (( i = 1 ))
00:18:53.950    17:02:46	-- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs ))
00:18:53.950    17:02:46	-- bdev/bdev_raid.sh@521 -- # echo BaseBdev1
00:18:53.950    17:02:46	-- bdev/bdev_raid.sh@521 -- # (( i++ ))
00:18:53.950    17:02:46	-- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs ))
00:18:53.950    17:02:46	-- bdev/bdev_raid.sh@521 -- # echo BaseBdev2
00:18:53.950    17:02:46	-- bdev/bdev_raid.sh@521 -- # (( i++ ))
00:18:53.950    17:02:46	-- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs ))
00:18:53.950   17:02:46	-- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2')
00:18:53.950   17:02:46	-- bdev/bdev_raid.sh@521 -- # local base_bdevs
00:18:53.950   17:02:46	-- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1
00:18:53.950   17:02:46	-- bdev/bdev_raid.sh@523 -- # local strip_size
00:18:53.950   17:02:46	-- bdev/bdev_raid.sh@524 -- # local create_arg
00:18:53.950   17:02:46	-- bdev/bdev_raid.sh@525 -- # local raid_bdev_size
00:18:53.950   17:02:46	-- bdev/bdev_raid.sh@526 -- # local data_offset
00:18:53.950   17:02:46	-- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']'
00:18:53.950   17:02:46	-- bdev/bdev_raid.sh@536 -- # strip_size=0
00:18:53.950   17:02:46	-- bdev/bdev_raid.sh@539 -- # '[' true = true ']'
00:18:53.950   17:02:46	-- bdev/bdev_raid.sh@540 -- # create_arg+=' -s'
00:18:53.950   17:02:46	-- bdev/bdev_raid.sh@544 -- # raid_pid=133214
00:18:53.950   17:02:46	-- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid
00:18:53.950   17:02:46	-- bdev/bdev_raid.sh@545 -- # waitforlisten 133214 /var/tmp/spdk-raid.sock
00:18:53.950   17:02:46	-- common/autotest_common.sh@829 -- # '[' -z 133214 ']'
00:18:53.950   17:02:46	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock
00:18:53.950   17:02:46	-- common/autotest_common.sh@834 -- # local max_retries=100
00:18:53.950   17:02:46	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...'
00:18:53.950  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...
00:18:53.950   17:02:46	-- common/autotest_common.sh@838 -- # xtrace_disable
00:18:53.950   17:02:46	-- common/autotest_common.sh@10 -- # set +x
00:18:53.950  [2024-11-19 17:02:46.691867] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:18:53.950  [2024-11-19 17:02:46.692405] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid133214 ]
00:18:53.950  I/O size of 3145728 is greater than zero copy threshold (65536).
00:18:53.950  Zero copy mechanism will not be used.
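
Note: each test case hosts its bdevs inside the bdevperf example app rather than a bare target. The notice above is informational: the configured 3 MiB I/O size (-o 3M) exceeds bdevperf's 64 KiB zero-copy threshold, so buffered I/O is used. The launch pattern, with the command line taken verbatim from this log (among the flags, -t is runtime in seconds, -w the workload, -M the read percentage, -q the queue depth, and -z starts the app idle until triggered over RPC):

    bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    $bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 \
        -o 3M -q 2 -U -z -L bdev_raid &
    raid_pid=$!   # killed by killprocess at the end of the test
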
00:18:54.208  [2024-11-19 17:02:46.850937] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:54.208  [2024-11-19 17:02:46.932576] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:18:54.208  [2024-11-19 17:02:47.012748] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:18:54.776   17:02:47	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:18:54.776   17:02:47	-- common/autotest_common.sh@862 -- # return 0
00:18:54.776   17:02:47	-- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}"
00:18:54.776   17:02:47	-- bdev/bdev_raid.sh@549 -- # '[' true = true ']'
00:18:54.776   17:02:47	-- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc
00:18:55.343  BaseBdev1_malloc
00:18:55.343   17:02:47	-- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1
00:18:55.343  [2024-11-19 17:02:48.132273] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc
00:18:55.343  [2024-11-19 17:02:48.132672] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:18:55.343  [2024-11-19 17:02:48.132854] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x616000005a80
00:18:55.343  [2024-11-19 17:02:48.132994] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:18:55.343  [2024-11-19 17:02:48.136425] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:18:55.343  [2024-11-19 17:02:48.136672] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:18:55.343  BaseBdev1
00:18:55.343   17:02:48	-- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}"
00:18:55.343   17:02:48	-- bdev/bdev_raid.sh@549 -- # '[' true = true ']'
00:18:55.343   17:02:48	-- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc
00:18:55.602  BaseBdev2_malloc
00:18:55.602   17:02:48	-- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2
00:18:55.860  [2024-11-19 17:02:48.638341] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc
00:18:55.860  [2024-11-19 17:02:48.638745] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:18:55.860  [2024-11-19 17:02:48.638832] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x616000006680
00:18:55.860  [2024-11-19 17:02:48.639165] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:18:55.860  [2024-11-19 17:02:48.642146] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:18:55.860  [2024-11-19 17:02:48.642366] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2
00:18:55.860  BaseBdev2
00:18:55.860   17:02:48	-- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc
00:18:56.118  spare_malloc
00:18:56.376   17:02:48	-- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000
00:18:56.376  spare_delay
00:18:56.376   17:02:49	-- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare
00:18:56.943  [2024-11-19 17:02:49.496283] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay
00:18:56.943  [2024-11-19 17:02:49.496706] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:18:56.943  [2024-11-19 17:02:49.496804] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x616000007880
00:18:56.943  [2024-11-19 17:02:49.496940] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:18:56.943  [2024-11-19 17:02:49.500313] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:18:56.943  [2024-11-19 17:02:49.500568] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare
00:18:56.943  spare
00:18:56.943   17:02:49	-- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1
00:18:56.943  [2024-11-19 17:02:49.781205] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:18:56.943  [2024-11-19 17:02:49.784468] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:18:56.943  [2024-11-19 17:02:49.785003] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007e80
00:18:56.943  [2024-11-19 17:02:49.785132] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:18:56.943  [2024-11-19 17:02:49.785411] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000022c0
00:18:56.943  [2024-11-19 17:02:49.785989] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007e80
00:18:56.943  [2024-11-19 17:02:49.786112] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000007e80
00:18:56.943  [2024-11-19 17:02:49.786513] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:18:57.202   17:02:49	-- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:18:57.202   17:02:49	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:18:57.202   17:02:49	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:18:57.202   17:02:49	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:18:57.202   17:02:49	-- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:18:57.202   17:02:49	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2
00:18:57.202   17:02:49	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:18:57.202   17:02:49	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:18:57.202   17:02:49	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:18:57.202   17:02:49	-- bdev/bdev_raid.sh@125 -- # local tmp
00:18:57.202    17:02:49	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:18:57.202    17:02:49	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:18:57.460   17:02:50	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:18:57.460    "name": "raid_bdev1",
00:18:57.461    "uuid": "603e6eae-180e-4292-abfb-d2d14e72e2d3",
00:18:57.461    "strip_size_kb": 0,
00:18:57.461    "state": "online",
00:18:57.461    "raid_level": "raid1",
00:18:57.461    "superblock": true,
00:18:57.461    "num_base_bdevs": 2,
00:18:57.461    "num_base_bdevs_discovered": 2,
00:18:57.461    "num_base_bdevs_operational": 2,
00:18:57.461    "base_bdevs_list": [
00:18:57.461      {
00:18:57.461        "name": "BaseBdev1",
00:18:57.461        "uuid": "b651cdb1-97a0-51cd-b9c0-93f2aab59e73",
00:18:57.461        "is_configured": true,
00:18:57.461        "data_offset": 2048,
00:18:57.461        "data_size": 63488
00:18:57.461      },
00:18:57.461      {
00:18:57.461        "name": "BaseBdev2",
00:18:57.461        "uuid": "667f8d68-68a2-55c0-aec9-0aea507f2059",
00:18:57.461        "is_configured": true,
00:18:57.461        "data_offset": 2048,
00:18:57.461        "data_size": 63488
00:18:57.461      }
00:18:57.461    ]
00:18:57.461  }'
00:18:57.461   17:02:50	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:18:57.461   17:02:50	-- common/autotest_common.sh@10 -- # set +x
00:18:58.028    17:02:50	-- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1
00:18:58.028    17:02:50	-- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks'
00:18:58.287  [2024-11-19 17:02:50.934514] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:18:58.287   17:02:50	-- bdev/bdev_raid.sh@567 -- # raid_bdev_size=63488
00:18:58.287    17:02:50	-- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset'
00:18:58.287    17:02:50	-- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:18:58.546   17:02:51	-- bdev/bdev_raid.sh@570 -- # data_offset=2048
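
Note: the superblock variant makes the earlier contrast concrete. Each 65536-block member now reserves 2048 blocks (1 MiB at 512 B per block) for the on-disk superblock, so data_offset is 2048 and the array exposes 65536 - 2048 = 63488 blocks, matching both the blockcnt in the creation DEBUG lines and the dd count used for the fill below. The same geometry queries as before:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    $rpc -s $sock bdev_get_bdevs -b raid_bdev1 | jq -r '.[].num_blocks'    # 63488
    $rpc -s $sock bdev_raid_get_bdevs all \
        | jq -r '.[].base_bdevs_list[0].data_offset'                       # 2048
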
00:18:58.546   17:02:51	-- bdev/bdev_raid.sh@572 -- # '[' false = true ']'
00:18:58.546   17:02:51	-- bdev/bdev_raid.sh@576 -- # local write_unit_size
00:18:58.546   17:02:51	-- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0
00:18:58.546   17:02:51	-- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock
00:18:58.546   17:02:51	-- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1')
00:18:58.546   17:02:51	-- bdev/nbd_common.sh@10 -- # local bdev_list
00:18:58.546   17:02:51	-- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0')
00:18:58.546   17:02:51	-- bdev/nbd_common.sh@11 -- # local nbd_list
00:18:58.546   17:02:51	-- bdev/nbd_common.sh@12 -- # local i
00:18:58.546   17:02:51	-- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:18:58.546   17:02:51	-- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:18:58.546   17:02:51	-- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0
00:18:58.804  [2024-11-19 17:02:51.458725] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460
00:18:58.804  /dev/nbd0
00:18:58.804    17:02:51	-- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:18:58.804   17:02:51	-- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:18:58.804   17:02:51	-- common/autotest_common.sh@866 -- # local nbd_name=nbd0
00:18:58.804   17:02:51	-- common/autotest_common.sh@867 -- # local i
00:18:58.804   17:02:51	-- common/autotest_common.sh@869 -- # (( i = 1 ))
00:18:58.804   17:02:51	-- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:18:58.804   17:02:51	-- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions
00:18:58.804   17:02:51	-- common/autotest_common.sh@871 -- # break
00:18:58.804   17:02:51	-- common/autotest_common.sh@882 -- # (( i = 1 ))
00:18:58.804   17:02:51	-- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:18:58.804   17:02:51	-- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:18:58.804  1+0 records in
00:18:58.804  1+0 records out
00:18:58.804  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000413882 s, 9.9 MB/s
00:18:58.804    17:02:51	-- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:18:58.804   17:02:51	-- common/autotest_common.sh@884 -- # size=4096
00:18:58.804   17:02:51	-- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:18:58.804   17:02:51	-- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:18:58.804   17:02:51	-- common/autotest_common.sh@887 -- # return 0
00:18:58.805   17:02:51	-- bdev/nbd_common.sh@14 -- # (( i++ ))
00:18:58.805   17:02:51	-- bdev/nbd_common.sh@14 -- # (( i < 1 ))
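waitfornbd above gates the test on two checks: the kernel must list nbd0 in /proc/partitions, and the device must then complete one 4 KiB O_DIRECT read. A condensed sketch of that readiness probe (the real helper lives in autotest_common.sh; the scratch-file path and the sleep between polls are assumptions here, the retry bound of 20 matches the trace):

    waitfornbd() {
        local nbd_name=$1 i size
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1
        done
        # Prove the device actually services I/O: one direct 4 KiB read must land.
        dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct || return 1
        size=$(stat -c %s /tmp/nbdtest)
        rm -f /tmp/nbdtest
        [ "$size" != 0 ]    # non-empty copy => device is live
    }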
00:18:58.805   17:02:51	-- bdev/bdev_raid.sh@580 -- # '[' raid1 = raid5f ']'
00:18:58.805   17:02:51	-- bdev/bdev_raid.sh@584 -- # write_unit_size=1
00:18:58.805   17:02:51	-- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct
00:19:05.367  63488+0 records in
00:19:05.367  63488+0 records out
00:19:05.367  32505856 bytes (33 MB, 31 MiB) copied, 5.58287 s, 5.8 MB/s
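The fill size is exactly the geometry probed earlier: 63488 blocks x 512 B = 32,505,856 bytes (31 MiB), which is what dd reports back. The same write, parameterized on the probed value instead of a literal (sketch; count comes from the raid_bdev_size probe shown above):

    # Fill the entire exported raid1 data region with random data, O_DIRECT,
    # one 512 B block per write since write_unit_size=1 for raid1.
    dd if=/dev/urandom of=/dev/nbd0 bs=512 count="$raid_bdev_size" oflag=direct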
00:19:05.367   17:02:57	-- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0
00:19:05.367   17:02:57	-- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock
00:19:05.367   17:02:57	-- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:19:05.367   17:02:57	-- bdev/nbd_common.sh@50 -- # local nbd_list
00:19:05.367   17:02:57	-- bdev/nbd_common.sh@51 -- # local i
00:19:05.367   17:02:57	-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:19:05.367   17:02:57	-- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0
00:19:05.367  [2024-11-19 17:02:57.427959] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:19:05.367    17:02:57	-- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:19:05.367   17:02:57	-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:19:05.367   17:02:57	-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:19:05.367   17:02:57	-- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:19:05.367   17:02:57	-- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:19:05.368   17:02:57	-- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:19:05.368   17:02:57	-- bdev/nbd_common.sh@41 -- # break
00:19:05.368   17:02:57	-- bdev/nbd_common.sh@45 -- # return 0
00:19:05.368   17:02:57	-- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1
00:19:05.368  [2024-11-19 17:02:57.671189] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:19:05.368   17:02:57	-- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:19:05.368   17:02:57	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:19:05.368   17:02:57	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:19:05.368   17:02:57	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:19:05.368   17:02:57	-- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:19:05.368   17:02:57	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1
00:19:05.368   17:02:57	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:19:05.368   17:02:57	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:19:05.368   17:02:57	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:19:05.368   17:02:57	-- bdev/bdev_raid.sh@125 -- # local tmp
00:19:05.368    17:02:57	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:19:05.368    17:02:57	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:19:05.368   17:02:57	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:19:05.368    "name": "raid_bdev1",
00:19:05.368    "uuid": "603e6eae-180e-4292-abfb-d2d14e72e2d3",
00:19:05.368    "strip_size_kb": 0,
00:19:05.368    "state": "online",
00:19:05.368    "raid_level": "raid1",
00:19:05.368    "superblock": true,
00:19:05.368    "num_base_bdevs": 2,
00:19:05.368    "num_base_bdevs_discovered": 1,
00:19:05.368    "num_base_bdevs_operational": 1,
00:19:05.368    "base_bdevs_list": [
00:19:05.368      {
00:19:05.368        "name": null,
00:19:05.368        "uuid": "00000000-0000-0000-0000-000000000000",
00:19:05.368        "is_configured": false,
00:19:05.368        "data_offset": 2048,
00:19:05.368        "data_size": 63488
00:19:05.368      },
00:19:05.368      {
00:19:05.368        "name": "BaseBdev2",
00:19:05.368        "uuid": "667f8d68-68a2-55c0-aec9-0aea507f2059",
00:19:05.368        "is_configured": true,
00:19:05.368        "data_offset": 2048,
00:19:05.368        "data_size": 63488
00:19:05.368      }
00:19:05.368    ]
00:19:05.368  }'
00:19:05.368   17:02:57	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:19:05.368   17:02:57	-- common/autotest_common.sh@10 -- # set +x
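After bdev_raid_remove_base_bdev, the dump shows the degraded invariant being asserted: state stays online, the removed slot is kept as an all-zero-UUID placeholder with is_configured=false, and both the discovered and operational counts drop to 1. The checks verify_raid_bdev_state boils down to, expressed directly against that JSON (a sketch; the jq filters mirror the fields above):

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    info=$($rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
    [ "$(jq -r .state <<< "$info")" = online ]
    [ "$(jq -r .raid_level <<< "$info")" = raid1 ]
    [ "$(jq -r .num_base_bdevs_discovered <<< "$info")" = 1 ]
    [ "$(jq -r .num_base_bdevs_operational <<< "$info")" = 1 ]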
00:19:05.934   17:02:58	-- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare
00:19:06.499  [2024-11-19 17:02:59.071621] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare
00:19:06.499  [2024-11-19 17:02:59.071699] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:19:06.499  [2024-11-19 17:02:59.076405] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000c3e0e0
00:19:06.499  [2024-11-19 17:02:59.078961] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:19:06.499   17:02:59	-- bdev/bdev_raid.sh@598 -- # sleep 1
00:19:07.433   17:03:00	-- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:19:07.433   17:03:00	-- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:19:07.433   17:03:00	-- bdev/bdev_raid.sh@184 -- # local process_type=rebuild
00:19:07.433   17:03:00	-- bdev/bdev_raid.sh@185 -- # local target=spare
00:19:07.433   17:03:00	-- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:19:07.433    17:03:00	-- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:19:07.433    17:03:00	-- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:19:07.691   17:03:00	-- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:19:07.691    "name": "raid_bdev1",
00:19:07.691    "uuid": "603e6eae-180e-4292-abfb-d2d14e72e2d3",
00:19:07.691    "strip_size_kb": 0,
00:19:07.691    "state": "online",
00:19:07.691    "raid_level": "raid1",
00:19:07.691    "superblock": true,
00:19:07.691    "num_base_bdevs": 2,
00:19:07.691    "num_base_bdevs_discovered": 2,
00:19:07.691    "num_base_bdevs_operational": 2,
00:19:07.691    "process": {
00:19:07.691      "type": "rebuild",
00:19:07.691      "target": "spare",
00:19:07.691      "progress": {
00:19:07.691        "blocks": 24576,
00:19:07.691        "percent": 38
00:19:07.691      }
00:19:07.691    },
00:19:07.691    "base_bdevs_list": [
00:19:07.691      {
00:19:07.691        "name": "spare",
00:19:07.691        "uuid": "b4eaf356-ca28-5630-8725-0f7662b6e85a",
00:19:07.691        "is_configured": true,
00:19:07.691        "data_offset": 2048,
00:19:07.691        "data_size": 63488
00:19:07.691      },
00:19:07.691      {
00:19:07.691        "name": "BaseBdev2",
00:19:07.691        "uuid": "667f8d68-68a2-55c0-aec9-0aea507f2059",
00:19:07.691        "is_configured": true,
00:19:07.691        "data_offset": 2048,
00:19:07.691        "data_size": 63488
00:19:07.691      }
00:19:07.691    ]
00:19:07.691  }'
00:19:07.691    17:03:00	-- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:19:07.691   17:03:00	-- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:19:07.691    17:03:00	-- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:19:07.691   17:03:00	-- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]]
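verify_raid_bdev_process keys off the optional process object: jq's // alternative operator yields "none" whenever no rebuild is running, so the same two filters serve both the in-progress and the finished cases. The in-progress assertions, as a standalone sketch:

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    info=$($rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
    [ "$(jq -r '.process.type // "none"' <<< "$info")" = rebuild ]
    [ "$(jq -r '.process.target // "none"' <<< "$info")" = spare ]
    jq -r '.process.progress.blocks' <<< "$info"    # 24576 of 63488 (~38%) above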
00:19:07.691   17:03:00	-- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare
00:19:07.951  [2024-11-19 17:03:00.689855] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:19:07.951  [2024-11-19 17:03:00.690425] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device
00:19:07.951  [2024-11-19 17:03:00.690518] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:19:07.951   17:03:00	-- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:19:07.951   17:03:00	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:19:07.951   17:03:00	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:19:07.951   17:03:00	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:19:07.951   17:03:00	-- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:19:07.951   17:03:00	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1
00:19:07.951   17:03:00	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:19:07.951   17:03:00	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:19:07.951   17:03:00	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:19:07.951   17:03:00	-- bdev/bdev_raid.sh@125 -- # local tmp
00:19:07.951    17:03:00	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:19:07.951    17:03:00	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:19:08.209   17:03:01	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:19:08.209    "name": "raid_bdev1",
00:19:08.209    "uuid": "603e6eae-180e-4292-abfb-d2d14e72e2d3",
00:19:08.209    "strip_size_kb": 0,
00:19:08.209    "state": "online",
00:19:08.209    "raid_level": "raid1",
00:19:08.209    "superblock": true,
00:19:08.209    "num_base_bdevs": 2,
00:19:08.209    "num_base_bdevs_discovered": 1,
00:19:08.209    "num_base_bdevs_operational": 1,
00:19:08.209    "base_bdevs_list": [
00:19:08.209      {
00:19:08.209        "name": null,
00:19:08.209        "uuid": "00000000-0000-0000-0000-000000000000",
00:19:08.209        "is_configured": false,
00:19:08.209        "data_offset": 2048,
00:19:08.209        "data_size": 63488
00:19:08.209      },
00:19:08.210      {
00:19:08.210        "name": "BaseBdev2",
00:19:08.210        "uuid": "667f8d68-68a2-55c0-aec9-0aea507f2059",
00:19:08.210        "is_configured": true,
00:19:08.210        "data_offset": 2048,
00:19:08.210        "data_size": 63488
00:19:08.210      }
00:19:08.210    ]
00:19:08.210  }'
00:19:08.210   17:03:01	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:19:08.210   17:03:01	-- common/autotest_common.sh@10 -- # set +x
00:19:09.144   17:03:01	-- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none
00:19:09.144   17:03:01	-- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:19:09.144   17:03:01	-- bdev/bdev_raid.sh@184 -- # local process_type=none
00:19:09.144   17:03:01	-- bdev/bdev_raid.sh@185 -- # local target=none
00:19:09.144   17:03:01	-- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:19:09.144    17:03:01	-- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:19:09.144    17:03:01	-- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:19:09.144   17:03:01	-- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:19:09.144    "name": "raid_bdev1",
00:19:09.144    "uuid": "603e6eae-180e-4292-abfb-d2d14e72e2d3",
00:19:09.144    "strip_size_kb": 0,
00:19:09.144    "state": "online",
00:19:09.144    "raid_level": "raid1",
00:19:09.144    "superblock": true,
00:19:09.144    "num_base_bdevs": 2,
00:19:09.144    "num_base_bdevs_discovered": 1,
00:19:09.144    "num_base_bdevs_operational": 1,
00:19:09.144    "base_bdevs_list": [
00:19:09.144      {
00:19:09.144        "name": null,
00:19:09.144        "uuid": "00000000-0000-0000-0000-000000000000",
00:19:09.144        "is_configured": false,
00:19:09.144        "data_offset": 2048,
00:19:09.144        "data_size": 63488
00:19:09.144      },
00:19:09.144      {
00:19:09.144        "name": "BaseBdev2",
00:19:09.144        "uuid": "667f8d68-68a2-55c0-aec9-0aea507f2059",
00:19:09.144        "is_configured": true,
00:19:09.144        "data_offset": 2048,
00:19:09.144        "data_size": 63488
00:19:09.144      }
00:19:09.144    ]
00:19:09.144  }'
00:19:09.144    17:03:01	-- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:19:09.402   17:03:02	-- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]]
00:19:09.402    17:03:02	-- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:19:09.402   17:03:02	-- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]]
00:19:09.402   17:03:02	-- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare
00:19:09.660  [2024-11-19 17:03:02.284004] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare
00:19:09.660  [2024-11-19 17:03:02.284072] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:19:09.660  [2024-11-19 17:03:02.288614] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000c3e280
00:19:09.660  [2024-11-19 17:03:02.291114] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:19:09.660   17:03:02	-- bdev/bdev_raid.sh@614 -- # sleep 1
00:19:10.594   17:03:03	-- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:19:10.594   17:03:03	-- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:19:10.594   17:03:03	-- bdev/bdev_raid.sh@184 -- # local process_type=rebuild
00:19:10.594   17:03:03	-- bdev/bdev_raid.sh@185 -- # local target=spare
00:19:10.594   17:03:03	-- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:19:10.594    17:03:03	-- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:19:10.594    17:03:03	-- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:19:10.853   17:03:03	-- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:19:10.853    "name": "raid_bdev1",
00:19:10.853    "uuid": "603e6eae-180e-4292-abfb-d2d14e72e2d3",
00:19:10.853    "strip_size_kb": 0,
00:19:10.853    "state": "online",
00:19:10.853    "raid_level": "raid1",
00:19:10.853    "superblock": true,
00:19:10.853    "num_base_bdevs": 2,
00:19:10.853    "num_base_bdevs_discovered": 2,
00:19:10.853    "num_base_bdevs_operational": 2,
00:19:10.853    "process": {
00:19:10.854      "type": "rebuild",
00:19:10.854      "target": "spare",
00:19:10.854      "progress": {
00:19:10.854        "blocks": 26624,
00:19:10.854        "percent": 41
00:19:10.854      }
00:19:10.854    },
00:19:10.854    "base_bdevs_list": [
00:19:10.854      {
00:19:10.854        "name": "spare",
00:19:10.854        "uuid": "b4eaf356-ca28-5630-8725-0f7662b6e85a",
00:19:10.854        "is_configured": true,
00:19:10.854        "data_offset": 2048,
00:19:10.854        "data_size": 63488
00:19:10.854      },
00:19:10.854      {
00:19:10.854        "name": "BaseBdev2",
00:19:10.854        "uuid": "667f8d68-68a2-55c0-aec9-0aea507f2059",
00:19:10.854        "is_configured": true,
00:19:10.854        "data_offset": 2048,
00:19:10.854        "data_size": 63488
00:19:10.854      }
00:19:10.854    ]
00:19:10.854  }'
00:19:10.854    17:03:03	-- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:19:11.111   17:03:03	-- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:19:11.111    17:03:03	-- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:19:11.111   17:03:03	-- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]]
00:19:11.111   17:03:03	-- bdev/bdev_raid.sh@617 -- # '[' true = true ']'
00:19:11.111   17:03:03	-- bdev/bdev_raid.sh@617 -- # '[' = false ']'
00:19:11.111  /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 617: [: =: unary operator expected
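The "unary operator expected" complaint is bash pointing at a quoting bug: the variable tested at bdev_raid.sh line 617 expands to nothing, so after word splitting the command collapses to '[' = false ']' and test sees '=' where it expects an operand. Because [ merely returns non-zero instead of aborting, the run happens to fall through to the intended branch, but the robust forms quote the expansion or use [[ ]] (the variable name below is illustrative, not the actual one in the script):

    io_wait=""                           # empty, as in the failing run above
    # [ $io_wait = false ]   -> '[ = false ]' -> "unary operator expected"
    if [ "$io_wait" = false ]; then echo "false branch"; fi    # safe: quoted
    if [[ $io_wait == false ]]; then echo "false branch"; fi   # safe: [[ ]] does not word-split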
00:19:11.111   17:03:03	-- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=2
00:19:11.111   17:03:03	-- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']'
00:19:11.111   17:03:03	-- bdev/bdev_raid.sh@644 -- # '[' 2 -gt 2 ']'
00:19:11.111   17:03:03	-- bdev/bdev_raid.sh@657 -- # local timeout=384
00:19:11.111   17:03:03	-- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout ))
00:19:11.111   17:03:03	-- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:19:11.111   17:03:03	-- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:19:11.111   17:03:03	-- bdev/bdev_raid.sh@184 -- # local process_type=rebuild
00:19:11.111   17:03:03	-- bdev/bdev_raid.sh@185 -- # local target=spare
00:19:11.111   17:03:03	-- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:19:11.111    17:03:03	-- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:19:11.111    17:03:03	-- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:19:11.369   17:03:04	-- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:19:11.369    "name": "raid_bdev1",
00:19:11.369    "uuid": "603e6eae-180e-4292-abfb-d2d14e72e2d3",
00:19:11.369    "strip_size_kb": 0,
00:19:11.369    "state": "online",
00:19:11.369    "raid_level": "raid1",
00:19:11.369    "superblock": true,
00:19:11.369    "num_base_bdevs": 2,
00:19:11.369    "num_base_bdevs_discovered": 2,
00:19:11.369    "num_base_bdevs_operational": 2,
00:19:11.369    "process": {
00:19:11.369      "type": "rebuild",
00:19:11.369      "target": "spare",
00:19:11.369      "progress": {
00:19:11.369        "blocks": 34816,
00:19:11.369        "percent": 54
00:19:11.369      }
00:19:11.369    },
00:19:11.369    "base_bdevs_list": [
00:19:11.369      {
00:19:11.369        "name": "spare",
00:19:11.369        "uuid": "b4eaf356-ca28-5630-8725-0f7662b6e85a",
00:19:11.369        "is_configured": true,
00:19:11.369        "data_offset": 2048,
00:19:11.369        "data_size": 63488
00:19:11.369      },
00:19:11.369      {
00:19:11.369        "name": "BaseBdev2",
00:19:11.369        "uuid": "667f8d68-68a2-55c0-aec9-0aea507f2059",
00:19:11.369        "is_configured": true,
00:19:11.369        "data_offset": 2048,
00:19:11.369        "data_size": 63488
00:19:11.369      }
00:19:11.369    ]
00:19:11.369  }'
00:19:11.369    17:03:04	-- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:19:11.369   17:03:04	-- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:19:11.369    17:03:04	-- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:19:11.369   17:03:04	-- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]]
00:19:11.369   17:03:04	-- bdev/bdev_raid.sh@662 -- # sleep 1
00:19:12.744   17:03:05	-- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout ))
00:19:12.744   17:03:05	-- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:19:12.744   17:03:05	-- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:19:12.744   17:03:05	-- bdev/bdev_raid.sh@184 -- # local process_type=rebuild
00:19:12.744   17:03:05	-- bdev/bdev_raid.sh@185 -- # local target=spare
00:19:12.744   17:03:05	-- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:19:12.744    17:03:05	-- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:19:12.744    17:03:05	-- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:19:12.744  [2024-11-19 17:03:05.410741] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1
00:19:12.744  [2024-11-19 17:03:05.410845] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1
00:19:12.744  [2024-11-19 17:03:05.411030] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:19:12.744   17:03:05	-- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:19:12.744    "name": "raid_bdev1",
00:19:12.744    "uuid": "603e6eae-180e-4292-abfb-d2d14e72e2d3",
00:19:12.744    "strip_size_kb": 0,
00:19:12.744    "state": "online",
00:19:12.744    "raid_level": "raid1",
00:19:12.744    "superblock": true,
00:19:12.744    "num_base_bdevs": 2,
00:19:12.744    "num_base_bdevs_discovered": 2,
00:19:12.744    "num_base_bdevs_operational": 2,
00:19:12.744    "base_bdevs_list": [
00:19:12.744      {
00:19:12.744        "name": "spare",
00:19:12.744        "uuid": "b4eaf356-ca28-5630-8725-0f7662b6e85a",
00:19:12.744        "is_configured": true,
00:19:12.744        "data_offset": 2048,
00:19:12.744        "data_size": 63488
00:19:12.744      },
00:19:12.744      {
00:19:12.744        "name": "BaseBdev2",
00:19:12.744        "uuid": "667f8d68-68a2-55c0-aec9-0aea507f2059",
00:19:12.744        "is_configured": true,
00:19:12.744        "data_offset": 2048,
00:19:12.744        "data_size": 63488
00:19:12.744      }
00:19:12.744    ]
00:19:12.744  }'
00:19:12.744    17:03:05	-- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:19:12.744   17:03:05	-- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]]
00:19:12.744    17:03:05	-- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:19:12.744   17:03:05	-- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]]
00:19:12.744   17:03:05	-- bdev/bdev_raid.sh@660 -- # break
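The surrounding wait loop bounds the rebuild with bash's SECONDS counter: @657 sets the deadline to 384 s, and each pass re-reads process.type; once the NOTICE at 17:03:05 reports the rebuild finished, the type reads back "none", the [[ none == rebuild ]] comparison fails, and @660 breaks out. The polling pattern, condensed into a sketch:

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    timeout=384                                   # deadline from the trace
    while (( SECONDS < timeout )); do
        t=$($rpc bdev_raid_get_bdevs all |
            jq -r '.[] | select(.name == "raid_bdev1") | .process.type // "none"')
        [[ $t == rebuild ]] || break              # rebuild finished (or never ran)
        sleep 1
    done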
00:19:12.744   17:03:05	-- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none
00:19:12.744   17:03:05	-- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:19:12.744   17:03:05	-- bdev/bdev_raid.sh@184 -- # local process_type=none
00:19:12.744   17:03:05	-- bdev/bdev_raid.sh@185 -- # local target=none
00:19:12.744   17:03:05	-- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:19:12.744    17:03:05	-- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:19:12.744    17:03:05	-- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:19:13.002   17:03:05	-- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:19:13.002    "name": "raid_bdev1",
00:19:13.002    "uuid": "603e6eae-180e-4292-abfb-d2d14e72e2d3",
00:19:13.002    "strip_size_kb": 0,
00:19:13.002    "state": "online",
00:19:13.002    "raid_level": "raid1",
00:19:13.002    "superblock": true,
00:19:13.002    "num_base_bdevs": 2,
00:19:13.002    "num_base_bdevs_discovered": 2,
00:19:13.002    "num_base_bdevs_operational": 2,
00:19:13.002    "base_bdevs_list": [
00:19:13.002      {
00:19:13.002        "name": "spare",
00:19:13.002        "uuid": "b4eaf356-ca28-5630-8725-0f7662b6e85a",
00:19:13.002        "is_configured": true,
00:19:13.002        "data_offset": 2048,
00:19:13.002        "data_size": 63488
00:19:13.002      },
00:19:13.002      {
00:19:13.002        "name": "BaseBdev2",
00:19:13.002        "uuid": "667f8d68-68a2-55c0-aec9-0aea507f2059",
00:19:13.002        "is_configured": true,
00:19:13.002        "data_offset": 2048,
00:19:13.002        "data_size": 63488
00:19:13.002      }
00:19:13.002    ]
00:19:13.002  }'
00:19:13.002    17:03:05	-- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:19:13.261   17:03:05	-- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]]
00:19:13.261    17:03:05	-- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:19:13.261   17:03:05	-- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]]
00:19:13.261   17:03:05	-- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:19:13.261   17:03:05	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:19:13.261   17:03:05	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:19:13.261   17:03:05	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:19:13.261   17:03:05	-- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:19:13.261   17:03:05	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2
00:19:13.261   17:03:05	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:19:13.261   17:03:05	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:19:13.261   17:03:05	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:19:13.261   17:03:05	-- bdev/bdev_raid.sh@125 -- # local tmp
00:19:13.261    17:03:05	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:19:13.261    17:03:05	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:19:13.520   17:03:06	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:19:13.520    "name": "raid_bdev1",
00:19:13.520    "uuid": "603e6eae-180e-4292-abfb-d2d14e72e2d3",
00:19:13.520    "strip_size_kb": 0,
00:19:13.520    "state": "online",
00:19:13.520    "raid_level": "raid1",
00:19:13.520    "superblock": true,
00:19:13.520    "num_base_bdevs": 2,
00:19:13.520    "num_base_bdevs_discovered": 2,
00:19:13.520    "num_base_bdevs_operational": 2,
00:19:13.520    "base_bdevs_list": [
00:19:13.520      {
00:19:13.520        "name": "spare",
00:19:13.520        "uuid": "b4eaf356-ca28-5630-8725-0f7662b6e85a",
00:19:13.520        "is_configured": true,
00:19:13.520        "data_offset": 2048,
00:19:13.520        "data_size": 63488
00:19:13.520      },
00:19:13.520      {
00:19:13.520        "name": "BaseBdev2",
00:19:13.520        "uuid": "667f8d68-68a2-55c0-aec9-0aea507f2059",
00:19:13.520        "is_configured": true,
00:19:13.520        "data_offset": 2048,
00:19:13.520        "data_size": 63488
00:19:13.520      }
00:19:13.520    ]
00:19:13.520  }'
00:19:13.520   17:03:06	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:19:13.520   17:03:06	-- common/autotest_common.sh@10 -- # set +x
00:19:14.086   17:03:06	-- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1
00:19:14.344  [2024-11-19 17:03:07.036172] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:19:14.344  [2024-11-19 17:03:07.036229] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:19:14.344  [2024-11-19 17:03:07.036357] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:19:14.344  [2024-11-19 17:03:07.036449] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:19:14.344  [2024-11-19 17:03:07.036463] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007e80 name raid_bdev1, state offline
00:19:14.344    17:03:07	-- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:19:14.344    17:03:07	-- bdev/bdev_raid.sh@671 -- # jq length
00:19:14.602   17:03:07	-- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]]
00:19:14.602   17:03:07	-- bdev/bdev_raid.sh@673 -- # '[' false = true ']'
00:19:14.602   17:03:07	-- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1'
00:19:14.602   17:03:07	-- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock
00:19:14.602   17:03:07	-- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare')
00:19:14.602   17:03:07	-- bdev/nbd_common.sh@10 -- # local bdev_list
00:19:14.602   17:03:07	-- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:19:14.602   17:03:07	-- bdev/nbd_common.sh@11 -- # local nbd_list
00:19:14.602   17:03:07	-- bdev/nbd_common.sh@12 -- # local i
00:19:14.602   17:03:07	-- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:19:14.602   17:03:07	-- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:19:14.602   17:03:07	-- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0
00:19:14.860  /dev/nbd0
00:19:14.860    17:03:07	-- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:19:14.860   17:03:07	-- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:19:14.860   17:03:07	-- common/autotest_common.sh@866 -- # local nbd_name=nbd0
00:19:14.860   17:03:07	-- common/autotest_common.sh@867 -- # local i
00:19:14.860   17:03:07	-- common/autotest_common.sh@869 -- # (( i = 1 ))
00:19:14.860   17:03:07	-- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:19:14.860   17:03:07	-- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions
00:19:14.860   17:03:07	-- common/autotest_common.sh@871 -- # break
00:19:14.860   17:03:07	-- common/autotest_common.sh@882 -- # (( i = 1 ))
00:19:14.860   17:03:07	-- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:19:14.860   17:03:07	-- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:19:14.860  1+0 records in
00:19:14.860  1+0 records out
00:19:14.860  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000324344 s, 12.6 MB/s
00:19:14.860    17:03:07	-- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:19:14.860   17:03:07	-- common/autotest_common.sh@884 -- # size=4096
00:19:14.860   17:03:07	-- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:19:14.860   17:03:07	-- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:19:14.860   17:03:07	-- common/autotest_common.sh@887 -- # return 0
00:19:14.860   17:03:07	-- bdev/nbd_common.sh@14 -- # (( i++ ))
00:19:14.860   17:03:07	-- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:19:14.860   17:03:07	-- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1
00:19:15.118  /dev/nbd1
00:19:15.118    17:03:07	-- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:19:15.118   17:03:07	-- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:19:15.118   17:03:07	-- common/autotest_common.sh@866 -- # local nbd_name=nbd1
00:19:15.118   17:03:07	-- common/autotest_common.sh@867 -- # local i
00:19:15.118   17:03:07	-- common/autotest_common.sh@869 -- # (( i = 1 ))
00:19:15.118   17:03:07	-- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:19:15.118   17:03:07	-- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions
00:19:15.118   17:03:07	-- common/autotest_common.sh@871 -- # break
00:19:15.118   17:03:07	-- common/autotest_common.sh@882 -- # (( i = 1 ))
00:19:15.118   17:03:07	-- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:19:15.118   17:03:07	-- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:19:15.118  1+0 records in
00:19:15.118  1+0 records out
00:19:15.118  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00042029 s, 9.7 MB/s
00:19:15.118    17:03:07	-- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:19:15.118   17:03:07	-- common/autotest_common.sh@884 -- # size=4096
00:19:15.118   17:03:07	-- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:19:15.118   17:03:07	-- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:19:15.118   17:03:07	-- common/autotest_common.sh@887 -- # return 0
00:19:15.118   17:03:07	-- bdev/nbd_common.sh@14 -- # (( i++ ))
00:19:15.118   17:03:07	-- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:19:15.118   17:03:07	-- bdev/bdev_raid.sh@688 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1
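cmp -i 1048576 skips the first 1 MiB of both nbd devices before comparing: 2048 data_offset blocks x 512 B = 1,048,576 bytes of superblock/metadata region, which may legitimately differ per base bdev. Everything past that point, i.e. the rebuilt data, must match byte for byte. The same check with the offset derived rather than hard-coded (sketch, reusing the data_offset probe from earlier):

    skip_bytes=$(( data_offset * 512 ))       # 2048 * 512 = 1048576
    cmp -i "$skip_bytes" /dev/nbd0 /dev/nbd1  # non-zero exit on first mismatch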
00:19:15.437   17:03:07	-- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1'
00:19:15.437   17:03:07	-- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock
00:19:15.437   17:03:07	-- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:19:15.437   17:03:07	-- bdev/nbd_common.sh@50 -- # local nbd_list
00:19:15.437   17:03:07	-- bdev/nbd_common.sh@51 -- # local i
00:19:15.437   17:03:07	-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:19:15.437   17:03:07	-- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0
00:19:15.710    17:03:08	-- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:19:15.710   17:03:08	-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:19:15.710   17:03:08	-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:19:15.710   17:03:08	-- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:19:15.710   17:03:08	-- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:19:15.710   17:03:08	-- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:19:15.710   17:03:08	-- bdev/nbd_common.sh@41 -- # break
00:19:15.710   17:03:08	-- bdev/nbd_common.sh@45 -- # return 0
00:19:15.710   17:03:08	-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:19:15.710   17:03:08	-- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1
00:19:15.968    17:03:08	-- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:19:15.968   17:03:08	-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:19:15.968   17:03:08	-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:19:15.968   17:03:08	-- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:19:15.968   17:03:08	-- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:19:15.968   17:03:08	-- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:19:15.968   17:03:08	-- bdev/nbd_common.sh@41 -- # break
00:19:15.968   17:03:08	-- bdev/nbd_common.sh@45 -- # return 0
00:19:15.968   17:03:08	-- bdev/bdev_raid.sh@692 -- # '[' true = true ']'
00:19:15.968   17:03:08	-- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}"
00:19:15.968   17:03:08	-- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev1 ']'
00:19:15.968   17:03:08	-- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1
00:19:16.226   17:03:08	-- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1
00:19:16.485  [2024-11-19 17:03:09.166631] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc
00:19:16.485  [2024-11-19 17:03:09.166756] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:19:16.485  [2024-11-19 17:03:09.166815] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80
00:19:16.485  [2024-11-19 17:03:09.166886] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:19:16.485  [2024-11-19 17:03:09.169655] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:19:16.485  [2024-11-19 17:03:09.169758] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:19:16.485  [2024-11-19 17:03:09.169863] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev1
00:19:16.485  [2024-11-19 17:03:09.169968] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:19:16.485  BaseBdev1
00:19:16.485   17:03:09	-- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}"
00:19:16.485   17:03:09	-- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev2 ']'
00:19:16.485   17:03:09	-- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev2
00:19:16.743   17:03:09	-- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2
00:19:17.001  [2024-11-19 17:03:09.690745] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc
00:19:17.001  [2024-11-19 17:03:09.690915] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:19:17.001  [2024-11-19 17:03:09.690993] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680
00:19:17.001  [2024-11-19 17:03:09.691025] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:19:17.001  [2024-11-19 17:03:09.691485] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:19:17.001  [2024-11-19 17:03:09.691555] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2
00:19:17.001  [2024-11-19 17:03:09.691652] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev2
00:19:17.001  [2024-11-19 17:03:09.691667] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev2 (3) greater than existing raid bdev raid_bdev1 (1)
00:19:17.001  [2024-11-19 17:03:09.691676] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:19:17.001  [2024-11-19 17:03:09.691711] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009380 name raid_bdev1, state configuring
00:19:17.001  [2024-11-19 17:03:09.691776] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:19:17.001  BaseBdev2
00:19:17.001   17:03:09	-- bdev/bdev_raid.sh@701 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare
00:19:17.260   17:03:10	-- bdev/bdev_raid.sh@702 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare
00:19:17.518  [2024-11-19 17:03:10.298941] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay
00:19:17.518  [2024-11-19 17:03:10.299064] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:19:17.518  [2024-11-19 17:03:10.299116] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80
00:19:17.518  [2024-11-19 17:03:10.299144] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:19:17.518  [2024-11-19 17:03:10.299641] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:19:17.518  [2024-11-19 17:03:10.299704] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare
00:19:17.518  [2024-11-19 17:03:10.299810] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev spare
00:19:17.518  [2024-11-19 17:03:10.299844] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:19:17.518  spare
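Deleting and re-creating the passthru bdevs forces bdev examine to re-read each base device; because this test wrote superblocks, the raid module finds them ("raid superblock found on bdev ...") and re-assembles raid_bdev1 on its own, with BaseBdev2's higher superblock sequence number (3 vs 1) evicting the stale configuring instance. The recycle step as issued above, collected into one sketch (note spare sits on spare_delay, not a _malloc bdev):

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    $rpc bdev_passthru_delete BaseBdev1
    $rpc bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1
    $rpc bdev_passthru_delete BaseBdev2
    $rpc bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2
    $rpc bdev_passthru_delete spare
    $rpc bdev_passthru_create -b spare_delay -p spare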
00:19:17.518   17:03:10	-- bdev/bdev_raid.sh@704 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:19:17.518   17:03:10	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:19:17.518   17:03:10	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:19:17.518   17:03:10	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:19:17.518   17:03:10	-- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:19:17.518   17:03:10	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2
00:19:17.518   17:03:10	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:19:17.518   17:03:10	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:19:17.518   17:03:10	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:19:17.518   17:03:10	-- bdev/bdev_raid.sh@125 -- # local tmp
00:19:17.518    17:03:10	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:19:17.518    17:03:10	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:19:17.777  [2024-11-19 17:03:10.399982] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009980
00:19:17.777  [2024-11-19 17:03:10.400062] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:19:17.777  [2024-11-19 17:03:10.400315] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000caeca0
00:19:17.777  [2024-11-19 17:03:10.400955] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009980
00:19:17.777  [2024-11-19 17:03:10.400995] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009980
00:19:17.777  [2024-11-19 17:03:10.401194] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:19:18.036   17:03:10	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:19:18.036    "name": "raid_bdev1",
00:19:18.036    "uuid": "603e6eae-180e-4292-abfb-d2d14e72e2d3",
00:19:18.036    "strip_size_kb": 0,
00:19:18.036    "state": "online",
00:19:18.036    "raid_level": "raid1",
00:19:18.036    "superblock": true,
00:19:18.036    "num_base_bdevs": 2,
00:19:18.036    "num_base_bdevs_discovered": 2,
00:19:18.036    "num_base_bdevs_operational": 2,
00:19:18.036    "base_bdevs_list": [
00:19:18.036      {
00:19:18.036        "name": "spare",
00:19:18.036        "uuid": "b4eaf356-ca28-5630-8725-0f7662b6e85a",
00:19:18.036        "is_configured": true,
00:19:18.036        "data_offset": 2048,
00:19:18.036        "data_size": 63488
00:19:18.036      },
00:19:18.036      {
00:19:18.036        "name": "BaseBdev2",
00:19:18.036        "uuid": "667f8d68-68a2-55c0-aec9-0aea507f2059",
00:19:18.036        "is_configured": true,
00:19:18.036        "data_offset": 2048,
00:19:18.036        "data_size": 63488
00:19:18.036      }
00:19:18.036    ]
00:19:18.036  }'
00:19:18.036   17:03:10	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:19:18.036   17:03:10	-- common/autotest_common.sh@10 -- # set +x
00:19:18.603   17:03:11	-- bdev/bdev_raid.sh@705 -- # verify_raid_bdev_process raid_bdev1 none none
00:19:18.603   17:03:11	-- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:19:18.603   17:03:11	-- bdev/bdev_raid.sh@184 -- # local process_type=none
00:19:18.603   17:03:11	-- bdev/bdev_raid.sh@185 -- # local target=none
00:19:18.603   17:03:11	-- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:19:18.603    17:03:11	-- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:19:18.603    17:03:11	-- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:19:18.861   17:03:11	-- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:19:18.861    "name": "raid_bdev1",
00:19:18.861    "uuid": "603e6eae-180e-4292-abfb-d2d14e72e2d3",
00:19:18.861    "strip_size_kb": 0,
00:19:18.861    "state": "online",
00:19:18.861    "raid_level": "raid1",
00:19:18.861    "superblock": true,
00:19:18.861    "num_base_bdevs": 2,
00:19:18.861    "num_base_bdevs_discovered": 2,
00:19:18.861    "num_base_bdevs_operational": 2,
00:19:18.861    "base_bdevs_list": [
00:19:18.861      {
00:19:18.861        "name": "spare",
00:19:18.861        "uuid": "b4eaf356-ca28-5630-8725-0f7662b6e85a",
00:19:18.861        "is_configured": true,
00:19:18.861        "data_offset": 2048,
00:19:18.861        "data_size": 63488
00:19:18.861      },
00:19:18.861      {
00:19:18.861        "name": "BaseBdev2",
00:19:18.861        "uuid": "667f8d68-68a2-55c0-aec9-0aea507f2059",
00:19:18.861        "is_configured": true,
00:19:18.861        "data_offset": 2048,
00:19:18.861        "data_size": 63488
00:19:18.861      }
00:19:18.861    ]
00:19:18.861  }'
00:19:18.861    17:03:11	-- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:19:18.861   17:03:11	-- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]]
00:19:18.861    17:03:11	-- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:19:18.861   17:03:11	-- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]]
00:19:18.861    17:03:11	-- bdev/bdev_raid.sh@706 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:19:18.861    17:03:11	-- bdev/bdev_raid.sh@706 -- # jq -r '.[].base_bdevs_list[0].name'
00:19:19.119   17:03:11	-- bdev/bdev_raid.sh@706 -- # [[ spare == \s\p\a\r\e ]]
00:19:19.119   17:03:11	-- bdev/bdev_raid.sh@709 -- # killprocess 133214
00:19:19.119   17:03:11	-- common/autotest_common.sh@936 -- # '[' -z 133214 ']'
00:19:19.119   17:03:11	-- common/autotest_common.sh@940 -- # kill -0 133214
00:19:19.120    17:03:11	-- common/autotest_common.sh@941 -- # uname
00:19:19.120   17:03:11	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:19:19.120    17:03:11	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 133214
00:19:19.378  killing process with pid 133214
00:19:19.378  Received shutdown signal, test time was about 60.000000 seconds
00:19:19.378  
00:19:19.378                                                                                                  Latency(us)
00:19:19.378  
[2024-11-19T17:03:12.242Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:19:19.378  
[2024-11-19T17:03:12.242Z]  ===================================================================================================================
00:19:19.378  
[2024-11-19T17:03:12.242Z]  Total                       :                  0.00       0.00       0.00     0.00       0.00 18446744073709551616.00       0.00
00:19:19.378   17:03:11	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:19:19.378   17:03:11	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:19:19.378   17:03:11	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 133214'
00:19:19.378   17:03:11	-- common/autotest_common.sh@955 -- # kill 133214
00:19:19.378   17:03:11	-- common/autotest_common.sh@960 -- # wait 133214
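killprocess refuses to signal anything it cannot positively identify: it requires a non-empty pid that is still alive (kill -0), and on Linux inspects the process's comm so a sudo wrapper is never killed by mistake; only then does it kill and wait, which is what makes bdevperf flush the shutdown latency table above. A sketch condensed from the autotest_common.sh trace (the sudo branch is elided here; the real helper handles it separately):

    killprocess() {
        local pid=$1 name
        [ -z "$pid" ] && return 1
        kill -0 "$pid" || return 1                # must still be running
        if [ "$(uname)" = Linux ]; then
            name=$(ps --no-headers -o comm= "$pid")
            [ "$name" = sudo ] && return 1        # never signal a sudo wrapper
        fi
        echo "killing process with pid $pid"
        kill "$pid" && wait "$pid"
    }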
00:19:19.378  [2024-11-19 17:03:11.979521] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:19:19.378  [2024-11-19 17:03:11.979639] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:19:19.378  [2024-11-19 17:03:11.979713] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:19:19.378  [2024-11-19 17:03:11.979724] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009980 name raid_bdev1, state offline
00:19:19.378  [2024-11-19 17:03:12.013095] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:19:19.637   17:03:12	-- bdev/bdev_raid.sh@711 -- # return 0
00:19:19.637  
00:19:19.637  real	0m25.736s
00:19:19.637  user	0m37.338s
00:19:19.637  sys	0m5.710s
00:19:19.637   17:03:12	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:19:19.637   17:03:12	-- common/autotest_common.sh@10 -- # set +x
00:19:19.637  ************************************
00:19:19.637  END TEST raid_rebuild_test_sb
00:19:19.637  ************************************
00:19:19.637   17:03:12	-- bdev/bdev_raid.sh@737 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 2 false true
00:19:19.637   17:03:12	-- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']'
00:19:19.637   17:03:12	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:19:19.637   17:03:12	-- common/autotest_common.sh@10 -- # set +x
00:19:19.637  ************************************
00:19:19.637  START TEST raid_rebuild_test_io
00:19:19.637  ************************************
00:19:19.637   17:03:12	-- common/autotest_common.sh@1114 -- # raid_rebuild_test raid1 2 false true
00:19:19.637   17:03:12	-- bdev/bdev_raid.sh@517 -- # local raid_level=raid1
00:19:19.637   17:03:12	-- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=2
00:19:19.637   17:03:12	-- bdev/bdev_raid.sh@519 -- # local superblock=false
00:19:19.637   17:03:12	-- bdev/bdev_raid.sh@520 -- # local background_io=true
00:19:19.637    17:03:12	-- bdev/bdev_raid.sh@521 -- # (( i = 1 ))
00:19:19.637    17:03:12	-- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs ))
00:19:19.637    17:03:12	-- bdev/bdev_raid.sh@521 -- # echo BaseBdev1
00:19:19.637    17:03:12	-- bdev/bdev_raid.sh@521 -- # (( i++ ))
00:19:19.637    17:03:12	-- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs ))
00:19:19.637    17:03:12	-- bdev/bdev_raid.sh@521 -- # echo BaseBdev2
00:19:19.637    17:03:12	-- bdev/bdev_raid.sh@521 -- # (( i++ ))
00:19:19.637    17:03:12	-- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs ))
00:19:19.637   17:03:12	-- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2')
00:19:19.637   17:03:12	-- bdev/bdev_raid.sh@521 -- # local base_bdevs
00:19:19.637   17:03:12	-- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1
00:19:19.637   17:03:12	-- bdev/bdev_raid.sh@523 -- # local strip_size
00:19:19.637   17:03:12	-- bdev/bdev_raid.sh@524 -- # local create_arg
00:19:19.637   17:03:12	-- bdev/bdev_raid.sh@525 -- # local raid_bdev_size
00:19:19.637   17:03:12	-- bdev/bdev_raid.sh@526 -- # local data_offset
00:19:19.637   17:03:12	-- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']'
00:19:19.637   17:03:12	-- bdev/bdev_raid.sh@536 -- # strip_size=0
00:19:19.637   17:03:12	-- bdev/bdev_raid.sh@539 -- # '[' false = true ']'
00:19:19.637   17:03:12	-- bdev/bdev_raid.sh@544 -- # raid_pid=133836
00:19:19.637   17:03:12	-- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid
00:19:19.637   17:03:12	-- bdev/bdev_raid.sh@545 -- # waitforlisten 133836 /var/tmp/spdk-raid.sock
00:19:19.637   17:03:12	-- common/autotest_common.sh@829 -- # '[' -z 133836 ']'
00:19:19.637   17:03:12	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock
00:19:19.637   17:03:12	-- common/autotest_common.sh@834 -- # local max_retries=100
00:19:19.637   17:03:12	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...'
00:19:19.637  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...
00:19:19.637   17:03:12	-- common/autotest_common.sh@838 -- # xtrace_disable
00:19:19.637   17:03:12	-- common/autotest_common.sh@10 -- # set +x
00:19:19.637  [2024-11-19 17:03:12.473957] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:19:19.637  I/O size of 3145728 is greater than zero copy threshold (65536).
00:19:19.637  Zero copy mechanism will not be used.
00:19:19.637  [2024-11-19 17:03:12.474311] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid133836 ]
00:19:19.895  [2024-11-19 17:03:12.628604] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:19:19.895  [2024-11-19 17:03:12.702744] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:19:20.153  [2024-11-19 17:03:12.759780] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
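This run re-enters raid_rebuild_test as raid_rebuild_test raid1 2 false true: raid1 across two base bdevs, no superblock, background I/O on. With background_io=true, bdevperf itself is the test app (-t 60 -w randrw -M 50 -o 3M -q 2), and its 3 MiB I/O size is what trips the zero-copy-threshold notice. The positional binding, as read off @517-@520 above (a sketch of the signature only, body elided):

    raid_rebuild_test() {
        local raid_level=$1        # raid1
        local num_base_bdevs=$2    # 2
        local superblock=$3        # false -> no reserved region, data_offset 0
        local background_io=$4     # true  -> bdevperf randrw traffic during rebuild
        # ... (body as traced above)
    }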
00:19:20.721   17:03:13	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:19:20.721   17:03:13	-- common/autotest_common.sh@862 -- # return 0
00:19:20.721   17:03:13	-- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}"
00:19:20.721   17:03:13	-- bdev/bdev_raid.sh@549 -- # '[' false = true ']'
00:19:20.721   17:03:13	-- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1
00:19:20.979  BaseBdev1
00:19:21.236   17:03:13	-- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}"
00:19:21.236   17:03:13	-- bdev/bdev_raid.sh@549 -- # '[' false = true ']'
00:19:21.236   17:03:13	-- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2
00:19:21.494  BaseBdev2
00:19:21.494   17:03:14	-- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc
00:19:22.058  spare_malloc
00:19:22.058   17:03:14	-- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000
00:19:22.316  spare_delay
00:19:22.316   17:03:14	-- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare
00:19:22.575  [2024-11-19 17:03:15.194819] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay
00:19:22.575  [2024-11-19 17:03:15.194980] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:19:22.575  [2024-11-19 17:03:15.195034] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80
00:19:22.575  [2024-11-19 17:03:15.195099] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:19:22.575  [2024-11-19 17:03:15.198213] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:19:22.575  [2024-11-19 17:03:15.198324] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare
00:19:22.575  spare
00:19:22.575   17:03:15	-- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1
00:19:22.834  [2024-11-19 17:03:15.490965] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:19:22.834  [2024-11-19 17:03:15.493683] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:19:22.834  [2024-11-19 17:03:15.493814] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280
00:19:22.834  [2024-11-19 17:03:15.493827] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512
00:19:22.834  [2024-11-19 17:03:15.494089] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002120
00:19:22.834  [2024-11-19 17:03:15.494572] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280
00:19:22.834  [2024-11-19 17:03:15.494600] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000007280
00:19:22.834  [2024-11-19 17:03:15.494925] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:19:22.834   17:03:15	-- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:19:22.834   17:03:15	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:19:22.834   17:03:15	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:19:22.834   17:03:15	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:19:22.834   17:03:15	-- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:19:22.834   17:03:15	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2
00:19:22.834   17:03:15	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:19:22.834   17:03:15	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:19:22.834   17:03:15	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:19:22.834   17:03:15	-- bdev/bdev_raid.sh@125 -- # local tmp
00:19:22.834    17:03:15	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:19:22.834    17:03:15	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:19:23.092   17:03:15	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:19:23.092    "name": "raid_bdev1",
00:19:23.092    "uuid": "a736bcfa-eeee-4675-b443-212fc5fc2ae5",
00:19:23.092    "strip_size_kb": 0,
00:19:23.092    "state": "online",
00:19:23.092    "raid_level": "raid1",
00:19:23.092    "superblock": false,
00:19:23.092    "num_base_bdevs": 2,
00:19:23.092    "num_base_bdevs_discovered": 2,
00:19:23.092    "num_base_bdevs_operational": 2,
00:19:23.092    "base_bdevs_list": [
00:19:23.092      {
00:19:23.092        "name": "BaseBdev1",
00:19:23.092        "uuid": "b98c183d-af06-4222-b0f4-ad6899e00de6",
00:19:23.092        "is_configured": true,
00:19:23.092        "data_offset": 0,
00:19:23.092        "data_size": 65536
00:19:23.092      },
00:19:23.092      {
00:19:23.092        "name": "BaseBdev2",
00:19:23.092        "uuid": "1affe978-1803-4310-906e-6280a8c3ec27",
00:19:23.092        "is_configured": true,
00:19:23.092        "data_offset": 0,
00:19:23.092        "data_size": 65536
00:19:23.092      }
00:19:23.092    ]
00:19:23.092  }'
00:19:23.092   17:03:15	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:19:23.092   17:03:15	-- common/autotest_common.sh@10 -- # set +x
00:19:23.659    17:03:16	-- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1
00:19:23.659    17:03:16	-- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks'
00:19:23.918  [2024-11-19 17:03:16.739561] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:19:23.918   17:03:16	-- bdev/bdev_raid.sh@567 -- # raid_bdev_size=65536
00:19:23.918    17:03:16	-- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset'
00:19:23.918    17:03:16	-- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:19:24.176   17:03:16	-- bdev/bdev_raid.sh@570 -- # data_offset=0
00:19:24.176   17:03:16	-- bdev/bdev_raid.sh@572 -- # '[' true = true ']'
00:19:24.176   17:03:16	-- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1
00:19:24.176   17:03:16	-- bdev/bdev_raid.sh@574 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests
00:19:24.435  [2024-11-19 17:03:17.129794] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000021f0
00:19:24.435  I/O size of 3145728 is greater than zero copy threshold (65536).
00:19:24.435  Zero copy mechanism will not be used.
00:19:24.435  Running I/O for 60 seconds...
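bdevperf warns that the 3 MiB request size exceeds its 65536-byte zero-copy threshold, so it falls back to bounce buffers for this run. The two xtrace lines just above print out of order, which suggests the perform_tests helper (bdev_raid.sh line 574) runs in the background while line 591 pulls BaseBdev1 out of the mirror, exercising the array under I/O as it degrades. A sketch of that pattern, under that interpretation and with the same tools as in this log:

    # Sketch: drive background I/O, then drop one mirror leg.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/spdk-raid.sock perform_tests &
    perf_pid=$!
    "$rpc" -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1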
00:19:24.435  [2024-11-19 17:03:17.274185] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:19:24.435  [2024-11-19 17:03:17.282042] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d0000021f0
00:19:24.693   17:03:17	-- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:19:24.693   17:03:17	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:19:24.693   17:03:17	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:19:24.693   17:03:17	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:19:24.693   17:03:17	-- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:19:24.693   17:03:17	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1
00:19:24.693   17:03:17	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:19:24.693   17:03:17	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:19:24.693   17:03:17	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:19:24.693   17:03:17	-- bdev/bdev_raid.sh@125 -- # local tmp
00:19:24.693    17:03:17	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:19:24.693    17:03:17	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:19:24.952   17:03:17	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:19:24.952    "name": "raid_bdev1",
00:19:24.952    "uuid": "a736bcfa-eeee-4675-b443-212fc5fc2ae5",
00:19:24.952    "strip_size_kb": 0,
00:19:24.952    "state": "online",
00:19:24.952    "raid_level": "raid1",
00:19:24.952    "superblock": false,
00:19:24.952    "num_base_bdevs": 2,
00:19:24.952    "num_base_bdevs_discovered": 1,
00:19:24.952    "num_base_bdevs_operational": 1,
00:19:24.952    "base_bdevs_list": [
00:19:24.952      {
00:19:24.952        "name": null,
00:19:24.952        "uuid": "00000000-0000-0000-0000-000000000000",
00:19:24.952        "is_configured": false,
00:19:24.952        "data_offset": 0,
00:19:24.952        "data_size": 65536
00:19:24.952      },
00:19:24.952      {
00:19:24.952        "name": "BaseBdev2",
00:19:24.952        "uuid": "1affe978-1803-4310-906e-6280a8c3ec27",
00:19:24.952        "is_configured": true,
00:19:24.952        "data_offset": 0,
00:19:24.952        "data_size": 65536
00:19:24.952      }
00:19:24.952    ]
00:19:24.952  }'
00:19:24.952   17:03:17	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:19:24.952   17:03:17	-- common/autotest_common.sh@10 -- # set +x
00:19:25.519   17:03:18	-- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare
00:19:25.778  [2024-11-19 17:03:18.628803] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare
00:19:25.778  [2024-11-19 17:03:18.628871] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:19:26.058   17:03:18	-- bdev/bdev_raid.sh@598 -- # sleep 1
00:19:26.058  [2024-11-19 17:03:18.701655] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000022c0
00:19:26.058  [2024-11-19 17:03:18.704596] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:19:26.058  [2024-11-19 17:03:18.831302] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144
00:19:26.058  [2024-11-19 17:03:18.831940] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144
00:19:26.359  [2024-11-19 17:03:18.951969] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144
00:19:26.359  [2024-11-19 17:03:18.952331] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144
00:19:26.617  [2024-11-19 17:03:19.293219] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288
00:19:26.617  [2024-11-19 17:03:19.293830] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288
00:19:26.617  [2024-11-19 17:03:19.428003] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288
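Re-adding a base bdev with bdev_raid_add_base_bdev kicks off a rebuild: the module claims the new device and then walks the array in fixed windows, each raid_bdev_submit_rw_request debug line above reporting one chunk (process_offset) inside the current window (offset_begin..offset_end). A hedged sketch of waiting for the rebuild to become visible, using the same RPCs:

    # Sketch: add the spare back and poll until a rebuild process is reported.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$rpc" -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare
    until "$rpc" -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all |
          jq -e '.[] | select(.name == "raid_bdev1") | .process.type == "rebuild"' >/dev/null; do
        sleep 0.1
    done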
00:19:26.875   17:03:19	-- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:19:26.875   17:03:19	-- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:19:26.875   17:03:19	-- bdev/bdev_raid.sh@184 -- # local process_type=rebuild
00:19:26.875   17:03:19	-- bdev/bdev_raid.sh@185 -- # local target=spare
00:19:26.875   17:03:19	-- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:19:26.875    17:03:19	-- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:19:26.875    17:03:19	-- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:19:27.132  [2024-11-19 17:03:19.866662] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432
00:19:27.391   17:03:19	-- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:19:27.391    "name": "raid_bdev1",
00:19:27.391    "uuid": "a736bcfa-eeee-4675-b443-212fc5fc2ae5",
00:19:27.391    "strip_size_kb": 0,
00:19:27.391    "state": "online",
00:19:27.391    "raid_level": "raid1",
00:19:27.391    "superblock": false,
00:19:27.391    "num_base_bdevs": 2,
00:19:27.391    "num_base_bdevs_discovered": 2,
00:19:27.391    "num_base_bdevs_operational": 2,
00:19:27.391    "process": {
00:19:27.391      "type": "rebuild",
00:19:27.391      "target": "spare",
00:19:27.391      "progress": {
00:19:27.391        "blocks": 16384,
00:19:27.391        "percent": 25
00:19:27.391      }
00:19:27.391    },
00:19:27.391    "base_bdevs_list": [
00:19:27.391      {
00:19:27.391        "name": "spare",
00:19:27.391        "uuid": "92ab4566-9ecd-5dd7-aee2-714e9d42dd32",
00:19:27.391        "is_configured": true,
00:19:27.391        "data_offset": 0,
00:19:27.391        "data_size": 65536
00:19:27.391      },
00:19:27.391      {
00:19:27.391        "name": "BaseBdev2",
00:19:27.391        "uuid": "1affe978-1803-4310-906e-6280a8c3ec27",
00:19:27.391        "is_configured": true,
00:19:27.391        "data_offset": 0,
00:19:27.391        "data_size": 65536
00:19:27.391      }
00:19:27.391    ]
00:19:27.391  }'
00:19:27.391    17:03:19	-- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:19:27.391   17:03:20	-- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:19:27.391    17:03:20	-- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:19:27.391   17:03:20	-- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]]
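The process checks above rely on jq's alternative operator: '.process.type // "none"' substitutes the literal "none" whenever no process object is present, so the same string comparison works both during and after a rebuild. For example:

    echo '{"name":"raid_bdev1"}' | jq -r '.process.type // "none"'                           # -> none
    echo '{"process":{"type":"rebuild","target":"spare"}}' | jq -r '.process.type // "none"' # -> rebuild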
00:19:27.391   17:03:20	-- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare
00:19:27.391  [2024-11-19 17:03:20.114921] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576
00:19:27.649  [2024-11-19 17:03:20.374003] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576
00:19:27.649  [2024-11-19 17:03:20.374323] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576
00:19:27.649  [2024-11-19 17:03:20.378036] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:19:27.649  [2024-11-19 17:03:20.479055] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576
00:19:27.649  [2024-11-19 17:03:20.494810] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device
00:19:27.908  [2024-11-19 17:03:20.512573] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:19:27.908  [2024-11-19 17:03:20.542468] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d0000021f0
00:19:27.908   17:03:20	-- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:19:27.908   17:03:20	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:19:27.908   17:03:20	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:19:27.908   17:03:20	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:19:27.908   17:03:20	-- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:19:27.908   17:03:20	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1
00:19:27.908   17:03:20	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:19:27.908   17:03:20	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:19:27.908   17:03:20	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:19:27.908   17:03:20	-- bdev/bdev_raid.sh@125 -- # local tmp
00:19:27.908    17:03:20	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:19:27.908    17:03:20	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:19:28.166   17:03:20	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:19:28.166    "name": "raid_bdev1",
00:19:28.166    "uuid": "a736bcfa-eeee-4675-b443-212fc5fc2ae5",
00:19:28.166    "strip_size_kb": 0,
00:19:28.166    "state": "online",
00:19:28.166    "raid_level": "raid1",
00:19:28.166    "superblock": false,
00:19:28.166    "num_base_bdevs": 2,
00:19:28.166    "num_base_bdevs_discovered": 1,
00:19:28.166    "num_base_bdevs_operational": 1,
00:19:28.166    "base_bdevs_list": [
00:19:28.166      {
00:19:28.166        "name": null,
00:19:28.166        "uuid": "00000000-0000-0000-0000-000000000000",
00:19:28.166        "is_configured": false,
00:19:28.166        "data_offset": 0,
00:19:28.166        "data_size": 65536
00:19:28.166      },
00:19:28.166      {
00:19:28.166        "name": "BaseBdev2",
00:19:28.166        "uuid": "1affe978-1803-4310-906e-6280a8c3ec27",
00:19:28.166        "is_configured": true,
00:19:28.166        "data_offset": 0,
00:19:28.166        "data_size": 65536
00:19:28.166      }
00:19:28.166    ]
00:19:28.166  }'
00:19:28.166   17:03:20	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:19:28.166   17:03:20	-- common/autotest_common.sh@10 -- # set +x
00:19:28.733   17:03:21	-- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none
00:19:28.733   17:03:21	-- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:19:28.733   17:03:21	-- bdev/bdev_raid.sh@184 -- # local process_type=none
00:19:28.733   17:03:21	-- bdev/bdev_raid.sh@185 -- # local target=none
00:19:28.733   17:03:21	-- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:19:28.733    17:03:21	-- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:19:28.733    17:03:21	-- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:19:28.993   17:03:21	-- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:19:28.993    "name": "raid_bdev1",
00:19:28.993    "uuid": "a736bcfa-eeee-4675-b443-212fc5fc2ae5",
00:19:28.993    "strip_size_kb": 0,
00:19:28.993    "state": "online",
00:19:28.993    "raid_level": "raid1",
00:19:28.993    "superblock": false,
00:19:28.993    "num_base_bdevs": 2,
00:19:28.993    "num_base_bdevs_discovered": 1,
00:19:28.993    "num_base_bdevs_operational": 1,
00:19:28.993    "base_bdevs_list": [
00:19:28.993      {
00:19:28.993        "name": null,
00:19:28.993        "uuid": "00000000-0000-0000-0000-000000000000",
00:19:28.993        "is_configured": false,
00:19:28.993        "data_offset": 0,
00:19:28.993        "data_size": 65536
00:19:28.993      },
00:19:28.993      {
00:19:28.993        "name": "BaseBdev2",
00:19:28.993        "uuid": "1affe978-1803-4310-906e-6280a8c3ec27",
00:19:28.993        "is_configured": true,
00:19:28.993        "data_offset": 0,
00:19:28.993        "data_size": 65536
00:19:28.993      }
00:19:28.993    ]
00:19:28.993  }'
00:19:28.993    17:03:21	-- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:19:28.993   17:03:21	-- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]]
00:19:28.993    17:03:21	-- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:19:29.252   17:03:21	-- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]]
00:19:29.252   17:03:21	-- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare
00:19:29.510  [2024-11-19 17:03:22.157950] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare
00:19:29.511  [2024-11-19 17:03:22.158032] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:19:29.511   17:03:22	-- bdev/bdev_raid.sh@614 -- # sleep 1
00:19:29.511  [2024-11-19 17:03:22.207773] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460
00:19:29.511  [2024-11-19 17:03:22.210406] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:19:29.511  [2024-11-19 17:03:22.314394] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144
00:19:29.511  [2024-11-19 17:03:22.314998] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144
00:19:29.770  [2024-11-19 17:03:22.525714] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144
00:19:29.770  [2024-11-19 17:03:22.526034] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144
00:19:30.336  [2024-11-19 17:03:22.987994] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288
00:19:30.595   17:03:23	-- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:19:30.595   17:03:23	-- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:19:30.595   17:03:23	-- bdev/bdev_raid.sh@184 -- # local process_type=rebuild
00:19:30.595   17:03:23	-- bdev/bdev_raid.sh@185 -- # local target=spare
00:19:30.595   17:03:23	-- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:19:30.595    17:03:23	-- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:19:30.595    17:03:23	-- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:19:30.595  [2024-11-19 17:03:23.352259] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432
00:19:30.853  [2024-11-19 17:03:23.565298] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432
00:19:30.853   17:03:23	-- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:19:30.853    "name": "raid_bdev1",
00:19:30.853    "uuid": "a736bcfa-eeee-4675-b443-212fc5fc2ae5",
00:19:30.853    "strip_size_kb": 0,
00:19:30.853    "state": "online",
00:19:30.853    "raid_level": "raid1",
00:19:30.853    "superblock": false,
00:19:30.853    "num_base_bdevs": 2,
00:19:30.853    "num_base_bdevs_discovered": 2,
00:19:30.853    "num_base_bdevs_operational": 2,
00:19:30.853    "process": {
00:19:30.853      "type": "rebuild",
00:19:30.853      "target": "spare",
00:19:30.853      "progress": {
00:19:30.853        "blocks": 14336,
00:19:30.853        "percent": 21
00:19:30.853      }
00:19:30.853    },
00:19:30.853    "base_bdevs_list": [
00:19:30.853      {
00:19:30.853        "name": "spare",
00:19:30.853        "uuid": "92ab4566-9ecd-5dd7-aee2-714e9d42dd32",
00:19:30.853        "is_configured": true,
00:19:30.853        "data_offset": 0,
00:19:30.853        "data_size": 65536
00:19:30.853      },
00:19:30.853      {
00:19:30.853        "name": "BaseBdev2",
00:19:30.853        "uuid": "1affe978-1803-4310-906e-6280a8c3ec27",
00:19:30.853        "is_configured": true,
00:19:30.854        "data_offset": 0,
00:19:30.854        "data_size": 65536
00:19:30.854      }
00:19:30.854    ]
00:19:30.854  }'
00:19:30.854    17:03:23	-- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:19:30.854   17:03:23	-- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:19:30.854    17:03:23	-- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:19:30.854   17:03:23	-- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]]
00:19:30.854   17:03:23	-- bdev/bdev_raid.sh@617 -- # '[' false = true ']'
00:19:30.854   17:03:23	-- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=2
00:19:30.854   17:03:23	-- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']'
00:19:30.854   17:03:23	-- bdev/bdev_raid.sh@644 -- # '[' 2 -gt 2 ']'
00:19:30.854   17:03:23	-- bdev/bdev_raid.sh@657 -- # local timeout=404
00:19:30.854   17:03:23	-- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout ))
00:19:30.854   17:03:23	-- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:19:30.854   17:03:23	-- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:19:30.854   17:03:23	-- bdev/bdev_raid.sh@184 -- # local process_type=rebuild
00:19:30.854   17:03:23	-- bdev/bdev_raid.sh@185 -- # local target=spare
00:19:30.854   17:03:23	-- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:19:30.854    17:03:23	-- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:19:30.854    17:03:23	-- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:19:31.112  [2024-11-19 17:03:23.794636] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576
00:19:31.112  [2024-11-19 17:03:23.904127] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576
00:19:31.112  [2024-11-19 17:03:23.904469] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576
00:19:31.112   17:03:23	-- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:19:31.112    "name": "raid_bdev1",
00:19:31.112    "uuid": "a736bcfa-eeee-4675-b443-212fc5fc2ae5",
00:19:31.112    "strip_size_kb": 0,
00:19:31.112    "state": "online",
00:19:31.112    "raid_level": "raid1",
00:19:31.112    "superblock": false,
00:19:31.112    "num_base_bdevs": 2,
00:19:31.112    "num_base_bdevs_discovered": 2,
00:19:31.112    "num_base_bdevs_operational": 2,
00:19:31.112    "process": {
00:19:31.112      "type": "rebuild",
00:19:31.112      "target": "spare",
00:19:31.112      "progress": {
00:19:31.112        "blocks": 22528,
00:19:31.112        "percent": 34
00:19:31.112      }
00:19:31.112    },
00:19:31.112    "base_bdevs_list": [
00:19:31.112      {
00:19:31.112        "name": "spare",
00:19:31.112        "uuid": "92ab4566-9ecd-5dd7-aee2-714e9d42dd32",
00:19:31.112        "is_configured": true,
00:19:31.112        "data_offset": 0,
00:19:31.112        "data_size": 65536
00:19:31.112      },
00:19:31.112      {
00:19:31.112        "name": "BaseBdev2",
00:19:31.112        "uuid": "1affe978-1803-4310-906e-6280a8c3ec27",
00:19:31.112        "is_configured": true,
00:19:31.112        "data_offset": 0,
00:19:31.112        "data_size": 65536
00:19:31.112      }
00:19:31.112    ]
00:19:31.112  }'
00:19:31.370    17:03:23	-- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:19:31.370   17:03:23	-- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:19:31.370    17:03:23	-- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:19:31.370   17:03:24	-- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]]
00:19:31.370   17:03:24	-- bdev/bdev_raid.sh@662 -- # sleep 1
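This is the harness's progress-polling loop: with timeout=404 it re-reads the rebuild's progress block roughly once per second (SECONDS is bash's built-in elapsed-time counter) until the process disappears or the budget runs out. A minimal sketch of the same loop, under the assumptions already stated:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    timeout=404
    while (( SECONDS < timeout )); do
        info=$("$rpc" -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all |
               jq -r '.[] | select(.name == "raid_bdev1")')
        [[ $(jq -r '.process.type // "none"' <<<"$info") == rebuild ]] || break
        echo "rebuilt $(jq -r '.process.progress.blocks' <<<"$info") of 65536 blocks"
        sleep 1
    done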
00:19:31.628  [2024-11-19 17:03:24.370523] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720
00:19:31.887  [2024-11-19 17:03:24.683638] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864
00:19:32.494   17:03:25	-- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout ))
00:19:32.494   17:03:25	-- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:19:32.494   17:03:25	-- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:19:32.494   17:03:25	-- bdev/bdev_raid.sh@184 -- # local process_type=rebuild
00:19:32.494   17:03:25	-- bdev/bdev_raid.sh@185 -- # local target=spare
00:19:32.494   17:03:25	-- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:19:32.494    17:03:25	-- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:19:32.494    17:03:25	-- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:19:32.494   17:03:25	-- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:19:32.494    "name": "raid_bdev1",
00:19:32.494    "uuid": "a736bcfa-eeee-4675-b443-212fc5fc2ae5",
00:19:32.494    "strip_size_kb": 0,
00:19:32.494    "state": "online",
00:19:32.494    "raid_level": "raid1",
00:19:32.494    "superblock": false,
00:19:32.494    "num_base_bdevs": 2,
00:19:32.494    "num_base_bdevs_discovered": 2,
00:19:32.494    "num_base_bdevs_operational": 2,
00:19:32.494    "process": {
00:19:32.494      "type": "rebuild",
00:19:32.494      "target": "spare",
00:19:32.494      "progress": {
00:19:32.494        "blocks": 45056,
00:19:32.494        "percent": 68
00:19:32.494      }
00:19:32.494    },
00:19:32.494    "base_bdevs_list": [
00:19:32.494      {
00:19:32.494        "name": "spare",
00:19:32.494        "uuid": "92ab4566-9ecd-5dd7-aee2-714e9d42dd32",
00:19:32.494        "is_configured": true,
00:19:32.494        "data_offset": 0,
00:19:32.494        "data_size": 65536
00:19:32.494      },
00:19:32.494      {
00:19:32.494        "name": "BaseBdev2",
00:19:32.494        "uuid": "1affe978-1803-4310-906e-6280a8c3ec27",
00:19:32.494        "is_configured": true,
00:19:32.494        "data_offset": 0,
00:19:32.494        "data_size": 65536
00:19:32.494      }
00:19:32.494    ]
00:19:32.494  }'
00:19:32.494    17:03:25	-- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:19:32.752   17:03:25	-- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:19:32.752    17:03:25	-- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:19:32.752  [2024-11-19 17:03:25.394550] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152
00:19:32.752   17:03:25	-- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]]
00:19:32.752   17:03:25	-- bdev/bdev_raid.sh@662 -- # sleep 1
00:19:33.319  [2024-11-19 17:03:26.078127] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440
00:19:33.577  [2024-11-19 17:03:26.298163] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440
00:19:33.836   17:03:26	-- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout ))
00:19:33.836   17:03:26	-- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:19:33.836   17:03:26	-- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:19:33.836   17:03:26	-- bdev/bdev_raid.sh@184 -- # local process_type=rebuild
00:19:33.836   17:03:26	-- bdev/bdev_raid.sh@185 -- # local target=spare
00:19:33.836   17:03:26	-- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:19:33.836    17:03:26	-- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:19:33.836    17:03:26	-- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:19:33.836  [2024-11-19 17:03:26.637114] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1
00:19:34.094   17:03:26	-- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:19:34.094    "name": "raid_bdev1",
00:19:34.094    "uuid": "a736bcfa-eeee-4675-b443-212fc5fc2ae5",
00:19:34.094    "strip_size_kb": 0,
00:19:34.094    "state": "online",
00:19:34.094    "raid_level": "raid1",
00:19:34.094    "superblock": false,
00:19:34.094    "num_base_bdevs": 2,
00:19:34.094    "num_base_bdevs_discovered": 2,
00:19:34.094    "num_base_bdevs_operational": 2,
00:19:34.094    "process": {
00:19:34.094      "type": "rebuild",
00:19:34.094      "target": "spare",
00:19:34.094      "progress": {
00:19:34.094        "blocks": 65536,
00:19:34.094        "percent": 100
00:19:34.094      }
00:19:34.094    },
00:19:34.094    "base_bdevs_list": [
00:19:34.094      {
00:19:34.094        "name": "spare",
00:19:34.094        "uuid": "92ab4566-9ecd-5dd7-aee2-714e9d42dd32",
00:19:34.094        "is_configured": true,
00:19:34.094        "data_offset": 0,
00:19:34.094        "data_size": 65536
00:19:34.094      },
00:19:34.094      {
00:19:34.094        "name": "BaseBdev2",
00:19:34.094        "uuid": "1affe978-1803-4310-906e-6280a8c3ec27",
00:19:34.094        "is_configured": true,
00:19:34.094        "data_offset": 0,
00:19:34.094        "data_size": 65536
00:19:34.094      }
00:19:34.094    ]
00:19:34.094  }'
00:19:34.094    17:03:26	-- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:19:34.094  [2024-11-19 17:03:26.744283] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1
00:19:34.094  [2024-11-19 17:03:26.747482] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:19:34.094   17:03:26	-- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:19:34.094    17:03:26	-- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:19:34.094   17:03:26	-- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]]
00:19:34.094   17:03:26	-- bdev/bdev_raid.sh@662 -- # sleep 1
00:19:35.029   17:03:27	-- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout ))
00:19:35.029   17:03:27	-- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:19:35.029   17:03:27	-- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:19:35.029   17:03:27	-- bdev/bdev_raid.sh@184 -- # local process_type=rebuild
00:19:35.029   17:03:27	-- bdev/bdev_raid.sh@185 -- # local target=spare
00:19:35.029   17:03:27	-- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:19:35.029    17:03:27	-- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:19:35.029    17:03:27	-- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:19:35.286   17:03:28	-- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:19:35.286    "name": "raid_bdev1",
00:19:35.286    "uuid": "a736bcfa-eeee-4675-b443-212fc5fc2ae5",
00:19:35.286    "strip_size_kb": 0,
00:19:35.286    "state": "online",
00:19:35.286    "raid_level": "raid1",
00:19:35.286    "superblock": false,
00:19:35.286    "num_base_bdevs": 2,
00:19:35.286    "num_base_bdevs_discovered": 2,
00:19:35.286    "num_base_bdevs_operational": 2,
00:19:35.286    "base_bdevs_list": [
00:19:35.286      {
00:19:35.286        "name": "spare",
00:19:35.286        "uuid": "92ab4566-9ecd-5dd7-aee2-714e9d42dd32",
00:19:35.286        "is_configured": true,
00:19:35.286        "data_offset": 0,
00:19:35.286        "data_size": 65536
00:19:35.286      },
00:19:35.286      {
00:19:35.286        "name": "BaseBdev2",
00:19:35.286        "uuid": "1affe978-1803-4310-906e-6280a8c3ec27",
00:19:35.286        "is_configured": true,
00:19:35.286        "data_offset": 0,
00:19:35.286        "data_size": 65536
00:19:35.286      }
00:19:35.286    ]
00:19:35.286  }'
00:19:35.286    17:03:28	-- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:19:35.286   17:03:28	-- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]]
00:19:35.286    17:03:28	-- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:19:35.545   17:03:28	-- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]]
00:19:35.545   17:03:28	-- bdev/bdev_raid.sh@660 -- # break
00:19:35.545   17:03:28	-- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none
00:19:35.545   17:03:28	-- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:19:35.545   17:03:28	-- bdev/bdev_raid.sh@184 -- # local process_type=none
00:19:35.545   17:03:28	-- bdev/bdev_raid.sh@185 -- # local target=none
00:19:35.545   17:03:28	-- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:19:35.545    17:03:28	-- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:19:35.545    17:03:28	-- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:19:35.803   17:03:28	-- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:19:35.803    "name": "raid_bdev1",
00:19:35.803    "uuid": "a736bcfa-eeee-4675-b443-212fc5fc2ae5",
00:19:35.803    "strip_size_kb": 0,
00:19:35.803    "state": "online",
00:19:35.803    "raid_level": "raid1",
00:19:35.803    "superblock": false,
00:19:35.803    "num_base_bdevs": 2,
00:19:35.803    "num_base_bdevs_discovered": 2,
00:19:35.803    "num_base_bdevs_operational": 2,
00:19:35.803    "base_bdevs_list": [
00:19:35.803      {
00:19:35.803        "name": "spare",
00:19:35.803        "uuid": "92ab4566-9ecd-5dd7-aee2-714e9d42dd32",
00:19:35.803        "is_configured": true,
00:19:35.803        "data_offset": 0,
00:19:35.803        "data_size": 65536
00:19:35.803      },
00:19:35.803      {
00:19:35.803        "name": "BaseBdev2",
00:19:35.803        "uuid": "1affe978-1803-4310-906e-6280a8c3ec27",
00:19:35.803        "is_configured": true,
00:19:35.803        "data_offset": 0,
00:19:35.803        "data_size": 65536
00:19:35.803      }
00:19:35.803    ]
00:19:35.803  }'
00:19:35.803    17:03:28	-- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:19:35.803   17:03:28	-- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]]
00:19:35.803    17:03:28	-- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:19:35.803   17:03:28	-- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]]
00:19:35.803   17:03:28	-- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:19:35.803   17:03:28	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:19:35.803   17:03:28	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:19:35.803   17:03:28	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:19:35.803   17:03:28	-- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:19:35.803   17:03:28	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2
00:19:35.803   17:03:28	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:19:35.803   17:03:28	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:19:35.803   17:03:28	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:19:35.803   17:03:28	-- bdev/bdev_raid.sh@125 -- # local tmp
00:19:35.803    17:03:28	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:19:35.803    17:03:28	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:19:36.370   17:03:28	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:19:36.370    "name": "raid_bdev1",
00:19:36.370    "uuid": "a736bcfa-eeee-4675-b443-212fc5fc2ae5",
00:19:36.370    "strip_size_kb": 0,
00:19:36.370    "state": "online",
00:19:36.370    "raid_level": "raid1",
00:19:36.370    "superblock": false,
00:19:36.370    "num_base_bdevs": 2,
00:19:36.370    "num_base_bdevs_discovered": 2,
00:19:36.370    "num_base_bdevs_operational": 2,
00:19:36.370    "base_bdevs_list": [
00:19:36.370      {
00:19:36.370        "name": "spare",
00:19:36.370        "uuid": "92ab4566-9ecd-5dd7-aee2-714e9d42dd32",
00:19:36.370        "is_configured": true,
00:19:36.370        "data_offset": 0,
00:19:36.370        "data_size": 65536
00:19:36.370      },
00:19:36.370      {
00:19:36.370        "name": "BaseBdev2",
00:19:36.370        "uuid": "1affe978-1803-4310-906e-6280a8c3ec27",
00:19:36.370        "is_configured": true,
00:19:36.370        "data_offset": 0,
00:19:36.370        "data_size": 65536
00:19:36.370      }
00:19:36.370    ]
00:19:36.370  }'
00:19:36.370   17:03:28	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:19:36.370   17:03:28	-- common/autotest_common.sh@10 -- # set +x
00:19:36.935   17:03:29	-- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1
00:19:37.193  [2024-11-19 17:03:29.824001] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:19:37.193  [2024-11-19 17:03:29.824058] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:19:37.193  
00:19:37.193                                                                                                  Latency(us)
00:19:37.193  
[2024-11-19T17:03:30.057Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:19:37.193  
[2024-11-19T17:03:30.057Z]  Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728)
00:19:37.193  	 raid_bdev1          :      12.79      99.99     299.97       0.00     0.00   14064.81     534.43  120835.90
00:19:37.193  
[2024-11-19T17:03:30.057Z]  ===================================================================================================================
00:19:37.193  
[2024-11-19T17:03:30.057Z]  Total                       :                 99.99     299.97       0.00     0.00   14064.81     534.43  120835.90
00:19:37.193  0
00:19:37.193  [2024-11-19 17:03:29.928720] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:19:37.193  [2024-11-19 17:03:29.928791] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:19:37.193  [2024-11-19 17:03:29.928908] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:19:37.193  [2024-11-19 17:03:29.928921] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name raid_bdev1, state offline
00:19:37.193    17:03:29	-- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:19:37.193    17:03:29	-- bdev/bdev_raid.sh@671 -- # jq length
00:19:37.450   17:03:30	-- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]]
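With the run complete, the array is deleted and the check above asserts that bdev_raid_get_bdevs now returns an empty list (jq length of 0). The same teardown in two lines, assuming the socket used throughout this log:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$rpc" -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1
    [[ $("$rpc" -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all | jq length) -eq 0 ]]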
00:19:37.450   17:03:30	-- bdev/bdev_raid.sh@673 -- # '[' true = true ']'
00:19:37.450   17:03:30	-- bdev/bdev_raid.sh@675 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0
00:19:37.450   17:03:30	-- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock
00:19:37.450   17:03:30	-- bdev/nbd_common.sh@10 -- # bdev_list=('spare')
00:19:37.450   17:03:30	-- bdev/nbd_common.sh@10 -- # local bdev_list
00:19:37.450   17:03:30	-- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0')
00:19:37.450   17:03:30	-- bdev/nbd_common.sh@11 -- # local nbd_list
00:19:37.450   17:03:30	-- bdev/nbd_common.sh@12 -- # local i
00:19:37.450   17:03:30	-- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:19:37.450   17:03:30	-- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:19:37.450   17:03:30	-- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0
00:19:37.750  /dev/nbd0
00:19:37.750    17:03:30	-- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:19:37.750   17:03:30	-- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:19:37.750   17:03:30	-- common/autotest_common.sh@866 -- # local nbd_name=nbd0
00:19:37.750   17:03:30	-- common/autotest_common.sh@867 -- # local i
00:19:37.750   17:03:30	-- common/autotest_common.sh@869 -- # (( i = 1 ))
00:19:37.750   17:03:30	-- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:19:37.750   17:03:30	-- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions
00:19:37.750   17:03:30	-- common/autotest_common.sh@871 -- # break
00:19:37.750   17:03:30	-- common/autotest_common.sh@882 -- # (( i = 1 ))
00:19:37.750   17:03:30	-- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:19:37.750   17:03:30	-- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:19:37.750  1+0 records in
00:19:37.750  1+0 records out
00:19:37.750  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000433777 s, 9.4 MB/s
00:19:37.750    17:03:30	-- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:19:37.750   17:03:30	-- common/autotest_common.sh@884 -- # size=4096
00:19:37.750   17:03:30	-- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:19:37.750   17:03:30	-- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:19:37.750   17:03:30	-- common/autotest_common.sh@887 -- # return 0
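The block above is autotest_common.sh's waitfornbd: it waits (up to 20 tries) for the device to appear in /proc/partitions, then proves the device actually serves data with a single 4 KiB O_DIRECT read whose output must be non-empty. A condensed, hypothetical rewrite of that helper (the /tmp/nbdtest path is chosen for the sketch, not taken from the source):

    waitfornbd() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1
        done
        # One direct read proves the NBD device is actually up and serving data.
        dd if=/dev/$nbd_name of=/tmp/nbdtest bs=4096 count=1 iflag=direct || return 1
        [[ $(stat -c %s /tmp/nbdtest) -ne 0 ]]
    }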
00:19:37.750   17:03:30	-- bdev/nbd_common.sh@14 -- # (( i++ ))
00:19:37.750   17:03:30	-- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:19:37.750   17:03:30	-- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}"
00:19:37.750   17:03:30	-- bdev/bdev_raid.sh@677 -- # '[' -z BaseBdev2 ']'
00:19:37.750   17:03:30	-- bdev/bdev_raid.sh@680 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev2 /dev/nbd1
00:19:37.750   17:03:30	-- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock
00:19:37.750   17:03:30	-- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2')
00:19:37.750   17:03:30	-- bdev/nbd_common.sh@10 -- # local bdev_list
00:19:37.750   17:03:30	-- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1')
00:19:37.750   17:03:30	-- bdev/nbd_common.sh@11 -- # local nbd_list
00:19:37.750   17:03:30	-- bdev/nbd_common.sh@12 -- # local i
00:19:37.750   17:03:30	-- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:19:37.750   17:03:30	-- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:19:37.750   17:03:30	-- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev2 /dev/nbd1
00:19:38.317  /dev/nbd1
00:19:38.317    17:03:30	-- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:19:38.317   17:03:30	-- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:19:38.317   17:03:30	-- common/autotest_common.sh@866 -- # local nbd_name=nbd1
00:19:38.317   17:03:30	-- common/autotest_common.sh@867 -- # local i
00:19:38.317   17:03:30	-- common/autotest_common.sh@869 -- # (( i = 1 ))
00:19:38.317   17:03:30	-- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:19:38.317   17:03:30	-- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions
00:19:38.317   17:03:30	-- common/autotest_common.sh@871 -- # break
00:19:38.317   17:03:30	-- common/autotest_common.sh@882 -- # (( i = 1 ))
00:19:38.317   17:03:30	-- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:19:38.317   17:03:30	-- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:19:38.317  1+0 records in
00:19:38.317  1+0 records out
00:19:38.317  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000372703 s, 11.0 MB/s
00:19:38.317    17:03:30	-- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:19:38.317   17:03:30	-- common/autotest_common.sh@884 -- # size=4096
00:19:38.317   17:03:30	-- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:19:38.317   17:03:30	-- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:19:38.317   17:03:30	-- common/autotest_common.sh@887 -- # return 0
00:19:38.317   17:03:30	-- bdev/nbd_common.sh@14 -- # (( i++ ))
00:19:38.317   17:03:30	-- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:19:38.317   17:03:30	-- bdev/bdev_raid.sh@681 -- # cmp -i 0 /dev/nbd0 /dev/nbd1
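The cmp above is the actual data-integrity check of this test: after the rebuild, both legs of the RAID1 mirror are exported as NBD devices and compared byte for byte. '-i 0' tells cmp to skip zero initial bytes, i.e. compare from byte 0, which is correct here because this variant runs without a superblock; any divergence between /dev/nbd0 and /dev/nbd1 makes cmp exit non-zero and fails the test.

    # Hypothetical superblock-aware form: skip the metadata region first, e.g.
    # cmp -i $((2048 * 512)) /dev/nbd0 /dev/nbd1   # data_offset in blocks x 512-byte blocks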
00:19:38.317   17:03:31	-- bdev/bdev_raid.sh@682 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1
00:19:38.317   17:03:31	-- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock
00:19:38.317   17:03:31	-- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1')
00:19:38.317   17:03:31	-- bdev/nbd_common.sh@50 -- # local nbd_list
00:19:38.317   17:03:31	-- bdev/nbd_common.sh@51 -- # local i
00:19:38.317   17:03:31	-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:19:38.317   17:03:31	-- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1
00:19:38.575    17:03:31	-- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:19:38.575   17:03:31	-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:19:38.575   17:03:31	-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:19:38.575   17:03:31	-- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:19:38.575   17:03:31	-- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:19:38.575   17:03:31	-- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:19:38.575   17:03:31	-- bdev/nbd_common.sh@41 -- # break
00:19:38.575   17:03:31	-- bdev/nbd_common.sh@45 -- # return 0
00:19:38.575   17:03:31	-- bdev/bdev_raid.sh@684 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0
00:19:38.575   17:03:31	-- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock
00:19:38.575   17:03:31	-- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:19:38.575   17:03:31	-- bdev/nbd_common.sh@50 -- # local nbd_list
00:19:38.575   17:03:31	-- bdev/nbd_common.sh@51 -- # local i
00:19:38.575   17:03:31	-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:19:38.575   17:03:31	-- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0
00:19:39.141    17:03:31	-- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:19:39.141   17:03:31	-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:19:39.141   17:03:31	-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:19:39.141   17:03:31	-- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:19:39.141   17:03:31	-- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:19:39.141   17:03:31	-- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:19:39.141   17:03:31	-- bdev/nbd_common.sh@41 -- # break
00:19:39.141   17:03:31	-- bdev/nbd_common.sh@45 -- # return 0
00:19:39.141   17:03:31	-- bdev/bdev_raid.sh@692 -- # '[' false = true ']'
00:19:39.141   17:03:31	-- bdev/bdev_raid.sh@709 -- # killprocess 133836
00:19:39.141   17:03:31	-- common/autotest_common.sh@936 -- # '[' -z 133836 ']'
00:19:39.141   17:03:31	-- common/autotest_common.sh@940 -- # kill -0 133836
00:19:39.141    17:03:31	-- common/autotest_common.sh@941 -- # uname
00:19:39.141   17:03:31	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:19:39.141    17:03:31	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 133836
00:19:39.141   17:03:31	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:19:39.141   17:03:31	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:19:39.141   17:03:31	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 133836'
00:19:39.141  killing process with pid 133836
00:19:39.141   17:03:31	-- common/autotest_common.sh@955 -- # kill 133836
00:19:39.141  Received shutdown signal, test time was about 14.615432 seconds
00:19:39.141  
00:19:39.141                                                                                                  Latency(us)
00:19:39.141  
[2024-11-19T17:03:32.005Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:19:39.141  
[2024-11-19T17:03:32.005Z]  ===================================================================================================================
00:19:39.141  
[2024-11-19T17:03:32.005Z]  Total                       :                  0.00       0.00       0.00     0.00       0.00       0.00       0.00
00:19:39.141  [2024-11-19 17:03:31.748196] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:19:39.141   17:03:31	-- common/autotest_common.sh@960 -- # wait 133836
00:19:39.141  [2024-11-19 17:03:31.775264] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit
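killprocess (again from autotest_common.sh, per the xtrace markers @936-@960) refuses to act on a missing or sudo-owned pid, sends a plain SIGTERM, and then waits so the bdevperf shutdown statistics above are flushed before the next test starts. A condensed sketch of what the trace shows, not the verbatim source:

    killprocess() {
        local pid=$1
        [[ -n $pid ]] || return 1
        kill -0 "$pid" || return 1                        # process must still exist
        [[ $(ps --no-headers -o comm= "$pid") != sudo ]] || return 1
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                       # reap and propagate exit status
    }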
00:19:39.405   17:03:32	-- bdev/bdev_raid.sh@711 -- # return 0
00:19:39.405  
00:19:39.405  real	0m19.647s
00:19:39.405  user	0m30.915s
00:19:39.405  sys	0m2.607s
00:19:39.405   17:03:32	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:19:39.405   17:03:32	-- common/autotest_common.sh@10 -- # set +x
00:19:39.405  ************************************
00:19:39.405  END TEST raid_rebuild_test_io
00:19:39.405  ************************************
00:19:39.405   17:03:32	-- bdev/bdev_raid.sh@738 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 2 true true
00:19:39.405   17:03:32	-- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']'
00:19:39.405   17:03:32	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:19:39.405   17:03:32	-- common/autotest_common.sh@10 -- # set +x
00:19:39.405  ************************************
00:19:39.405  START TEST raid_rebuild_test_sb_io
00:19:39.405  ************************************
00:19:39.405   17:03:32	-- common/autotest_common.sh@1114 -- # raid_rebuild_test raid1 2 true true
00:19:39.405   17:03:32	-- bdev/bdev_raid.sh@517 -- # local raid_level=raid1
00:19:39.405   17:03:32	-- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=2
00:19:39.405   17:03:32	-- bdev/bdev_raid.sh@519 -- # local superblock=true
00:19:39.405   17:03:32	-- bdev/bdev_raid.sh@520 -- # local background_io=true
00:19:39.405    17:03:32	-- bdev/bdev_raid.sh@521 -- # (( i = 1 ))
00:19:39.405    17:03:32	-- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs ))
00:19:39.405    17:03:32	-- bdev/bdev_raid.sh@521 -- # echo BaseBdev1
00:19:39.405    17:03:32	-- bdev/bdev_raid.sh@521 -- # (( i++ ))
00:19:39.405    17:03:32	-- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs ))
00:19:39.405    17:03:32	-- bdev/bdev_raid.sh@521 -- # echo BaseBdev2
00:19:39.405    17:03:32	-- bdev/bdev_raid.sh@521 -- # (( i++ ))
00:19:39.405    17:03:32	-- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs ))
00:19:39.405   17:03:32	-- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2')
00:19:39.405   17:03:32	-- bdev/bdev_raid.sh@521 -- # local base_bdevs
00:19:39.405   17:03:32	-- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1
00:19:39.405   17:03:32	-- bdev/bdev_raid.sh@523 -- # local strip_size
00:19:39.405   17:03:32	-- bdev/bdev_raid.sh@524 -- # local create_arg
00:19:39.405   17:03:32	-- bdev/bdev_raid.sh@525 -- # local raid_bdev_size
00:19:39.405   17:03:32	-- bdev/bdev_raid.sh@526 -- # local data_offset
00:19:39.405   17:03:32	-- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']'
00:19:39.406   17:03:32	-- bdev/bdev_raid.sh@536 -- # strip_size=0
00:19:39.406   17:03:32	-- bdev/bdev_raid.sh@539 -- # '[' true = true ']'
00:19:39.406   17:03:32	-- bdev/bdev_raid.sh@540 -- # create_arg+=' -s'
00:19:39.406   17:03:32	-- bdev/bdev_raid.sh@544 -- # raid_pid=134339
00:19:39.406   17:03:32	-- bdev/bdev_raid.sh@545 -- # waitforlisten 134339 /var/tmp/spdk-raid.sock
00:19:39.406   17:03:32	-- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid
00:19:39.406   17:03:32	-- common/autotest_common.sh@829 -- # '[' -z 134339 ']'
00:19:39.406   17:03:32	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock
00:19:39.406   17:03:32	-- common/autotest_common.sh@834 -- # local max_retries=100
00:19:39.406   17:03:32	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...'
00:19:39.406  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...
00:19:39.406   17:03:32	-- common/autotest_common.sh@838 -- # xtrace_disable
00:19:39.406   17:03:32	-- common/autotest_common.sh@10 -- # set +x
00:19:39.406  [2024-11-19 17:03:32.189364] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:19:39.406  I/O size of 3145728 is greater than zero copy threshold (65536).
00:19:39.406  Zero copy mechanism will not be used.
00:19:39.406  [2024-11-19 17:03:32.189614] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid134339 ]
00:19:39.676  [2024-11-19 17:03:32.347961] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:19:39.676  [2024-11-19 17:03:32.403011] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:19:39.676  [2024-11-19 17:03:32.451242] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:19:40.244   17:03:33	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:19:40.244   17:03:33	-- common/autotest_common.sh@862 -- # return 0
00:19:40.244   17:03:33	-- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}"
00:19:40.244   17:03:33	-- bdev/bdev_raid.sh@549 -- # '[' true = true ']'
00:19:40.244   17:03:33	-- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc
00:19:40.503  BaseBdev1_malloc
00:19:40.761   17:03:33	-- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1
00:19:40.761  [2024-11-19 17:03:33.568908] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc
00:19:40.761  [2024-11-19 17:03:33.569041] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:19:40.761  [2024-11-19 17:03:33.569082] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000005a80
00:19:40.761  [2024-11-19 17:03:33.569135] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:19:40.761  [2024-11-19 17:03:33.572047] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:19:40.761  [2024-11-19 17:03:33.572143] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:19:40.761  BaseBdev1
00:19:40.761   17:03:33	-- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}"
00:19:40.761   17:03:33	-- bdev/bdev_raid.sh@549 -- # '[' true = true ']'
00:19:40.761   17:03:33	-- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc
00:19:41.020  BaseBdev2_malloc
00:19:41.020   17:03:33	-- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2
00:19:41.279  [2024-11-19 17:03:34.022417] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc
00:19:41.279  [2024-11-19 17:03:34.022529] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:19:41.279  [2024-11-19 17:03:34.022570] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680
00:19:41.279  [2024-11-19 17:03:34.022623] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:19:41.279  [2024-11-19 17:03:34.025227] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:19:41.279  [2024-11-19 17:03:34.025288] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2
00:19:41.279  BaseBdev2
00:19:41.279   17:03:34	-- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc
00:19:41.538  spare_malloc
00:19:41.538   17:03:34	-- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000
00:19:41.797  spare_delay
00:19:41.797   17:03:34	-- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare
00:19:42.056  [2024-11-19 17:03:34.759319] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay
00:19:42.056  [2024-11-19 17:03:34.759420] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:19:42.056  [2024-11-19 17:03:34.759463] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880
00:19:42.056  [2024-11-19 17:03:34.759507] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:19:42.056  [2024-11-19 17:03:34.762219] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:19:42.056  [2024-11-19 17:03:34.762291] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare
00:19:42.056  spare
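The "spare" used as rebuild target is a three-layer stack, built by the three RPCs traced above: a 32 MiB malloc bdev with 512-byte blocks, wrapped in a delay bdev (zero read latency, 100000 us average and p99 write latency, presumably to slow writes enough that rebuild progress stays observable), wrapped in turn in a passthru bdev that gives it the name "spare". Reproduced as a plain script:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$rpc" -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc
    "$rpc" -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay \
           -r 0 -t 0 -w 100000 -n 100000
    "$rpc" -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare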
00:19:42.056   17:03:34	-- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1
00:19:42.314  [2024-11-19 17:03:35.123492] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:19:42.314  [2024-11-19 17:03:35.125903] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:19:42.314  [2024-11-19 17:03:35.126151] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007e80
00:19:42.314  [2024-11-19 17:03:35.126169] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:19:42.314  [2024-11-19 17:03:35.126350] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000022c0
00:19:42.314  [2024-11-19 17:03:35.126812] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007e80
00:19:42.314  [2024-11-19 17:03:35.126832] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000007e80
00:19:42.314  [2024-11-19 17:03:35.127042] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:19:42.314   17:03:35	-- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:19:42.314   17:03:35	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:19:42.314   17:03:35	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:19:42.314   17:03:35	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:19:42.315   17:03:35	-- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:19:42.315   17:03:35	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2
00:19:42.315   17:03:35	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:19:42.315   17:03:35	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:19:42.315   17:03:35	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:19:42.315   17:03:35	-- bdev/bdev_raid.sh@125 -- # local tmp
00:19:42.315    17:03:35	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:19:42.315    17:03:35	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:19:42.574   17:03:35	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:19:42.574    "name": "raid_bdev1",
00:19:42.574    "uuid": "b9fe05e1-8427-4689-8075-ed778bb8d3c6",
00:19:42.574    "strip_size_kb": 0,
00:19:42.574    "state": "online",
00:19:42.574    "raid_level": "raid1",
00:19:42.574    "superblock": true,
00:19:42.574    "num_base_bdevs": 2,
00:19:42.574    "num_base_bdevs_discovered": 2,
00:19:42.574    "num_base_bdevs_operational": 2,
00:19:42.574    "base_bdevs_list": [
00:19:42.574      {
00:19:42.574        "name": "BaseBdev1",
00:19:42.574        "uuid": "8f8b6e92-5908-57af-8708-31558afa980a",
00:19:42.574        "is_configured": true,
00:19:42.574        "data_offset": 2048,
00:19:42.574        "data_size": 63488
00:19:42.574      },
00:19:42.574      {
00:19:42.574        "name": "BaseBdev2",
00:19:42.574        "uuid": "6a48a678-c983-5c25-b631-f62f6936d624",
00:19:42.574        "is_configured": true,
00:19:42.574        "data_offset": 2048,
00:19:42.574        "data_size": 63488
00:19:42.574      }
00:19:42.574    ]
00:19:42.574  }'
00:19:42.574   17:03:35	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:19:42.574   17:03:35	-- common/autotest_common.sh@10 -- # set +x
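verify_raid_bdev_state (@117-@129 above) is the assertion this test repeats after every topology change: fetch the raid bdev's JSON and compare fields against expectations. A minimal sketch of the same check, using only the RPC and jq filters visible in the trace (the real helper also derives the discovered count from base_bdevs_list and validates strip size):

  sock=/var/tmp/spdk-raid.sock
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  tmp=$("$rpc" -s "$sock" bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
  [ "$(jq -r .state <<< "$tmp")" = online ] || exit 1
  [ "$(jq -r .raid_level <<< "$tmp")" = raid1 ] || exit 1
  [ "$(jq -r .num_base_bdevs_discovered <<< "$tmp")" -eq 2 ] || exit 1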
00:19:43.511    17:03:36	-- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1
00:19:43.511    17:03:36	-- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks'
00:19:43.511  [2024-11-19 17:03:36.279783] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:19:43.511   17:03:36	-- bdev/bdev_raid.sh@567 -- # raid_bdev_size=63488
00:19:43.511    17:03:36	-- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:19:43.511    17:03:36	-- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset'
00:19:43.770   17:03:36	-- bdev/bdev_raid.sh@570 -- # data_offset=2048
00:19:43.770   17:03:36	-- bdev/bdev_raid.sh@572 -- # '[' true = true ']'
00:19:43.770   17:03:36	-- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1
00:19:43.770   17:03:36	-- bdev/bdev_raid.sh@574 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests
00:19:43.770  [2024-11-19 17:03:36.609770] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390
00:19:43.770  I/O size of 3145728 is greater than zero copy threshold (65536).
00:19:43.770  Zero copy mechanism will not be used.
00:19:43.770  Running I/O for 60 seconds...
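Both bdevperf notices above are expected with this workload: the job issues 3145728-byte (3 MiB) IOs, far above the 65536-byte zero-copy cutoff, so buffers are copied rather than passed through. The two constants, spelled out:

  echo $((3 * 1024 * 1024))   # 3145728, the IO size shown in the job line later on
  echo $((64 * 1024))         # 65536, the zero copy threshold named above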
00:19:44.029  [2024-11-19 17:03:36.695860] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:19:44.029  [2024-11-19 17:03:36.702801] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000002390
00:19:44.029   17:03:36	-- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:19:44.029   17:03:36	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:19:44.029   17:03:36	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:19:44.029   17:03:36	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:19:44.029   17:03:36	-- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:19:44.029   17:03:36	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1
00:19:44.029   17:03:36	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:19:44.029   17:03:36	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:19:44.029   17:03:36	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:19:44.029   17:03:36	-- bdev/bdev_raid.sh@125 -- # local tmp
00:19:44.029    17:03:36	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:19:44.029    17:03:36	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:19:44.288   17:03:37	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:19:44.288    "name": "raid_bdev1",
00:19:44.288    "uuid": "b9fe05e1-8427-4689-8075-ed778bb8d3c6",
00:19:44.288    "strip_size_kb": 0,
00:19:44.288    "state": "online",
00:19:44.288    "raid_level": "raid1",
00:19:44.288    "superblock": true,
00:19:44.288    "num_base_bdevs": 2,
00:19:44.288    "num_base_bdevs_discovered": 1,
00:19:44.288    "num_base_bdevs_operational": 1,
00:19:44.288    "base_bdevs_list": [
00:19:44.288      {
00:19:44.288        "name": null,
00:19:44.288        "uuid": "00000000-0000-0000-0000-000000000000",
00:19:44.288        "is_configured": false,
00:19:44.288        "data_offset": 2048,
00:19:44.288        "data_size": 63488
00:19:44.288      },
00:19:44.288      {
00:19:44.288        "name": "BaseBdev2",
00:19:44.288        "uuid": "6a48a678-c983-5c25-b631-f62f6936d624",
00:19:44.288        "is_configured": true,
00:19:44.288        "data_offset": 2048,
00:19:44.288        "data_size": 63488
00:19:44.288      }
00:19:44.288    ]
00:19:44.288  }'
00:19:44.288   17:03:37	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:19:44.288   17:03:37	-- common/autotest_common.sh@10 -- # set +x
00:19:44.856   17:03:37	-- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare
00:19:45.115  [2024-11-19 17:03:37.807468] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare
00:19:45.115  [2024-11-19 17:03:37.807542] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:19:45.115  [2024-11-19 17:03:37.845980] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460
00:19:45.115  [2024-11-19 17:03:37.848559] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:19:45.115   17:03:37	-- bdev/bdev_raid.sh@598 -- # sleep 1
00:19:45.115  [2024-11-19 17:03:37.965418] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144
00:19:45.115  [2024-11-19 17:03:37.965889] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144
00:19:45.374  [2024-11-19 17:03:38.191546] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144
00:19:45.374  [2024-11-19 17:03:38.191828] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144
00:19:45.941  [2024-11-19 17:03:38.536281] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288
00:19:45.941  [2024-11-19 17:03:38.536782] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288
00:19:45.941  [2024-11-19 17:03:38.762375] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288
00:19:45.941  [2024-11-19 17:03:38.762716] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288
00:19:46.200   17:03:38	-- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:19:46.200   17:03:38	-- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:19:46.200   17:03:38	-- bdev/bdev_raid.sh@184 -- # local process_type=rebuild
00:19:46.200   17:03:38	-- bdev/bdev_raid.sh@185 -- # local target=spare
00:19:46.200   17:03:38	-- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:19:46.200    17:03:38	-- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:19:46.200    17:03:38	-- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:19:46.458  [2024-11-19 17:03:39.138029] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432
00:19:46.458   17:03:39	-- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:19:46.458    "name": "raid_bdev1",
00:19:46.458    "uuid": "b9fe05e1-8427-4689-8075-ed778bb8d3c6",
00:19:46.458    "strip_size_kb": 0,
00:19:46.458    "state": "online",
00:19:46.458    "raid_level": "raid1",
00:19:46.458    "superblock": true,
00:19:46.458    "num_base_bdevs": 2,
00:19:46.458    "num_base_bdevs_discovered": 2,
00:19:46.458    "num_base_bdevs_operational": 2,
00:19:46.458    "process": {
00:19:46.458      "type": "rebuild",
00:19:46.458      "target": "spare",
00:19:46.458      "progress": {
00:19:46.458        "blocks": 14336,
00:19:46.458        "percent": 22
00:19:46.458      }
00:19:46.458    },
00:19:46.458    "base_bdevs_list": [
00:19:46.458      {
00:19:46.458        "name": "spare",
00:19:46.458        "uuid": "000d886d-63a0-5d6a-8ce4-c716b2cb3f85",
00:19:46.458        "is_configured": true,
00:19:46.458        "data_offset": 2048,
00:19:46.458        "data_size": 63488
00:19:46.458      },
00:19:46.458      {
00:19:46.458        "name": "BaseBdev2",
00:19:46.458        "uuid": "6a48a678-c983-5c25-b631-f62f6936d624",
00:19:46.458        "is_configured": true,
00:19:46.458        "data_offset": 2048,
00:19:46.458        "data_size": 63488
00:19:46.458      }
00:19:46.458    ]
00:19:46.458  }'
00:19:46.458    17:03:39	-- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:19:46.458   17:03:39	-- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:19:46.458    17:03:39	-- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:19:46.458   17:03:39	-- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]]
00:19:46.458   17:03:39	-- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare
00:19:46.716  [2024-11-19 17:03:39.371845] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432
00:19:46.716  [2024-11-19 17:03:39.372214] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432
00:19:46.716  [2024-11-19 17:03:39.498982] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:19:46.973  [2024-11-19 17:03:39.585568] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device
00:19:46.973  [2024-11-19 17:03:39.596228] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:19:46.973  [2024-11-19 17:03:39.628847] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000002390
00:19:46.973   17:03:39	-- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:19:46.973   17:03:39	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:19:46.973   17:03:39	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:19:46.973   17:03:39	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:19:46.973   17:03:39	-- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:19:46.973   17:03:39	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1
00:19:46.973   17:03:39	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:19:46.973   17:03:39	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:19:46.973   17:03:39	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:19:46.973   17:03:39	-- bdev/bdev_raid.sh@125 -- # local tmp
00:19:46.973    17:03:39	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:19:46.973    17:03:39	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:19:47.230   17:03:39	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:19:47.230    "name": "raid_bdev1",
00:19:47.230    "uuid": "b9fe05e1-8427-4689-8075-ed778bb8d3c6",
00:19:47.230    "strip_size_kb": 0,
00:19:47.230    "state": "online",
00:19:47.230    "raid_level": "raid1",
00:19:47.230    "superblock": true,
00:19:47.230    "num_base_bdevs": 2,
00:19:47.230    "num_base_bdevs_discovered": 1,
00:19:47.230    "num_base_bdevs_operational": 1,
00:19:47.230    "base_bdevs_list": [
00:19:47.230      {
00:19:47.230        "name": null,
00:19:47.231        "uuid": "00000000-0000-0000-0000-000000000000",
00:19:47.231        "is_configured": false,
00:19:47.231        "data_offset": 2048,
00:19:47.231        "data_size": 63488
00:19:47.231      },
00:19:47.231      {
00:19:47.231        "name": "BaseBdev2",
00:19:47.231        "uuid": "6a48a678-c983-5c25-b631-f62f6936d624",
00:19:47.231        "is_configured": true,
00:19:47.231        "data_offset": 2048,
00:19:47.231        "data_size": 63488
00:19:47.231      }
00:19:47.231    ]
00:19:47.231  }'
00:19:47.231   17:03:39	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:19:47.231   17:03:39	-- common/autotest_common.sh@10 -- # set +x
00:19:47.797   17:03:40	-- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none
00:19:47.797   17:03:40	-- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:19:47.797   17:03:40	-- bdev/bdev_raid.sh@184 -- # local process_type=none
00:19:47.797   17:03:40	-- bdev/bdev_raid.sh@185 -- # local target=none
00:19:47.797   17:03:40	-- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:19:47.797    17:03:40	-- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:19:47.797    17:03:40	-- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:19:48.055   17:03:40	-- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:19:48.055    "name": "raid_bdev1",
00:19:48.055    "uuid": "b9fe05e1-8427-4689-8075-ed778bb8d3c6",
00:19:48.055    "strip_size_kb": 0,
00:19:48.055    "state": "online",
00:19:48.055    "raid_level": "raid1",
00:19:48.055    "superblock": true,
00:19:48.056    "num_base_bdevs": 2,
00:19:48.056    "num_base_bdevs_discovered": 1,
00:19:48.056    "num_base_bdevs_operational": 1,
00:19:48.056    "base_bdevs_list": [
00:19:48.056      {
00:19:48.056        "name": null,
00:19:48.056        "uuid": "00000000-0000-0000-0000-000000000000",
00:19:48.056        "is_configured": false,
00:19:48.056        "data_offset": 2048,
00:19:48.056        "data_size": 63488
00:19:48.056      },
00:19:48.056      {
00:19:48.056        "name": "BaseBdev2",
00:19:48.056        "uuid": "6a48a678-c983-5c25-b631-f62f6936d624",
00:19:48.056        "is_configured": true,
00:19:48.056        "data_offset": 2048,
00:19:48.056        "data_size": 63488
00:19:48.056      }
00:19:48.056    ]
00:19:48.056  }'
00:19:48.056    17:03:40	-- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:19:48.314   17:03:40	-- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]]
00:19:48.314    17:03:40	-- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:19:48.314   17:03:40	-- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]]
00:19:48.314   17:03:40	-- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare
00:19:48.573  [2024-11-19 17:03:41.257192] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare
00:19:48.573  [2024-11-19 17:03:41.257287] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:19:48.573   17:03:41	-- bdev/bdev_raid.sh@614 -- # sleep 1
00:19:48.573  [2024-11-19 17:03:41.332714] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600
00:19:48.573  [2024-11-19 17:03:41.335680] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:19:48.831  [2024-11-19 17:03:41.448851] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144
00:19:48.831  [2024-11-19 17:03:41.449694] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144
00:19:48.831  [2024-11-19 17:03:41.590441] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144
00:19:49.398  [2024-11-19 17:03:41.950384] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288
00:19:49.398  [2024-11-19 17:03:42.087669] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288
00:19:49.656   17:03:42	-- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:19:49.656   17:03:42	-- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:19:49.656   17:03:42	-- bdev/bdev_raid.sh@184 -- # local process_type=rebuild
00:19:49.656   17:03:42	-- bdev/bdev_raid.sh@185 -- # local target=spare
00:19:49.656   17:03:42	-- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:19:49.656    17:03:42	-- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:19:49.656    17:03:42	-- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:19:49.915  [2024-11-19 17:03:42.514443] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432
00:19:49.915   17:03:42	-- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:19:49.915    "name": "raid_bdev1",
00:19:49.915    "uuid": "b9fe05e1-8427-4689-8075-ed778bb8d3c6",
00:19:49.915    "strip_size_kb": 0,
00:19:49.915    "state": "online",
00:19:49.915    "raid_level": "raid1",
00:19:49.915    "superblock": true,
00:19:49.915    "num_base_bdevs": 2,
00:19:49.915    "num_base_bdevs_discovered": 2,
00:19:49.915    "num_base_bdevs_operational": 2,
00:19:49.915    "process": {
00:19:49.915      "type": "rebuild",
00:19:49.915      "target": "spare",
00:19:49.915      "progress": {
00:19:49.915        "blocks": 16384,
00:19:49.915        "percent": 25
00:19:49.915      }
00:19:49.915    },
00:19:49.915    "base_bdevs_list": [
00:19:49.915      {
00:19:49.915        "name": "spare",
00:19:49.915        "uuid": "000d886d-63a0-5d6a-8ce4-c716b2cb3f85",
00:19:49.915        "is_configured": true,
00:19:49.915        "data_offset": 2048,
00:19:49.915        "data_size": 63488
00:19:49.915      },
00:19:49.915      {
00:19:49.915        "name": "BaseBdev2",
00:19:49.915        "uuid": "6a48a678-c983-5c25-b631-f62f6936d624",
00:19:49.915        "is_configured": true,
00:19:49.915        "data_offset": 2048,
00:19:49.915        "data_size": 63488
00:19:49.915      }
00:19:49.915    ]
00:19:49.915  }'
00:19:49.915    17:03:42	-- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:19:49.915   17:03:42	-- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:19:49.915    17:03:42	-- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:19:49.915   17:03:42	-- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]]
00:19:49.915   17:03:42	-- bdev/bdev_raid.sh@617 -- # '[' true = true ']'
00:19:49.915   17:03:42	-- bdev/bdev_raid.sh@617 -- # '[' = false ']'
00:19:49.915  /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 617: [: =: unary operator expected
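The "unary operator expected" line above is a genuine script bug caught by the trace: at bdev_raid.sh line 617 the compared variable expanded to nothing, so the test collapsed to [ = false ] and [ saw only one operand (the script shrugs it off and continues at @642). A hedged sketch of the two conventional fixes, with var standing in for whatever name the script uses, since the trace does not show it:

  # quote the expansion and default it, so an empty value still yields two operands
  if [ "${var:-}" = false ]; then :; fi
  # or use [[ ]], which does not word-split and tolerates an empty left-hand side
  if [[ $var == false ]]; then :; fi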
00:19:49.915   17:03:42	-- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=2
00:19:49.915   17:03:42	-- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']'
00:19:49.915   17:03:42	-- bdev/bdev_raid.sh@644 -- # '[' 2 -gt 2 ']'
00:19:49.915   17:03:42	-- bdev/bdev_raid.sh@657 -- # local timeout=423
00:19:49.915   17:03:42	-- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout ))
00:19:49.915   17:03:42	-- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:19:49.915   17:03:42	-- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:19:49.915   17:03:42	-- bdev/bdev_raid.sh@184 -- # local process_type=rebuild
00:19:49.915   17:03:42	-- bdev/bdev_raid.sh@185 -- # local target=spare
00:19:49.915   17:03:42	-- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:19:49.915    17:03:42	-- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:19:49.915    17:03:42	-- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:19:49.915  [2024-11-19 17:03:42.738533] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576
00:19:50.174  [2024-11-19 17:03:42.866799] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576
00:19:50.174  [2024-11-19 17:03:42.867263] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576
00:19:50.174   17:03:42	-- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:19:50.174    "name": "raid_bdev1",
00:19:50.174    "uuid": "b9fe05e1-8427-4689-8075-ed778bb8d3c6",
00:19:50.174    "strip_size_kb": 0,
00:19:50.174    "state": "online",
00:19:50.174    "raid_level": "raid1",
00:19:50.174    "superblock": true,
00:19:50.174    "num_base_bdevs": 2,
00:19:50.174    "num_base_bdevs_discovered": 2,
00:19:50.174    "num_base_bdevs_operational": 2,
00:19:50.174    "process": {
00:19:50.174      "type": "rebuild",
00:19:50.174      "target": "spare",
00:19:50.174      "progress": {
00:19:50.174        "blocks": 22528,
00:19:50.174        "percent": 35
00:19:50.174      }
00:19:50.174    },
00:19:50.174    "base_bdevs_list": [
00:19:50.174      {
00:19:50.174        "name": "spare",
00:19:50.174        "uuid": "000d886d-63a0-5d6a-8ce4-c716b2cb3f85",
00:19:50.174        "is_configured": true,
00:19:50.174        "data_offset": 2048,
00:19:50.174        "data_size": 63488
00:19:50.174      },
00:19:50.174      {
00:19:50.174        "name": "BaseBdev2",
00:19:50.174        "uuid": "6a48a678-c983-5c25-b631-f62f6936d624",
00:19:50.174        "is_configured": true,
00:19:50.174        "data_offset": 2048,
00:19:50.174        "data_size": 63488
00:19:50.174      }
00:19:50.174    ]
00:19:50.174  }'
00:19:50.174    17:03:42	-- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:19:50.432   17:03:43	-- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:19:50.432    17:03:43	-- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:19:50.432   17:03:43	-- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]]
00:19:50.432   17:03:43	-- bdev/bdev_raid.sh@662 -- # sleep 1
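The @658/@662 pair above forms the rebuild progress poll: re-read the raid bdev once a second until the process block disappears or the 423-second timeout set at @657 runs out. A condensed sketch assembled from the jq filters already shown in the trace:

  rpc='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock'
  timeout=423
  while (( SECONDS < timeout )); do
      ptype=$($rpc bdev_raid_get_bdevs all |
              jq -r '.[] | select(.name == "raid_bdev1") | .process.type // "none"')
      [ "$ptype" = rebuild ] || break    # process reverts to "none" once rebuild completes
      sleep 1
  done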
00:19:50.432  [2024-11-19 17:03:43.132257] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720
00:19:50.432  [2024-11-19 17:03:43.133149] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720
00:19:50.432  [2024-11-19 17:03:43.243809] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720
00:19:50.432  [2024-11-19 17:03:43.244259] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720
00:19:50.691  [2024-11-19 17:03:43.482223] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864
00:19:50.950  [2024-11-19 17:03:43.584666] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864
00:19:51.515   17:03:44	-- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout ))
00:19:51.515   17:03:44	-- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:19:51.515   17:03:44	-- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:19:51.515   17:03:44	-- bdev/bdev_raid.sh@184 -- # local process_type=rebuild
00:19:51.515   17:03:44	-- bdev/bdev_raid.sh@185 -- # local target=spare
00:19:51.515   17:03:44	-- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:19:51.515    17:03:44	-- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:19:51.515    17:03:44	-- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:19:51.515  [2024-11-19 17:03:44.160086] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152
00:19:51.515  [2024-11-19 17:03:44.160930] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152
00:19:51.515   17:03:44	-- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:19:51.516    "name": "raid_bdev1",
00:19:51.516    "uuid": "b9fe05e1-8427-4689-8075-ed778bb8d3c6",
00:19:51.516    "strip_size_kb": 0,
00:19:51.516    "state": "online",
00:19:51.516    "raid_level": "raid1",
00:19:51.516    "superblock": true,
00:19:51.516    "num_base_bdevs": 2,
00:19:51.516    "num_base_bdevs_discovered": 2,
00:19:51.516    "num_base_bdevs_operational": 2,
00:19:51.516    "process": {
00:19:51.516      "type": "rebuild",
00:19:51.516      "target": "spare",
00:19:51.516      "progress": {
00:19:51.516        "blocks": 47104,
00:19:51.516        "percent": 74
00:19:51.516      }
00:19:51.516    },
00:19:51.516    "base_bdevs_list": [
00:19:51.516      {
00:19:51.516        "name": "spare",
00:19:51.516        "uuid": "000d886d-63a0-5d6a-8ce4-c716b2cb3f85",
00:19:51.516        "is_configured": true,
00:19:51.516        "data_offset": 2048,
00:19:51.516        "data_size": 63488
00:19:51.516      },
00:19:51.516      {
00:19:51.516        "name": "BaseBdev2",
00:19:51.516        "uuid": "6a48a678-c983-5c25-b631-f62f6936d624",
00:19:51.516        "is_configured": true,
00:19:51.516        "data_offset": 2048,
00:19:51.516        "data_size": 63488
00:19:51.516      }
00:19:51.516    ]
00:19:51.516  }'
00:19:51.516    17:03:44	-- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:19:51.774   17:03:44	-- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:19:51.774    17:03:44	-- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:19:51.774   17:03:44	-- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]]
00:19:51.774   17:03:44	-- bdev/bdev_raid.sh@662 -- # sleep 1
00:19:52.032  [2024-11-19 17:03:44.634032] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296
00:19:52.032  [2024-11-19 17:03:44.858364] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440
00:19:52.599  [2024-11-19 17:03:45.196321] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1
00:19:52.599  [2024-11-19 17:03:45.304350] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1
00:19:52.599  [2024-11-19 17:03:45.306949] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:19:52.599   17:03:45	-- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout ))
00:19:52.599   17:03:45	-- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:19:52.599   17:03:45	-- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:19:52.599   17:03:45	-- bdev/bdev_raid.sh@184 -- # local process_type=rebuild
00:19:52.599   17:03:45	-- bdev/bdev_raid.sh@185 -- # local target=spare
00:19:52.599   17:03:45	-- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:19:52.858    17:03:45	-- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:19:52.858    17:03:45	-- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:19:53.116   17:03:45	-- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:19:53.116    "name": "raid_bdev1",
00:19:53.116    "uuid": "b9fe05e1-8427-4689-8075-ed778bb8d3c6",
00:19:53.116    "strip_size_kb": 0,
00:19:53.116    "state": "online",
00:19:53.116    "raid_level": "raid1",
00:19:53.116    "superblock": true,
00:19:53.116    "num_base_bdevs": 2,
00:19:53.116    "num_base_bdevs_discovered": 2,
00:19:53.116    "num_base_bdevs_operational": 2,
00:19:53.116    "base_bdevs_list": [
00:19:53.116      {
00:19:53.116        "name": "spare",
00:19:53.116        "uuid": "000d886d-63a0-5d6a-8ce4-c716b2cb3f85",
00:19:53.116        "is_configured": true,
00:19:53.116        "data_offset": 2048,
00:19:53.116        "data_size": 63488
00:19:53.116      },
00:19:53.116      {
00:19:53.116        "name": "BaseBdev2",
00:19:53.116        "uuid": "6a48a678-c983-5c25-b631-f62f6936d624",
00:19:53.116        "is_configured": true,
00:19:53.116        "data_offset": 2048,
00:19:53.116        "data_size": 63488
00:19:53.116      }
00:19:53.116    ]
00:19:53.116  }'
00:19:53.116    17:03:45	-- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:19:53.116   17:03:45	-- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]]
00:19:53.116    17:03:45	-- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:19:53.116   17:03:45	-- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]]
00:19:53.116   17:03:45	-- bdev/bdev_raid.sh@660 -- # break
00:19:53.116   17:03:45	-- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none
00:19:53.116   17:03:45	-- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:19:53.116   17:03:45	-- bdev/bdev_raid.sh@184 -- # local process_type=none
00:19:53.116   17:03:45	-- bdev/bdev_raid.sh@185 -- # local target=none
00:19:53.116   17:03:45	-- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:19:53.116    17:03:45	-- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:19:53.116    17:03:45	-- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:19:53.374   17:03:46	-- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:19:53.374    "name": "raid_bdev1",
00:19:53.374    "uuid": "b9fe05e1-8427-4689-8075-ed778bb8d3c6",
00:19:53.374    "strip_size_kb": 0,
00:19:53.374    "state": "online",
00:19:53.374    "raid_level": "raid1",
00:19:53.374    "superblock": true,
00:19:53.374    "num_base_bdevs": 2,
00:19:53.374    "num_base_bdevs_discovered": 2,
00:19:53.374    "num_base_bdevs_operational": 2,
00:19:53.374    "base_bdevs_list": [
00:19:53.374      {
00:19:53.374        "name": "spare",
00:19:53.374        "uuid": "000d886d-63a0-5d6a-8ce4-c716b2cb3f85",
00:19:53.374        "is_configured": true,
00:19:53.374        "data_offset": 2048,
00:19:53.374        "data_size": 63488
00:19:53.374      },
00:19:53.374      {
00:19:53.374        "name": "BaseBdev2",
00:19:53.374        "uuid": "6a48a678-c983-5c25-b631-f62f6936d624",
00:19:53.374        "is_configured": true,
00:19:53.374        "data_offset": 2048,
00:19:53.374        "data_size": 63488
00:19:53.374      }
00:19:53.374    ]
00:19:53.374  }'
00:19:53.374    17:03:46	-- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:19:53.374   17:03:46	-- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]]
00:19:53.374    17:03:46	-- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:19:53.633   17:03:46	-- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]]
00:19:53.633   17:03:46	-- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:19:53.633   17:03:46	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:19:53.633   17:03:46	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:19:53.633   17:03:46	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:19:53.633   17:03:46	-- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:19:53.633   17:03:46	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2
00:19:53.633   17:03:46	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:19:53.633   17:03:46	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:19:53.633   17:03:46	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:19:53.633   17:03:46	-- bdev/bdev_raid.sh@125 -- # local tmp
00:19:53.633    17:03:46	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:19:53.633    17:03:46	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:19:53.633   17:03:46	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:19:53.633    "name": "raid_bdev1",
00:19:53.633    "uuid": "b9fe05e1-8427-4689-8075-ed778bb8d3c6",
00:19:53.633    "strip_size_kb": 0,
00:19:53.633    "state": "online",
00:19:53.633    "raid_level": "raid1",
00:19:53.633    "superblock": true,
00:19:53.633    "num_base_bdevs": 2,
00:19:53.633    "num_base_bdevs_discovered": 2,
00:19:53.633    "num_base_bdevs_operational": 2,
00:19:53.633    "base_bdevs_list": [
00:19:53.633      {
00:19:53.633        "name": "spare",
00:19:53.633        "uuid": "000d886d-63a0-5d6a-8ce4-c716b2cb3f85",
00:19:53.633        "is_configured": true,
00:19:53.633        "data_offset": 2048,
00:19:53.633        "data_size": 63488
00:19:53.633      },
00:19:53.633      {
00:19:53.633        "name": "BaseBdev2",
00:19:53.633        "uuid": "6a48a678-c983-5c25-b631-f62f6936d624",
00:19:53.633        "is_configured": true,
00:19:53.633        "data_offset": 2048,
00:19:53.633        "data_size": 63488
00:19:53.633      }
00:19:53.633    ]
00:19:53.633  }'
00:19:53.633   17:03:46	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:19:53.633   17:03:46	-- common/autotest_common.sh@10 -- # set +x
00:19:54.624   17:03:47	-- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1
00:19:54.883  [2024-11-19 17:03:47.487447] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:19:54.883  [2024-11-19 17:03:47.487513] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:19:54.883  
00:19:54.883                                                                                                  Latency(us)
00:19:54.883  
[2024-11-19T17:03:47.747Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:19:54.883  
[2024-11-19T17:03:47.747Z]  Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728)
00:19:54.883  	 raid_bdev1          :      10.89     107.76     323.27       0.00     0.00   12129.74     497.37  118339.29
00:19:54.883  
[2024-11-19T17:03:47.747Z]  ===================================================================================================================
00:19:54.883  
[2024-11-19T17:03:47.747Z]  Total                       :                107.76     323.27       0.00     0.00   12129.74     497.37  118339.29
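The summary is internally consistent: at 3 MiB per IO, the measured IOPS converts directly into the throughput column, and the ~10.89 s runtime roughly matches the span between the perform_tests call and the bdev_raid_delete above.

  echo '107.76 * 3' | bc    # 323.28 MiB/s, matching the 323.27 reported (rounding aside)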
00:19:54.883  [2024-11-19 17:03:47.512684] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:19:54.883  [2024-11-19 17:03:47.512754] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:19:54.883  [2024-11-19 17:03:47.512877] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:19:54.883  [2024-11-19 17:03:47.512893] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007e80 name raid_bdev1, state offline
00:19:54.883  0
00:19:54.883    17:03:47	-- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:19:54.883    17:03:47	-- bdev/bdev_raid.sh@671 -- # jq length
00:19:55.142   17:03:47	-- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]]
00:19:55.142   17:03:47	-- bdev/bdev_raid.sh@673 -- # '[' true = true ']'
00:19:55.142   17:03:47	-- bdev/bdev_raid.sh@675 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0
00:19:55.142   17:03:47	-- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock
00:19:55.142   17:03:47	-- bdev/nbd_common.sh@10 -- # bdev_list=('spare')
00:19:55.142   17:03:47	-- bdev/nbd_common.sh@10 -- # local bdev_list
00:19:55.142   17:03:47	-- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0')
00:19:55.142   17:03:47	-- bdev/nbd_common.sh@11 -- # local nbd_list
00:19:55.142   17:03:47	-- bdev/nbd_common.sh@12 -- # local i
00:19:55.142   17:03:47	-- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:19:55.142   17:03:47	-- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:19:55.142   17:03:47	-- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0
00:19:55.401  /dev/nbd0
00:19:55.401    17:03:48	-- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:19:55.401   17:03:48	-- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:19:55.401   17:03:48	-- common/autotest_common.sh@866 -- # local nbd_name=nbd0
00:19:55.401   17:03:48	-- common/autotest_common.sh@867 -- # local i
00:19:55.401   17:03:48	-- common/autotest_common.sh@869 -- # (( i = 1 ))
00:19:55.401   17:03:48	-- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:19:55.401   17:03:48	-- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions
00:19:55.401   17:03:48	-- common/autotest_common.sh@871 -- # break
00:19:55.401   17:03:48	-- common/autotest_common.sh@882 -- # (( i = 1 ))
00:19:55.401   17:03:48	-- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:19:55.402   17:03:48	-- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:19:55.402  1+0 records in
00:19:55.402  1+0 records out
00:19:55.402  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00063086 s, 6.5 MB/s
00:19:55.402    17:03:48	-- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:19:55.402   17:03:48	-- common/autotest_common.sh@884 -- # size=4096
00:19:55.402   17:03:48	-- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:19:55.402   17:03:48	-- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:19:55.402   17:03:48	-- common/autotest_common.sh@887 -- # return 0
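The waitfornbd helper traced at @866-@887 above gates on two conditions before declaring the device usable: nbd0 shows up in /proc/partitions, and a single 4 KiB O_DIRECT read through dd lands a non-empty file. A condensed sketch under those assumptions; the retry delay is not visible in this trace, so the sleep is a guess, and /tmp/nbdtest stands in for the repo-local scratch file:

  waitfornbd() {
      local nbd_name=$1 i
      for ((i = 1; i <= 20; i++)); do
          grep -q -w "$nbd_name" /proc/partitions && break
          sleep 0.1    # assumed pacing between retries
      done
      dd if=/dev/"$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct || return 1
      [ "$(stat -c %s /tmp/nbdtest)" != 0 ]    # the traced check: copied size != 0
  }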
00:19:55.402   17:03:48	-- bdev/nbd_common.sh@14 -- # (( i++ ))
00:19:55.402   17:03:48	-- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:19:55.402   17:03:48	-- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}"
00:19:55.402   17:03:48	-- bdev/bdev_raid.sh@677 -- # '[' -z BaseBdev2 ']'
00:19:55.402   17:03:48	-- bdev/bdev_raid.sh@680 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev2 /dev/nbd1
00:19:55.402   17:03:48	-- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock
00:19:55.402   17:03:48	-- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2')
00:19:55.402   17:03:48	-- bdev/nbd_common.sh@10 -- # local bdev_list
00:19:55.402   17:03:48	-- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1')
00:19:55.402   17:03:48	-- bdev/nbd_common.sh@11 -- # local nbd_list
00:19:55.402   17:03:48	-- bdev/nbd_common.sh@12 -- # local i
00:19:55.402   17:03:48	-- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:19:55.402   17:03:48	-- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:19:55.402   17:03:48	-- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev2 /dev/nbd1
00:19:55.660  /dev/nbd1
00:19:55.660    17:03:48	-- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:19:55.660   17:03:48	-- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:19:55.660   17:03:48	-- common/autotest_common.sh@866 -- # local nbd_name=nbd1
00:19:55.660   17:03:48	-- common/autotest_common.sh@867 -- # local i
00:19:55.660   17:03:48	-- common/autotest_common.sh@869 -- # (( i = 1 ))
00:19:55.660   17:03:48	-- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:19:55.660   17:03:48	-- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions
00:19:55.660   17:03:48	-- common/autotest_common.sh@871 -- # break
00:19:55.660   17:03:48	-- common/autotest_common.sh@882 -- # (( i = 1 ))
00:19:55.660   17:03:48	-- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:19:55.660   17:03:48	-- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:19:55.660  1+0 records in
00:19:55.660  1+0 records out
00:19:55.660  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000775541 s, 5.3 MB/s
00:19:55.660    17:03:48	-- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:19:55.660   17:03:48	-- common/autotest_common.sh@884 -- # size=4096
00:19:55.660   17:03:48	-- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:19:55.660   17:03:48	-- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:19:55.660   17:03:48	-- common/autotest_common.sh@887 -- # return 0
00:19:55.660   17:03:48	-- bdev/nbd_common.sh@14 -- # (( i++ ))
00:19:55.660   17:03:48	-- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:19:55.660   17:03:48	-- bdev/bdev_raid.sh@681 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1
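The cmp at @681 is the data-integrity verdict for the whole rebuild: the two RAID1 members, exported over NBD, must be byte-identical past the superblock region. The 1048576-byte skip is exactly the data_offset reported in every JSON dump above:

  # data_offset of 2048 blocks x 512-byte blocklen = 1048576 bytes of metadata to skip
  cmp -i $((2048 * 512)) /dev/nbd0 /dev/nbd1    # same as the -i 1048576 above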
00:19:55.919   17:03:48	-- bdev/bdev_raid.sh@682 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1
00:19:55.919   17:03:48	-- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock
00:19:55.919   17:03:48	-- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1')
00:19:55.919   17:03:48	-- bdev/nbd_common.sh@50 -- # local nbd_list
00:19:55.919   17:03:48	-- bdev/nbd_common.sh@51 -- # local i
00:19:55.919   17:03:48	-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:19:55.919   17:03:48	-- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1
00:19:56.177    17:03:48	-- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:19:56.177   17:03:48	-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:19:56.177   17:03:48	-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:19:56.177   17:03:48	-- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:19:56.177   17:03:48	-- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:19:56.177   17:03:48	-- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:19:56.177   17:03:48	-- bdev/nbd_common.sh@41 -- # break
00:19:56.177   17:03:48	-- bdev/nbd_common.sh@45 -- # return 0
00:19:56.177   17:03:48	-- bdev/bdev_raid.sh@684 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0
00:19:56.177   17:03:48	-- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock
00:19:56.177   17:03:48	-- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:19:56.177   17:03:48	-- bdev/nbd_common.sh@50 -- # local nbd_list
00:19:56.177   17:03:48	-- bdev/nbd_common.sh@51 -- # local i
00:19:56.177   17:03:48	-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:19:56.177   17:03:48	-- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0
00:19:56.438    17:03:49	-- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:19:56.438   17:03:49	-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:19:56.438   17:03:49	-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:19:56.438   17:03:49	-- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:19:56.438   17:03:49	-- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:19:56.438   17:03:49	-- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:19:56.438   17:03:49	-- bdev/nbd_common.sh@41 -- # break
00:19:56.438   17:03:49	-- bdev/nbd_common.sh@45 -- # return 0
00:19:56.438   17:03:49	-- bdev/bdev_raid.sh@692 -- # '[' true = true ']'
00:19:56.438   17:03:49	-- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}"
00:19:56.438   17:03:49	-- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev1 ']'
00:19:56.438   17:03:49	-- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1
00:19:56.709   17:03:49	-- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1
00:19:56.968  [2024-11-19 17:03:49.630545] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc
00:19:56.968  [2024-11-19 17:03:49.630676] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:19:56.968  [2024-11-19 17:03:49.630716] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x616000009080
00:19:56.968  [2024-11-19 17:03:49.630754] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:19:56.968  [2024-11-19 17:03:49.633687] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:19:56.968  [2024-11-19 17:03:49.633787] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:19:56.968  [2024-11-19 17:03:49.633900] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev1
00:19:56.968  [2024-11-19 17:03:49.633960] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:19:56.968  BaseBdev1
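Deleting and re-creating the passthru bdevs is the superblock persistence test: no bdev_raid_create follows, because the raid module's examine path (@3342 above) finds the on-disk superblock and re-claims each member by itself. When BaseBdev2 returns below with a newer superblock sequence number (3 versus the configuring raid bdev's 1), the stale raid_bdev1 is deleted and reassembled from scratch. The pattern, using the exact RPCs from the trace:

  rpc='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock'
  $rpc bdev_passthru_delete BaseBdev1
  $rpc bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1
  # expect: "raid superblock found on bdev BaseBdev1", then "bdev BaseBdev1 is claimed"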
00:19:56.968   17:03:49	-- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}"
00:19:56.969   17:03:49	-- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev2 ']'
00:19:56.969   17:03:49	-- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev2
00:19:57.227   17:03:49	-- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2
00:19:57.484  [2024-11-19 17:03:50.142769] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc
00:19:57.484  [2024-11-19 17:03:50.142954] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:19:57.484  [2024-11-19 17:03:50.142995] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x616000009980
00:19:57.484  [2024-11-19 17:03:50.143027] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:19:57.484  [2024-11-19 17:03:50.143520] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:19:57.484  [2024-11-19 17:03:50.143597] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2
00:19:57.484  [2024-11-19 17:03:50.143703] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev2
00:19:57.484  [2024-11-19 17:03:50.143717] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev2 (3) greater than existing raid bdev raid_bdev1 (1)
00:19:57.484  [2024-11-19 17:03:50.143726] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:19:57.484  [2024-11-19 17:03:50.143756] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009680 name raid_bdev1, state configuring
00:19:57.484  [2024-11-19 17:03:50.143807] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:19:57.484  BaseBdev2
00:19:57.484   17:03:50	-- bdev/bdev_raid.sh@701 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare
00:19:57.743   17:03:50	-- bdev/bdev_raid.sh@702 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare
00:19:58.002  [2024-11-19 17:03:50.658914] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay
00:19:58.002  [2024-11-19 17:03:50.659040] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:19:58.002  [2024-11-19 17:03:50.659103] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x616000009f80
00:19:58.002  [2024-11-19 17:03:50.659129] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:19:58.002  [2024-11-19 17:03:50.659705] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:19:58.002  [2024-11-19 17:03:50.659778] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare
00:19:58.002  [2024-11-19 17:03:50.659896] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev spare
00:19:58.002  [2024-11-19 17:03:50.659937] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:19:58.002  spare
00:19:58.002   17:03:50	-- bdev/bdev_raid.sh@704 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:19:58.002   17:03:50	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:19:58.002   17:03:50	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:19:58.002   17:03:50	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:19:58.002   17:03:50	-- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:19:58.002   17:03:50	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2
00:19:58.002   17:03:50	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:19:58.002   17:03:50	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:19:58.002   17:03:50	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:19:58.002   17:03:50	-- bdev/bdev_raid.sh@125 -- # local tmp
00:19:58.002    17:03:50	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:19:58.002    17:03:50	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:19:58.002  [2024-11-19 17:03:50.760079] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009c80
00:19:58.002  [2024-11-19 17:03:50.760137] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:19:58.002  [2024-11-19 17:03:50.760356] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000278c0
00:19:58.002  [2024-11-19 17:03:50.760894] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009c80
00:19:58.002  [2024-11-19 17:03:50.760921] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009c80
00:19:58.002  [2024-11-19 17:03:50.761097] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:19:58.260   17:03:50	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:19:58.260    "name": "raid_bdev1",
00:19:58.260    "uuid": "b9fe05e1-8427-4689-8075-ed778bb8d3c6",
00:19:58.260    "strip_size_kb": 0,
00:19:58.260    "state": "online",
00:19:58.260    "raid_level": "raid1",
00:19:58.260    "superblock": true,
00:19:58.260    "num_base_bdevs": 2,
00:19:58.260    "num_base_bdevs_discovered": 2,
00:19:58.260    "num_base_bdevs_operational": 2,
00:19:58.260    "base_bdevs_list": [
00:19:58.260      {
00:19:58.260        "name": "spare",
00:19:58.260        "uuid": "000d886d-63a0-5d6a-8ce4-c716b2cb3f85",
00:19:58.260        "is_configured": true,
00:19:58.260        "data_offset": 2048,
00:19:58.260        "data_size": 63488
00:19:58.260      },
00:19:58.260      {
00:19:58.260        "name": "BaseBdev2",
00:19:58.260        "uuid": "6a48a678-c983-5c25-b631-f62f6936d624",
00:19:58.260        "is_configured": true,
00:19:58.260        "data_offset": 2048,
00:19:58.260        "data_size": 63488
00:19:58.260      }
00:19:58.260    ]
00:19:58.260  }'
00:19:58.261   17:03:50	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:19:58.261   17:03:50	-- common/autotest_common.sh@10 -- # set +x
00:19:58.827   17:03:51	-- bdev/bdev_raid.sh@705 -- # verify_raid_bdev_process raid_bdev1 none none
00:19:58.827   17:03:51	-- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:19:58.827   17:03:51	-- bdev/bdev_raid.sh@184 -- # local process_type=none
00:19:58.827   17:03:51	-- bdev/bdev_raid.sh@185 -- # local target=none
00:19:58.827   17:03:51	-- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:19:58.827    17:03:51	-- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:19:58.827    17:03:51	-- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:19:59.395   17:03:51	-- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:19:59.395    "name": "raid_bdev1",
00:19:59.395    "uuid": "b9fe05e1-8427-4689-8075-ed778bb8d3c6",
00:19:59.395    "strip_size_kb": 0,
00:19:59.395    "state": "online",
00:19:59.395    "raid_level": "raid1",
00:19:59.395    "superblock": true,
00:19:59.395    "num_base_bdevs": 2,
00:19:59.395    "num_base_bdevs_discovered": 2,
00:19:59.395    "num_base_bdevs_operational": 2,
00:19:59.395    "base_bdevs_list": [
00:19:59.395      {
00:19:59.395        "name": "spare",
00:19:59.395        "uuid": "000d886d-63a0-5d6a-8ce4-c716b2cb3f85",
00:19:59.395        "is_configured": true,
00:19:59.395        "data_offset": 2048,
00:19:59.395        "data_size": 63488
00:19:59.395      },
00:19:59.395      {
00:19:59.395        "name": "BaseBdev2",
00:19:59.395        "uuid": "6a48a678-c983-5c25-b631-f62f6936d624",
00:19:59.395        "is_configured": true,
00:19:59.395        "data_offset": 2048,
00:19:59.395        "data_size": 63488
00:19:59.395      }
00:19:59.395    ]
00:19:59.395  }'
00:19:59.395    17:03:51	-- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:19:59.395   17:03:52	-- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]]
00:19:59.395    17:03:52	-- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:19:59.395   17:03:52	-- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]]
00:19:59.395    17:03:52	-- bdev/bdev_raid.sh@706 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:19:59.395    17:03:52	-- bdev/bdev_raid.sh@706 -- # jq -r '.[].base_bdevs_list[0].name'
00:19:59.654   17:03:52	-- bdev/bdev_raid.sh@706 -- # [[ spare == \s\p\a\r\e ]]
00:19:59.654   17:03:52	-- bdev/bdev_raid.sh@709 -- # killprocess 134339
00:19:59.654   17:03:52	-- common/autotest_common.sh@936 -- # '[' -z 134339 ']'
00:19:59.654   17:03:52	-- common/autotest_common.sh@940 -- # kill -0 134339
00:19:59.654    17:03:52	-- common/autotest_common.sh@941 -- # uname
00:19:59.654   17:03:52	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:19:59.654    17:03:52	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 134339
00:19:59.654   17:03:52	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:19:59.654   17:03:52	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:19:59.654   17:03:52	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 134339'
00:19:59.654  killing process with pid 134339
00:19:59.654   17:03:52	-- common/autotest_common.sh@955 -- # kill 134339
00:19:59.654  Received shutdown signal, test time was about 15.771949 seconds
00:19:59.654  
00:19:59.654                                                                                                  Latency(us)
00:19:59.654  
[2024-11-19T17:03:52.518Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:19:59.654  
[2024-11-19T17:03:52.518Z]  ===================================================================================================================
00:19:59.654  
[2024-11-19T17:03:52.518Z]  Total                       :                  0.00       0.00       0.00     0.00       0.00       0.00       0.00
00:19:59.654  [2024-11-19 17:03:52.384326] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:19:59.654  [2024-11-19 17:03:52.384446] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:19:59.654  [2024-11-19 17:03:52.384530] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:19:59.654  [2024-11-19 17:03:52.384545] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009c80 name raid_bdev1, state offline
00:19:59.654   17:03:52	-- common/autotest_common.sh@960 -- # wait 134339
00:19:59.654  [2024-11-19 17:03:52.415073] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:19:59.913   17:03:52	-- bdev/bdev_raid.sh@711 -- # return 0
00:19:59.913  
00:19:59.913  real	0m20.603s
00:19:59.913  user	0m33.782s
00:19:59.913  sys	0m2.875s
00:19:59.913   17:03:52	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:19:59.913   17:03:52	-- common/autotest_common.sh@10 -- # set +x
00:19:59.913  ************************************
00:19:59.913  END TEST raid_rebuild_test_sb_io
00:19:59.913  ************************************
00:20:00.171   17:03:52	-- bdev/bdev_raid.sh@734 -- # for n in 2 4
00:20:00.171   17:03:52	-- bdev/bdev_raid.sh@735 -- # run_test raid_rebuild_test raid_rebuild_test raid1 4 false false
00:20:00.171   17:03:52	-- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']'
00:20:00.171   17:03:52	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:20:00.171   17:03:52	-- common/autotest_common.sh@10 -- # set +x
00:20:00.171  ************************************
00:20:00.171  START TEST raid_rebuild_test
00:20:00.171  ************************************
00:20:00.171   17:03:52	-- common/autotest_common.sh@1114 -- # raid_rebuild_test raid1 4 false false
00:20:00.171   17:03:52	-- bdev/bdev_raid.sh@517 -- # local raid_level=raid1
00:20:00.171   17:03:52	-- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=4
00:20:00.171   17:03:52	-- bdev/bdev_raid.sh@519 -- # local superblock=false
00:20:00.171   17:03:52	-- bdev/bdev_raid.sh@520 -- # local background_io=false
00:20:00.171    17:03:52	-- bdev/bdev_raid.sh@521 -- # (( i = 1 ))
00:20:00.171    17:03:52	-- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs ))
00:20:00.171    17:03:52	-- bdev/bdev_raid.sh@521 -- # echo BaseBdev1
00:20:00.171    17:03:52	-- bdev/bdev_raid.sh@521 -- # (( i++ ))
00:20:00.171    17:03:52	-- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs ))
00:20:00.171    17:03:52	-- bdev/bdev_raid.sh@521 -- # echo BaseBdev2
00:20:00.171    17:03:52	-- bdev/bdev_raid.sh@521 -- # (( i++ ))
00:20:00.171    17:03:52	-- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs ))
00:20:00.171    17:03:52	-- bdev/bdev_raid.sh@521 -- # echo BaseBdev3
00:20:00.171    17:03:52	-- bdev/bdev_raid.sh@521 -- # (( i++ ))
00:20:00.171    17:03:52	-- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs ))
00:20:00.171    17:03:52	-- bdev/bdev_raid.sh@521 -- # echo BaseBdev4
00:20:00.171    17:03:52	-- bdev/bdev_raid.sh@521 -- # (( i++ ))
00:20:00.171    17:03:52	-- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs ))
00:20:00.171   17:03:52	-- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4')
00:20:00.171   17:03:52	-- bdev/bdev_raid.sh@521 -- # local base_bdevs
00:20:00.171   17:03:52	-- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1
00:20:00.171   17:03:52	-- bdev/bdev_raid.sh@523 -- # local strip_size
00:20:00.171   17:03:52	-- bdev/bdev_raid.sh@524 -- # local create_arg
00:20:00.171   17:03:52	-- bdev/bdev_raid.sh@525 -- # local raid_bdev_size
00:20:00.171   17:03:52	-- bdev/bdev_raid.sh@526 -- # local data_offset
00:20:00.171   17:03:52	-- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']'
00:20:00.171   17:03:52	-- bdev/bdev_raid.sh@536 -- # strip_size=0
00:20:00.171   17:03:52	-- bdev/bdev_raid.sh@539 -- # '[' false = true ']'
00:20:00.171   17:03:52	-- bdev/bdev_raid.sh@544 -- # raid_pid=134887
00:20:00.171   17:03:52	-- bdev/bdev_raid.sh@545 -- # waitforlisten 134887 /var/tmp/spdk-raid.sock
00:20:00.171   17:03:52	-- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid
00:20:00.171   17:03:52	-- common/autotest_common.sh@829 -- # '[' -z 134887 ']'
00:20:00.171   17:03:52	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock
00:20:00.171   17:03:52	-- common/autotest_common.sh@834 -- # local max_retries=100
00:20:00.171   17:03:52	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...'
00:20:00.171  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...
00:20:00.171   17:03:52	-- common/autotest_common.sh@838 -- # xtrace_disable
00:20:00.171   17:03:52	-- common/autotest_common.sh@10 -- # set +x
00:20:00.171  [2024-11-19 17:03:52.858811] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:20:00.171  I/O size of 3145728 is greater than zero copy threshold (65536).
00:20:00.171  Zero copy mechanism will not be used.
00:20:00.171  [2024-11-19 17:03:52.859140] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid134887 ]
00:20:00.171  [2024-11-19 17:03:53.016525] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:20:00.430  [2024-11-19 17:03:53.091711] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:20:00.430  [2024-11-19 17:03:53.139740] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
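
A note on the process being started here: the whole test is driven through a single bdevperf instance whose RPC socket the rpc.py calls below target. The following is a sketch assembled from the command line logged above, not a canonical invocation; the flag glosses are the usual bdevperf meanings, and -U is copied verbatim without interpretation.

# Sketch of the bdevperf launch logged above.
#   -r  RPC socket that the later rpc.py calls target
#   -T  run traffic only against raid_bdev1
#   -t 60 -w randrw -M 50  : 60 s of 50/50 random read/write
#   -o 3M -q 2  : 3 MiB I/Os at queue depth 2 (hence the zero-copy notice)
#   -z  : wait for an RPC to start the tests; -L bdev_raid enables debug logs
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock \
    -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid &
raid_pid=$!
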
00:20:01.366   17:03:53	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:20:01.366   17:03:53	-- common/autotest_common.sh@862 -- # return 0
00:20:01.366   17:03:53	-- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}"
00:20:01.366   17:03:53	-- bdev/bdev_raid.sh@549 -- # '[' false = true ']'
00:20:01.366   17:03:53	-- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1
00:20:01.366  BaseBdev1
00:20:01.366   17:03:54	-- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}"
00:20:01.366   17:03:54	-- bdev/bdev_raid.sh@549 -- # '[' false = true ']'
00:20:01.366   17:03:54	-- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2
00:20:01.626  BaseBdev2
00:20:01.626   17:03:54	-- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}"
00:20:01.626   17:03:54	-- bdev/bdev_raid.sh@549 -- # '[' false = true ']'
00:20:01.626   17:03:54	-- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3
00:20:01.883  BaseBdev3
00:20:01.883   17:03:54	-- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}"
00:20:01.883   17:03:54	-- bdev/bdev_raid.sh@549 -- # '[' false = true ']'
00:20:01.883   17:03:54	-- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4
00:20:02.140  BaseBdev4
00:20:02.140   17:03:54	-- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc
00:20:02.398  spare_malloc
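
Each base device created above is a RAM-backed malloc bdev made over the RPC socket. A minimal sketch of those calls:

# Sketch: create the four 32 MiB / 512 B-block base bdevs plus the spare's
# backing store, mirroring the rpc.py calls logged above.
RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
for b in BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4 spare_malloc; do
    $RPC bdev_malloc_create 32 512 -b "$b"
done
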
00:20:02.398   17:03:55	-- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000
00:20:02.964  spare_delay
00:20:02.964   17:03:55	-- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare
00:20:03.223  [2024-11-19 17:03:55.941065] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay
00:20:03.223  [2024-11-19 17:03:55.941245] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:20:03.223  [2024-11-19 17:03:55.941293] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x616000007880
00:20:03.223  [2024-11-19 17:03:55.941358] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:20:03.223  [2024-11-19 17:03:55.944730] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:20:03.223  [2024-11-19 17:03:55.944852] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare
00:20:03.223  spare
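
The rebuild target is deliberately not the raw malloc bdev: a delay bdev and a passthru bdev are stacked on top, so writes to it carry an artificial ~100 ms latency (100000 µs per the -w/-n arguments) and the final device is simply named "spare". A sketch of the stack built above:

# Sketch of the spare stack: spare_malloc -> spare_delay -> spare.
# -r/-t are average/p99 read latency, -w/-n average/p99 write latency (µs),
# so reads pass through untouched and writes are slowed by ~100 ms.
RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
$RPC bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000
$RPC bdev_passthru_create -b spare_delay -p spare
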
00:20:03.223   17:03:55	-- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1
00:20:03.481  [2024-11-19 17:03:56.237347] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:20:03.481  [2024-11-19 17:03:56.240255] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:20:03.481  [2024-11-19 17:03:56.240346] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:20:03.481  [2024-11-19 17:03:56.240384] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed
00:20:03.481  [2024-11-19 17:03:56.240500] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007e80
00:20:03.482  [2024-11-19 17:03:56.240515] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512
00:20:03.482  [2024-11-19 17:03:56.240759] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000022c0
00:20:03.482  [2024-11-19 17:03:56.241250] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007e80
00:20:03.482  [2024-11-19 17:03:56.241279] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000007e80
00:20:03.482  [2024-11-19 17:03:56.241588] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
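
Assembling the array itself is a single RPC. With superblock=false nothing is written to the members, so the RAID geometry exists only in the running process:

# Sketch: build the 4-way RAID1 over the base bdevs, as logged above.
RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
$RPC bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1
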
00:20:03.482   17:03:56	-- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4
00:20:03.482   17:03:56	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:20:03.482   17:03:56	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:20:03.482   17:03:56	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:20:03.482   17:03:56	-- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:20:03.482   17:03:56	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:20:03.482   17:03:56	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:20:03.482   17:03:56	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:20:03.482   17:03:56	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:20:03.482   17:03:56	-- bdev/bdev_raid.sh@125 -- # local tmp
00:20:03.482    17:03:56	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:20:03.482    17:03:56	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:20:03.741   17:03:56	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:20:03.741    "name": "raid_bdev1",
00:20:03.741    "uuid": "45bc2c01-dccc-4bef-8a6f-77a2be25cc3c",
00:20:03.741    "strip_size_kb": 0,
00:20:03.741    "state": "online",
00:20:03.741    "raid_level": "raid1",
00:20:03.741    "superblock": false,
00:20:03.741    "num_base_bdevs": 4,
00:20:03.741    "num_base_bdevs_discovered": 4,
00:20:03.741    "num_base_bdevs_operational": 4,
00:20:03.741    "base_bdevs_list": [
00:20:03.741      {
00:20:03.741        "name": "BaseBdev1",
00:20:03.741        "uuid": "29d286cf-9f3e-425f-b983-b509871475f0",
00:20:03.741        "is_configured": true,
00:20:03.741        "data_offset": 0,
00:20:03.741        "data_size": 65536
00:20:03.741      },
00:20:03.741      {
00:20:03.741        "name": "BaseBdev2",
00:20:03.741        "uuid": "758bdb6b-ed99-409b-8252-086fb4da0e87",
00:20:03.741        "is_configured": true,
00:20:03.741        "data_offset": 0,
00:20:03.741        "data_size": 65536
00:20:03.741      },
00:20:03.741      {
00:20:03.741        "name": "BaseBdev3",
00:20:03.741        "uuid": "421a8f45-88e4-4c99-ae74-aee9f566c16d",
00:20:03.741        "is_configured": true,
00:20:03.741        "data_offset": 0,
00:20:03.741        "data_size": 65536
00:20:03.741      },
00:20:03.741      {
00:20:03.742        "name": "BaseBdev4",
00:20:03.742        "uuid": "5587dab4-235e-41d8-979f-7c03424034ee",
00:20:03.742        "is_configured": true,
00:20:03.742        "data_offset": 0,
00:20:03.742        "data_size": 65536
00:20:03.742      }
00:20:03.742    ]
00:20:03.742  }'
00:20:03.742   17:03:56	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:20:03.742   17:03:56	-- common/autotest_common.sh@10 -- # set +x
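
The verify_raid_bdev_state helper traced above reduces to: fetch the bdev list as JSON, select the array by name, and assert on its fields. A condensed sketch (the real helper in bdev_raid.sh also checks the level, strip size, and per-member state):

# Condensed sketch of the state assertion pattern used throughout this log.
RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
info=$($RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
[[ $(jq -r .state <<< "$info") == online ]] || exit 1
[[ $(jq -r .num_base_bdevs_discovered <<< "$info") -eq 4 ]] || exit 1
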
00:20:04.679    17:03:57	-- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks'
00:20:04.679    17:03:57	-- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1
00:20:04.679  [2024-11-19 17:03:57.450093] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:20:04.679   17:03:57	-- bdev/bdev_raid.sh@567 -- # raid_bdev_size=65536
00:20:04.679    17:03:57	-- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:20:04.679    17:03:57	-- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset'
00:20:05.246   17:03:57	-- bdev/bdev_raid.sh@570 -- # data_offset=0
00:20:05.246   17:03:57	-- bdev/bdev_raid.sh@572 -- # '[' false = true ']'
00:20:05.246   17:03:57	-- bdev/bdev_raid.sh@576 -- # local write_unit_size
00:20:05.246   17:03:57	-- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0
00:20:05.246   17:03:57	-- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock
00:20:05.246   17:03:57	-- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1')
00:20:05.246   17:03:57	-- bdev/nbd_common.sh@10 -- # local bdev_list
00:20:05.246   17:03:57	-- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0')
00:20:05.246   17:03:57	-- bdev/nbd_common.sh@11 -- # local nbd_list
00:20:05.246   17:03:57	-- bdev/nbd_common.sh@12 -- # local i
00:20:05.246   17:03:57	-- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:20:05.246   17:03:57	-- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:20:05.246   17:03:57	-- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0
00:20:05.504  [2024-11-19 17:03:58.138160] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460
00:20:05.504  /dev/nbd0
00:20:05.504    17:03:58	-- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:20:05.504   17:03:58	-- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:20:05.504   17:03:58	-- common/autotest_common.sh@866 -- # local nbd_name=nbd0
00:20:05.504   17:03:58	-- common/autotest_common.sh@867 -- # local i
00:20:05.504   17:03:58	-- common/autotest_common.sh@869 -- # (( i = 1 ))
00:20:05.504   17:03:58	-- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:20:05.504   17:03:58	-- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions
00:20:05.504   17:03:58	-- common/autotest_common.sh@871 -- # break
00:20:05.504   17:03:58	-- common/autotest_common.sh@882 -- # (( i = 1 ))
00:20:05.504   17:03:58	-- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:20:05.504   17:03:58	-- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:20:05.504  1+0 records in
00:20:05.504  1+0 records out
00:20:05.504  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000415327 s, 9.9 MB/s
00:20:05.504    17:03:58	-- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:20:05.504   17:03:58	-- common/autotest_common.sh@884 -- # size=4096
00:20:05.504   17:03:58	-- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:20:05.504   17:03:58	-- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:20:05.504   17:03:58	-- common/autotest_common.sh@887 -- # return 0
00:20:05.504   17:03:58	-- bdev/nbd_common.sh@14 -- # (( i++ ))
00:20:05.504   17:03:58	-- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:20:05.504   17:03:58	-- bdev/bdev_raid.sh@580 -- # '[' raid1 = raid5f ']'
00:20:05.504   17:03:58	-- bdev/bdev_raid.sh@584 -- # write_unit_size=1
00:20:05.504   17:03:58	-- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct
00:20:13.615  65536+0 records in
00:20:13.615  65536+0 records out
00:20:13.615  33554432 bytes (34 MB, 32 MiB) copied, 7.57948 s, 4.4 MB/s
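
Before the array is degraded, it is exported as an NBD block device and every block is filled with random data; the rebuilt spare is compared against this content at the end of the test. Sketch of the sequence above:

# Sketch: export raid_bdev1 over NBD and fill all 65536 x 512 B = 32 MiB,
# matching the dd transfer logged above (33554432 bytes).
RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
$RPC nbd_start_disk raid_bdev1 /dev/nbd0
dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct
$RPC nbd_stop_disk /dev/nbd0
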
00:20:13.615   17:04:05	-- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0
00:20:13.615   17:04:05	-- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock
00:20:13.615   17:04:05	-- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:20:13.615   17:04:05	-- bdev/nbd_common.sh@50 -- # local nbd_list
00:20:13.615   17:04:05	-- bdev/nbd_common.sh@51 -- # local i
00:20:13.615   17:04:05	-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:20:13.615   17:04:05	-- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0
00:20:13.615    17:04:06	-- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:20:13.615  [2024-11-19 17:04:06.075310] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:20:13.615   17:04:06	-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:20:13.615   17:04:06	-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:20:13.615   17:04:06	-- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:20:13.615   17:04:06	-- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:20:13.615   17:04:06	-- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:20:13.615   17:04:06	-- bdev/nbd_common.sh@41 -- # break
00:20:13.615   17:04:06	-- bdev/nbd_common.sh@45 -- # return 0
00:20:13.615   17:04:06	-- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1
00:20:13.615  [2024-11-19 17:04:06.315062] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
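
Degrading the array is one RPC. Because the level is raid1, the bdev stays online; the discovered and operational counts in the JSON dump below simply drop from 4 to 3, with a null placeholder left in the removed slot:

# Sketch: pull BaseBdev1 out of the online array, as logged above.
RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
$RPC bdev_raid_remove_base_bdev BaseBdev1
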
00:20:13.615   17:04:06	-- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3
00:20:13.615   17:04:06	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:20:13.615   17:04:06	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:20:13.615   17:04:06	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:20:13.615   17:04:06	-- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:20:13.615   17:04:06	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:20:13.615   17:04:06	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:20:13.615   17:04:06	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:20:13.615   17:04:06	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:20:13.615   17:04:06	-- bdev/bdev_raid.sh@125 -- # local tmp
00:20:13.615    17:04:06	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:20:13.615    17:04:06	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:20:13.874   17:04:06	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:20:13.874    "name": "raid_bdev1",
00:20:13.874    "uuid": "45bc2c01-dccc-4bef-8a6f-77a2be25cc3c",
00:20:13.874    "strip_size_kb": 0,
00:20:13.874    "state": "online",
00:20:13.874    "raid_level": "raid1",
00:20:13.874    "superblock": false,
00:20:13.874    "num_base_bdevs": 4,
00:20:13.874    "num_base_bdevs_discovered": 3,
00:20:13.874    "num_base_bdevs_operational": 3,
00:20:13.874    "base_bdevs_list": [
00:20:13.874      {
00:20:13.874        "name": null,
00:20:13.874        "uuid": "00000000-0000-0000-0000-000000000000",
00:20:13.874        "is_configured": false,
00:20:13.874        "data_offset": 0,
00:20:13.874        "data_size": 65536
00:20:13.874      },
00:20:13.874      {
00:20:13.874        "name": "BaseBdev2",
00:20:13.874        "uuid": "758bdb6b-ed99-409b-8252-086fb4da0e87",
00:20:13.874        "is_configured": true,
00:20:13.874        "data_offset": 0,
00:20:13.874        "data_size": 65536
00:20:13.874      },
00:20:13.874      {
00:20:13.874        "name": "BaseBdev3",
00:20:13.874        "uuid": "421a8f45-88e4-4c99-ae74-aee9f566c16d",
00:20:13.874        "is_configured": true,
00:20:13.874        "data_offset": 0,
00:20:13.874        "data_size": 65536
00:20:13.874      },
00:20:13.874      {
00:20:13.874        "name": "BaseBdev4",
00:20:13.874        "uuid": "5587dab4-235e-41d8-979f-7c03424034ee",
00:20:13.874        "is_configured": true,
00:20:13.874        "data_offset": 0,
00:20:13.874        "data_size": 65536
00:20:13.874      }
00:20:13.874    ]
00:20:13.874  }'
00:20:13.874   17:04:06	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:20:13.874   17:04:06	-- common/autotest_common.sh@10 -- # set +x
00:20:14.440   17:04:07	-- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare
00:20:14.698  [2024-11-19 17:04:07.411324] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare
00:20:14.698  [2024-11-19 17:04:07.411395] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:20:14.698  [2024-11-19 17:04:07.415118] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d06080
00:20:14.698  [2024-11-19 17:04:07.417632] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:20:14.698   17:04:07	-- bdev/bdev_raid.sh@598 -- # sleep 1
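
Re-adding capacity is equally direct: attaching the spare into the vacated slot makes the raid module start a rebuild on its own ("Started rebuild" above), and the 100 ms delay bdev keeps that rebuild in flight long enough to be observed by the sleep-then-inspect pattern that follows:

# Sketch: attach the deliberately slow spare; rebuild starts automatically.
RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
$RPC bdev_raid_add_base_bdev raid_bdev1 spare
sleep 1
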
00:20:15.633   17:04:08	-- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:20:15.633   17:04:08	-- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:20:15.633   17:04:08	-- bdev/bdev_raid.sh@184 -- # local process_type=rebuild
00:20:15.633   17:04:08	-- bdev/bdev_raid.sh@185 -- # local target=spare
00:20:15.633   17:04:08	-- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:20:15.633    17:04:08	-- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:20:15.633    17:04:08	-- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:20:15.891   17:04:08	-- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:20:15.891    "name": "raid_bdev1",
00:20:15.891    "uuid": "45bc2c01-dccc-4bef-8a6f-77a2be25cc3c",
00:20:15.891    "strip_size_kb": 0,
00:20:15.891    "state": "online",
00:20:15.891    "raid_level": "raid1",
00:20:15.891    "superblock": false,
00:20:15.891    "num_base_bdevs": 4,
00:20:15.891    "num_base_bdevs_discovered": 4,
00:20:15.891    "num_base_bdevs_operational": 4,
00:20:15.891    "process": {
00:20:15.891      "type": "rebuild",
00:20:15.891      "target": "spare",
00:20:15.891      "progress": {
00:20:15.891        "blocks": 24576,
00:20:15.891        "percent": 37
00:20:15.891      }
00:20:15.891    },
00:20:15.891    "base_bdevs_list": [
00:20:15.891      {
00:20:15.891        "name": "spare",
00:20:15.891        "uuid": "2fadd0b4-437e-504a-8b94-df7f4a9e4cf9",
00:20:15.891        "is_configured": true,
00:20:15.891        "data_offset": 0,
00:20:15.891        "data_size": 65536
00:20:15.891      },
00:20:15.891      {
00:20:15.891        "name": "BaseBdev2",
00:20:15.891        "uuid": "758bdb6b-ed99-409b-8252-086fb4da0e87",
00:20:15.891        "is_configured": true,
00:20:15.891        "data_offset": 0,
00:20:15.891        "data_size": 65536
00:20:15.891      },
00:20:15.891      {
00:20:15.891        "name": "BaseBdev3",
00:20:15.891        "uuid": "421a8f45-88e4-4c99-ae74-aee9f566c16d",
00:20:15.891        "is_configured": true,
00:20:15.891        "data_offset": 0,
00:20:15.891        "data_size": 65536
00:20:15.891      },
00:20:15.891      {
00:20:15.891        "name": "BaseBdev4",
00:20:15.891        "uuid": "5587dab4-235e-41d8-979f-7c03424034ee",
00:20:15.891        "is_configured": true,
00:20:15.891        "data_offset": 0,
00:20:15.891        "data_size": 65536
00:20:15.891      }
00:20:15.891    ]
00:20:15.891  }'
00:20:15.891    17:04:08	-- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:20:16.150   17:04:08	-- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:20:16.150    17:04:08	-- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:20:16.150   17:04:08	-- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]]
00:20:16.150   17:04:08	-- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare
00:20:16.408  [2024-11-19 17:04:09.035506] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:20:16.408  [2024-11-19 17:04:09.130075] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device
00:20:16.408  [2024-11-19 17:04:09.130215] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
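
The first failure-injection case removes the rebuild target itself while the rebuild is running; the "No such device" warning above is the expected way for the process to finish. Sketch:

# Sketch: abort the rebuild by removing its target. The array falls back
# to 3 discovered / 3 operational members, as the JSON below confirms.
RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
$RPC bdev_raid_remove_base_bdev spare
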
00:20:16.408   17:04:09	-- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3
00:20:16.408   17:04:09	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:20:16.408   17:04:09	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:20:16.408   17:04:09	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:20:16.408   17:04:09	-- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:20:16.408   17:04:09	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:20:16.408   17:04:09	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:20:16.408   17:04:09	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:20:16.408   17:04:09	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:20:16.408   17:04:09	-- bdev/bdev_raid.sh@125 -- # local tmp
00:20:16.408    17:04:09	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:20:16.408    17:04:09	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:20:16.666   17:04:09	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:20:16.666    "name": "raid_bdev1",
00:20:16.666    "uuid": "45bc2c01-dccc-4bef-8a6f-77a2be25cc3c",
00:20:16.666    "strip_size_kb": 0,
00:20:16.666    "state": "online",
00:20:16.666    "raid_level": "raid1",
00:20:16.666    "superblock": false,
00:20:16.666    "num_base_bdevs": 4,
00:20:16.666    "num_base_bdevs_discovered": 3,
00:20:16.666    "num_base_bdevs_operational": 3,
00:20:16.666    "base_bdevs_list": [
00:20:16.666      {
00:20:16.666        "name": null,
00:20:16.666        "uuid": "00000000-0000-0000-0000-000000000000",
00:20:16.666        "is_configured": false,
00:20:16.666        "data_offset": 0,
00:20:16.666        "data_size": 65536
00:20:16.666      },
00:20:16.666      {
00:20:16.666        "name": "BaseBdev2",
00:20:16.666        "uuid": "758bdb6b-ed99-409b-8252-086fb4da0e87",
00:20:16.666        "is_configured": true,
00:20:16.666        "data_offset": 0,
00:20:16.666        "data_size": 65536
00:20:16.666      },
00:20:16.666      {
00:20:16.666        "name": "BaseBdev3",
00:20:16.666        "uuid": "421a8f45-88e4-4c99-ae74-aee9f566c16d",
00:20:16.666        "is_configured": true,
00:20:16.666        "data_offset": 0,
00:20:16.666        "data_size": 65536
00:20:16.666      },
00:20:16.666      {
00:20:16.666        "name": "BaseBdev4",
00:20:16.666        "uuid": "5587dab4-235e-41d8-979f-7c03424034ee",
00:20:16.666        "is_configured": true,
00:20:16.666        "data_offset": 0,
00:20:16.666        "data_size": 65536
00:20:16.666      }
00:20:16.666    ]
00:20:16.666  }'
00:20:16.666   17:04:09	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:20:16.666   17:04:09	-- common/autotest_common.sh@10 -- # set +x
00:20:17.267   17:04:10	-- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none
00:20:17.267   17:04:10	-- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:20:17.267   17:04:10	-- bdev/bdev_raid.sh@184 -- # local process_type=none
00:20:17.267   17:04:10	-- bdev/bdev_raid.sh@185 -- # local target=none
00:20:17.267   17:04:10	-- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:20:17.267    17:04:10	-- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:20:17.267    17:04:10	-- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:20:17.834   17:04:10	-- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:20:17.834    "name": "raid_bdev1",
00:20:17.834    "uuid": "45bc2c01-dccc-4bef-8a6f-77a2be25cc3c",
00:20:17.834    "strip_size_kb": 0,
00:20:17.834    "state": "online",
00:20:17.834    "raid_level": "raid1",
00:20:17.834    "superblock": false,
00:20:17.834    "num_base_bdevs": 4,
00:20:17.834    "num_base_bdevs_discovered": 3,
00:20:17.834    "num_base_bdevs_operational": 3,
00:20:17.834    "base_bdevs_list": [
00:20:17.834      {
00:20:17.834        "name": null,
00:20:17.834        "uuid": "00000000-0000-0000-0000-000000000000",
00:20:17.834        "is_configured": false,
00:20:17.834        "data_offset": 0,
00:20:17.834        "data_size": 65536
00:20:17.834      },
00:20:17.834      {
00:20:17.834        "name": "BaseBdev2",
00:20:17.834        "uuid": "758bdb6b-ed99-409b-8252-086fb4da0e87",
00:20:17.834        "is_configured": true,
00:20:17.834        "data_offset": 0,
00:20:17.834        "data_size": 65536
00:20:17.834      },
00:20:17.834      {
00:20:17.834        "name": "BaseBdev3",
00:20:17.834        "uuid": "421a8f45-88e4-4c99-ae74-aee9f566c16d",
00:20:17.834        "is_configured": true,
00:20:17.834        "data_offset": 0,
00:20:17.834        "data_size": 65536
00:20:17.834      },
00:20:17.834      {
00:20:17.834        "name": "BaseBdev4",
00:20:17.834        "uuid": "5587dab4-235e-41d8-979f-7c03424034ee",
00:20:17.834        "is_configured": true,
00:20:17.834        "data_offset": 0,
00:20:17.834        "data_size": 65536
00:20:17.834      }
00:20:17.834    ]
00:20:17.834  }'
00:20:17.834    17:04:10	-- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:20:17.834   17:04:10	-- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]]
00:20:17.834    17:04:10	-- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:20:17.834   17:04:10	-- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]]
00:20:17.834   17:04:10	-- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare
00:20:18.092  [2024-11-19 17:04:10.775340] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare
00:20:18.092  [2024-11-19 17:04:10.775407] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:20:18.092  [2024-11-19 17:04:10.779216] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d06220
00:20:18.092  [2024-11-19 17:04:10.781667] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:20:18.092   17:04:10	-- bdev/bdev_raid.sh@614 -- # sleep 1
00:20:19.023   17:04:11	-- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:20:19.023   17:04:11	-- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:20:19.023   17:04:11	-- bdev/bdev_raid.sh@184 -- # local process_type=rebuild
00:20:19.023   17:04:11	-- bdev/bdev_raid.sh@185 -- # local target=spare
00:20:19.023   17:04:11	-- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:20:19.023    17:04:11	-- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:20:19.023    17:04:11	-- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:20:19.280   17:04:12	-- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:20:19.280    "name": "raid_bdev1",
00:20:19.280    "uuid": "45bc2c01-dccc-4bef-8a6f-77a2be25cc3c",
00:20:19.280    "strip_size_kb": 0,
00:20:19.280    "state": "online",
00:20:19.280    "raid_level": "raid1",
00:20:19.280    "superblock": false,
00:20:19.280    "num_base_bdevs": 4,
00:20:19.280    "num_base_bdevs_discovered": 4,
00:20:19.280    "num_base_bdevs_operational": 4,
00:20:19.280    "process": {
00:20:19.280      "type": "rebuild",
00:20:19.280      "target": "spare",
00:20:19.280      "progress": {
00:20:19.280        "blocks": 26624,
00:20:19.280        "percent": 40
00:20:19.280      }
00:20:19.280    },
00:20:19.280    "base_bdevs_list": [
00:20:19.280      {
00:20:19.280        "name": "spare",
00:20:19.280        "uuid": "2fadd0b4-437e-504a-8b94-df7f4a9e4cf9",
00:20:19.280        "is_configured": true,
00:20:19.280        "data_offset": 0,
00:20:19.280        "data_size": 65536
00:20:19.280      },
00:20:19.280      {
00:20:19.280        "name": "BaseBdev2",
00:20:19.280        "uuid": "758bdb6b-ed99-409b-8252-086fb4da0e87",
00:20:19.280        "is_configured": true,
00:20:19.280        "data_offset": 0,
00:20:19.280        "data_size": 65536
00:20:19.280      },
00:20:19.280      {
00:20:19.280        "name": "BaseBdev3",
00:20:19.280        "uuid": "421a8f45-88e4-4c99-ae74-aee9f566c16d",
00:20:19.280        "is_configured": true,
00:20:19.280        "data_offset": 0,
00:20:19.280        "data_size": 65536
00:20:19.280      },
00:20:19.280      {
00:20:19.280        "name": "BaseBdev4",
00:20:19.280        "uuid": "5587dab4-235e-41d8-979f-7c03424034ee",
00:20:19.280        "is_configured": true,
00:20:19.280        "data_offset": 0,
00:20:19.280        "data_size": 65536
00:20:19.280      }
00:20:19.280    ]
00:20:19.280  }'
00:20:19.280    17:04:12	-- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:20:19.538   17:04:12	-- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:20:19.538    17:04:12	-- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:20:19.538   17:04:12	-- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]]
00:20:19.538   17:04:12	-- bdev/bdev_raid.sh@617 -- # '[' false = true ']'
00:20:19.538   17:04:12	-- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=4
00:20:19.538   17:04:12	-- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']'
00:20:19.538   17:04:12	-- bdev/bdev_raid.sh@644 -- # '[' 4 -gt 2 ']'
00:20:19.538   17:04:12	-- bdev/bdev_raid.sh@646 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2
00:20:19.797  [2024-11-19 17:04:12.469313] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:20:19.797  [2024-11-19 17:04:12.493950] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000d06220
00:20:19.797   17:04:12	-- bdev/bdev_raid.sh@649 -- # base_bdevs[1]=
00:20:19.797   17:04:12	-- bdev/bdev_raid.sh@650 -- # (( num_base_bdevs_operational-- ))
00:20:19.797   17:04:12	-- bdev/bdev_raid.sh@653 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:20:19.797   17:04:12	-- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:20:19.797   17:04:12	-- bdev/bdev_raid.sh@184 -- # local process_type=rebuild
00:20:19.797   17:04:12	-- bdev/bdev_raid.sh@185 -- # local target=spare
00:20:19.797   17:04:12	-- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:20:19.797    17:04:12	-- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:20:19.797    17:04:12	-- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:20:20.055   17:04:12	-- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:20:20.055    "name": "raid_bdev1",
00:20:20.055    "uuid": "45bc2c01-dccc-4bef-8a6f-77a2be25cc3c",
00:20:20.055    "strip_size_kb": 0,
00:20:20.055    "state": "online",
00:20:20.055    "raid_level": "raid1",
00:20:20.055    "superblock": false,
00:20:20.055    "num_base_bdevs": 4,
00:20:20.055    "num_base_bdevs_discovered": 3,
00:20:20.055    "num_base_bdevs_operational": 3,
00:20:20.055    "process": {
00:20:20.055      "type": "rebuild",
00:20:20.055      "target": "spare",
00:20:20.055      "progress": {
00:20:20.055        "blocks": 38912,
00:20:20.055        "percent": 59
00:20:20.055      }
00:20:20.055    },
00:20:20.055    "base_bdevs_list": [
00:20:20.055      {
00:20:20.055        "name": "spare",
00:20:20.055        "uuid": "2fadd0b4-437e-504a-8b94-df7f4a9e4cf9",
00:20:20.055        "is_configured": true,
00:20:20.055        "data_offset": 0,
00:20:20.055        "data_size": 65536
00:20:20.055      },
00:20:20.055      {
00:20:20.055        "name": null,
00:20:20.055        "uuid": "00000000-0000-0000-0000-000000000000",
00:20:20.055        "is_configured": false,
00:20:20.055        "data_offset": 0,
00:20:20.055        "data_size": 65536
00:20:20.055      },
00:20:20.055      {
00:20:20.055        "name": "BaseBdev3",
00:20:20.055        "uuid": "421a8f45-88e4-4c99-ae74-aee9f566c16d",
00:20:20.055        "is_configured": true,
00:20:20.055        "data_offset": 0,
00:20:20.055        "data_size": 65536
00:20:20.055      },
00:20:20.055      {
00:20:20.055        "name": "BaseBdev4",
00:20:20.055        "uuid": "5587dab4-235e-41d8-979f-7c03424034ee",
00:20:20.055        "is_configured": true,
00:20:20.055        "data_offset": 0,
00:20:20.055        "data_size": 65536
00:20:20.055      }
00:20:20.055    ]
00:20:20.055  }'
00:20:20.055    17:04:12	-- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:20:20.055   17:04:12	-- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:20:20.055    17:04:12	-- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:20:20.055   17:04:12	-- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]]
00:20:20.055   17:04:12	-- bdev/bdev_raid.sh@657 -- # local timeout=453
00:20:20.055   17:04:12	-- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout ))
00:20:20.055   17:04:12	-- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:20:20.055   17:04:12	-- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:20:20.055   17:04:12	-- bdev/bdev_raid.sh@184 -- # local process_type=rebuild
00:20:20.055   17:04:12	-- bdev/bdev_raid.sh@185 -- # local target=spare
00:20:20.055   17:04:12	-- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:20:20.055    17:04:12	-- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:20:20.055    17:04:12	-- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:20:20.622   17:04:13	-- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:20:20.622    "name": "raid_bdev1",
00:20:20.622    "uuid": "45bc2c01-dccc-4bef-8a6f-77a2be25cc3c",
00:20:20.622    "strip_size_kb": 0,
00:20:20.622    "state": "online",
00:20:20.622    "raid_level": "raid1",
00:20:20.622    "superblock": false,
00:20:20.622    "num_base_bdevs": 4,
00:20:20.622    "num_base_bdevs_discovered": 3,
00:20:20.622    "num_base_bdevs_operational": 3,
00:20:20.622    "process": {
00:20:20.622      "type": "rebuild",
00:20:20.622      "target": "spare",
00:20:20.622      "progress": {
00:20:20.622        "blocks": 47104,
00:20:20.622        "percent": 71
00:20:20.622      }
00:20:20.622    },
00:20:20.622    "base_bdevs_list": [
00:20:20.622      {
00:20:20.622        "name": "spare",
00:20:20.622        "uuid": "2fadd0b4-437e-504a-8b94-df7f4a9e4cf9",
00:20:20.622        "is_configured": true,
00:20:20.622        "data_offset": 0,
00:20:20.622        "data_size": 65536
00:20:20.622      },
00:20:20.622      {
00:20:20.622        "name": null,
00:20:20.622        "uuid": "00000000-0000-0000-0000-000000000000",
00:20:20.622        "is_configured": false,
00:20:20.622        "data_offset": 0,
00:20:20.622        "data_size": 65536
00:20:20.622      },
00:20:20.622      {
00:20:20.622        "name": "BaseBdev3",
00:20:20.622        "uuid": "421a8f45-88e4-4c99-ae74-aee9f566c16d",
00:20:20.622        "is_configured": true,
00:20:20.622        "data_offset": 0,
00:20:20.622        "data_size": 65536
00:20:20.622      },
00:20:20.622      {
00:20:20.622        "name": "BaseBdev4",
00:20:20.622        "uuid": "5587dab4-235e-41d8-979f-7c03424034ee",
00:20:20.622        "is_configured": true,
00:20:20.622        "data_offset": 0,
00:20:20.622        "data_size": 65536
00:20:20.622      }
00:20:20.622    ]
00:20:20.622  }'
00:20:20.622    17:04:13	-- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:20:20.622   17:04:13	-- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:20:20.622    17:04:13	-- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:20:20.622   17:04:13	-- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]]
00:20:20.622   17:04:13	-- bdev/bdev_raid.sh@662 -- # sleep 1
00:20:21.188  [2024-11-19 17:04:14.008173] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1
00:20:21.188  [2024-11-19 17:04:14.008315] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1
00:20:21.188  [2024-11-19 17:04:14.008497] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:20:21.754   17:04:14	-- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout ))
00:20:21.754   17:04:14	-- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:20:21.754   17:04:14	-- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:20:21.754   17:04:14	-- bdev/bdev_raid.sh@184 -- # local process_type=rebuild
00:20:21.754   17:04:14	-- bdev/bdev_raid.sh@185 -- # local target=spare
00:20:21.754   17:04:14	-- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:20:21.754    17:04:14	-- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:20:21.754    17:04:14	-- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:20:22.012   17:04:14	-- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:20:22.012    "name": "raid_bdev1",
00:20:22.012    "uuid": "45bc2c01-dccc-4bef-8a6f-77a2be25cc3c",
00:20:22.012    "strip_size_kb": 0,
00:20:22.012    "state": "online",
00:20:22.012    "raid_level": "raid1",
00:20:22.012    "superblock": false,
00:20:22.012    "num_base_bdevs": 4,
00:20:22.012    "num_base_bdevs_discovered": 3,
00:20:22.012    "num_base_bdevs_operational": 3,
00:20:22.012    "base_bdevs_list": [
00:20:22.012      {
00:20:22.012        "name": "spare",
00:20:22.012        "uuid": "2fadd0b4-437e-504a-8b94-df7f4a9e4cf9",
00:20:22.012        "is_configured": true,
00:20:22.012        "data_offset": 0,
00:20:22.012        "data_size": 65536
00:20:22.012      },
00:20:22.012      {
00:20:22.012        "name": null,
00:20:22.012        "uuid": "00000000-0000-0000-0000-000000000000",
00:20:22.012        "is_configured": false,
00:20:22.012        "data_offset": 0,
00:20:22.012        "data_size": 65536
00:20:22.012      },
00:20:22.012      {
00:20:22.012        "name": "BaseBdev3",
00:20:22.012        "uuid": "421a8f45-88e4-4c99-ae74-aee9f566c16d",
00:20:22.012        "is_configured": true,
00:20:22.012        "data_offset": 0,
00:20:22.012        "data_size": 65536
00:20:22.012      },
00:20:22.012      {
00:20:22.012        "name": "BaseBdev4",
00:20:22.012        "uuid": "5587dab4-235e-41d8-979f-7c03424034ee",
00:20:22.012        "is_configured": true,
00:20:22.012        "data_offset": 0,
00:20:22.012        "data_size": 65536
00:20:22.012      }
00:20:22.012    ]
00:20:22.012  }'
00:20:22.013    17:04:14	-- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:20:22.013   17:04:14	-- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]]
00:20:22.013    17:04:14	-- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:20:22.013   17:04:14	-- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]]
00:20:22.013   17:04:14	-- bdev/bdev_raid.sh@660 -- # break
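
Completion is detected with a bounded polling loop rather than a blocking wait. A sketch of the pattern behind the timeout/sleep trace above, using the local timeout=453 seen in the log:

# Sketch of the completion wait: poll the process fields until the rebuild
# stops reporting type=rebuild, bounded by a $SECONDS-based wall clock.
RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
timeout=453
while (( SECONDS < timeout )); do
    ptype=$($RPC bdev_raid_get_bdevs all |
            jq -r '.[] | select(.name == "raid_bdev1") | .process.type // "none"')
    [[ $ptype == rebuild ]] || break
    sleep 1
done
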
00:20:22.013   17:04:14	-- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none
00:20:22.013   17:04:14	-- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:20:22.013   17:04:14	-- bdev/bdev_raid.sh@184 -- # local process_type=none
00:20:22.013   17:04:14	-- bdev/bdev_raid.sh@185 -- # local target=none
00:20:22.013   17:04:14	-- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:20:22.013    17:04:14	-- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:20:22.013    17:04:14	-- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:20:22.271   17:04:15	-- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:20:22.271    "name": "raid_bdev1",
00:20:22.271    "uuid": "45bc2c01-dccc-4bef-8a6f-77a2be25cc3c",
00:20:22.271    "strip_size_kb": 0,
00:20:22.271    "state": "online",
00:20:22.271    "raid_level": "raid1",
00:20:22.271    "superblock": false,
00:20:22.271    "num_base_bdevs": 4,
00:20:22.271    "num_base_bdevs_discovered": 3,
00:20:22.271    "num_base_bdevs_operational": 3,
00:20:22.271    "base_bdevs_list": [
00:20:22.271      {
00:20:22.271        "name": "spare",
00:20:22.271        "uuid": "2fadd0b4-437e-504a-8b94-df7f4a9e4cf9",
00:20:22.271        "is_configured": true,
00:20:22.271        "data_offset": 0,
00:20:22.271        "data_size": 65536
00:20:22.271      },
00:20:22.271      {
00:20:22.271        "name": null,
00:20:22.271        "uuid": "00000000-0000-0000-0000-000000000000",
00:20:22.271        "is_configured": false,
00:20:22.271        "data_offset": 0,
00:20:22.271        "data_size": 65536
00:20:22.271      },
00:20:22.271      {
00:20:22.271        "name": "BaseBdev3",
00:20:22.271        "uuid": "421a8f45-88e4-4c99-ae74-aee9f566c16d",
00:20:22.271        "is_configured": true,
00:20:22.271        "data_offset": 0,
00:20:22.271        "data_size": 65536
00:20:22.271      },
00:20:22.271      {
00:20:22.271        "name": "BaseBdev4",
00:20:22.271        "uuid": "5587dab4-235e-41d8-979f-7c03424034ee",
00:20:22.271        "is_configured": true,
00:20:22.271        "data_offset": 0,
00:20:22.271        "data_size": 65536
00:20:22.271      }
00:20:22.271    ]
00:20:22.271  }'
00:20:22.271    17:04:15	-- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:20:22.271   17:04:15	-- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]]
00:20:22.271    17:04:15	-- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:20:22.529   17:04:15	-- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]]
00:20:22.529   17:04:15	-- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3
00:20:22.529   17:04:15	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:20:22.529   17:04:15	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:20:22.529   17:04:15	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:20:22.529   17:04:15	-- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:20:22.529   17:04:15	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:20:22.529   17:04:15	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:20:22.529   17:04:15	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:20:22.529   17:04:15	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:20:22.529   17:04:15	-- bdev/bdev_raid.sh@125 -- # local tmp
00:20:22.529    17:04:15	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:20:22.529    17:04:15	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:20:22.788   17:04:15	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:20:22.788    "name": "raid_bdev1",
00:20:22.788    "uuid": "45bc2c01-dccc-4bef-8a6f-77a2be25cc3c",
00:20:22.788    "strip_size_kb": 0,
00:20:22.788    "state": "online",
00:20:22.788    "raid_level": "raid1",
00:20:22.788    "superblock": false,
00:20:22.788    "num_base_bdevs": 4,
00:20:22.788    "num_base_bdevs_discovered": 3,
00:20:22.788    "num_base_bdevs_operational": 3,
00:20:22.788    "base_bdevs_list": [
00:20:22.788      {
00:20:22.788        "name": "spare",
00:20:22.788        "uuid": "2fadd0b4-437e-504a-8b94-df7f4a9e4cf9",
00:20:22.788        "is_configured": true,
00:20:22.788        "data_offset": 0,
00:20:22.788        "data_size": 65536
00:20:22.788      },
00:20:22.788      {
00:20:22.788        "name": null,
00:20:22.788        "uuid": "00000000-0000-0000-0000-000000000000",
00:20:22.788        "is_configured": false,
00:20:22.788        "data_offset": 0,
00:20:22.788        "data_size": 65536
00:20:22.788      },
00:20:22.788      {
00:20:22.788        "name": "BaseBdev3",
00:20:22.788        "uuid": "421a8f45-88e4-4c99-ae74-aee9f566c16d",
00:20:22.788        "is_configured": true,
00:20:22.788        "data_offset": 0,
00:20:22.788        "data_size": 65536
00:20:22.788      },
00:20:22.788      {
00:20:22.788        "name": "BaseBdev4",
00:20:22.788        "uuid": "5587dab4-235e-41d8-979f-7c03424034ee",
00:20:22.788        "is_configured": true,
00:20:22.788        "data_offset": 0,
00:20:22.788        "data_size": 65536
00:20:22.788      }
00:20:22.788    ]
00:20:22.788  }'
00:20:22.788   17:04:15	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:20:22.788   17:04:15	-- common/autotest_common.sh@10 -- # set +x
00:20:23.355   17:04:16	-- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1
00:20:23.614  [2024-11-19 17:04:16.394539] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:20:23.614  [2024-11-19 17:04:16.394612] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:20:23.614  [2024-11-19 17:04:16.394755] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:20:23.614  [2024-11-19 17:04:16.394871] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:20:23.614  [2024-11-19 17:04:16.394885] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007e80 name raid_bdev1, state offline
00:20:23.614    17:04:16	-- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:20:23.614    17:04:16	-- bdev/bdev_raid.sh@671 -- # jq length
00:20:23.872   17:04:16	-- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]]
00:20:23.872   17:04:16	-- bdev/bdev_raid.sh@673 -- # '[' false = true ']'
00:20:23.872   17:04:16	-- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1'
00:20:23.872   17:04:16	-- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock
00:20:23.872   17:04:16	-- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare')
00:20:23.872   17:04:16	-- bdev/nbd_common.sh@10 -- # local bdev_list
00:20:23.872   17:04:16	-- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:20:23.872   17:04:16	-- bdev/nbd_common.sh@11 -- # local nbd_list
00:20:23.872   17:04:16	-- bdev/nbd_common.sh@12 -- # local i
00:20:23.872   17:04:16	-- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:20:23.872   17:04:16	-- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:20:23.872   17:04:16	-- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0
00:20:24.439  /dev/nbd0
00:20:24.439    17:04:17	-- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:20:24.439   17:04:17	-- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:20:24.439   17:04:17	-- common/autotest_common.sh@866 -- # local nbd_name=nbd0
00:20:24.439   17:04:17	-- common/autotest_common.sh@867 -- # local i
00:20:24.439   17:04:17	-- common/autotest_common.sh@869 -- # (( i = 1 ))
00:20:24.439   17:04:17	-- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:20:24.439   17:04:17	-- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions
00:20:24.439   17:04:17	-- common/autotest_common.sh@871 -- # break
00:20:24.439   17:04:17	-- common/autotest_common.sh@882 -- # (( i = 1 ))
00:20:24.440   17:04:17	-- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:20:24.440   17:04:17	-- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:20:24.440  1+0 records in
00:20:24.440  1+0 records out
00:20:24.440  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000874608 s, 4.7 MB/s
00:20:24.440    17:04:17	-- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:20:24.440   17:04:17	-- common/autotest_common.sh@884 -- # size=4096
00:20:24.440   17:04:17	-- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:20:24.440   17:04:17	-- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:20:24.440   17:04:17	-- common/autotest_common.sh@887 -- # return 0
00:20:24.440   17:04:17	-- bdev/nbd_common.sh@14 -- # (( i++ ))
00:20:24.440   17:04:17	-- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:20:24.440   17:04:17	-- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1
00:20:24.698  /dev/nbd1
00:20:24.698    17:04:17	-- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:20:24.698   17:04:17	-- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:20:24.698   17:04:17	-- common/autotest_common.sh@866 -- # local nbd_name=nbd1
00:20:24.698   17:04:17	-- common/autotest_common.sh@867 -- # local i
00:20:24.698   17:04:17	-- common/autotest_common.sh@869 -- # (( i = 1 ))
00:20:24.698   17:04:17	-- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:20:24.698   17:04:17	-- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions
00:20:24.698   17:04:17	-- common/autotest_common.sh@871 -- # break
00:20:24.698   17:04:17	-- common/autotest_common.sh@882 -- # (( i = 1 ))
00:20:24.698   17:04:17	-- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:20:24.698   17:04:17	-- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:20:24.698  1+0 records in
00:20:24.698  1+0 records out
00:20:24.698  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000747516 s, 5.5 MB/s
00:20:24.698    17:04:17	-- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:20:24.698   17:04:17	-- common/autotest_common.sh@884 -- # size=4096
00:20:24.698   17:04:17	-- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:20:24.698   17:04:17	-- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:20:24.698   17:04:17	-- common/autotest_common.sh@887 -- # return 0
00:20:24.698   17:04:17	-- bdev/nbd_common.sh@14 -- # (( i++ ))
00:20:24.698   17:04:17	-- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:20:24.698   17:04:17	-- bdev/bdev_raid.sh@688 -- # cmp -i 0 /dev/nbd0 /dev/nbd1
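
The payoff of the urandom fill comes here: BaseBdev1, which held a complete mirror copy before it was removed, and the rebuilt spare are both exported over NBD and compared byte-for-byte:

# Sketch of the integrity check above: any divergence between the original
# member and the rebuilt spare makes cmp exit non-zero and fail the test.
RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
$RPC nbd_start_disk BaseBdev1 /dev/nbd0
$RPC nbd_start_disk spare /dev/nbd1
cmp -i 0 /dev/nbd0 /dev/nbd1
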
00:20:24.957   17:04:17	-- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1'
00:20:24.957   17:04:17	-- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock
00:20:24.957   17:04:17	-- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:20:24.957   17:04:17	-- bdev/nbd_common.sh@50 -- # local nbd_list
00:20:24.957   17:04:17	-- bdev/nbd_common.sh@51 -- # local i
00:20:24.957   17:04:17	-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:20:24.957   17:04:17	-- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0
00:20:25.215    17:04:18	-- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:20:25.215   17:04:18	-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:20:25.215   17:04:18	-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:20:25.215   17:04:18	-- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:20:25.215   17:04:18	-- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:20:25.215   17:04:18	-- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:20:25.215   17:04:18	-- bdev/nbd_common.sh@41 -- # break
00:20:25.215   17:04:18	-- bdev/nbd_common.sh@45 -- # return 0
00:20:25.215   17:04:18	-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:20:25.215   17:04:18	-- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1
00:20:25.782    17:04:18	-- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:20:25.782   17:04:18	-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:20:25.782   17:04:18	-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:20:25.782   17:04:18	-- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:20:25.782   17:04:18	-- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:20:25.782   17:04:18	-- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:20:25.782   17:04:18	-- bdev/nbd_common.sh@41 -- # break
00:20:25.782   17:04:18	-- bdev/nbd_common.sh@45 -- # return 0
00:20:25.782   17:04:18	-- bdev/bdev_raid.sh@692 -- # '[' false = true ']'
00:20:25.782   17:04:18	-- bdev/bdev_raid.sh@709 -- # killprocess 134887
00:20:25.782   17:04:18	-- common/autotest_common.sh@936 -- # '[' -z 134887 ']'
00:20:25.782   17:04:18	-- common/autotest_common.sh@940 -- # kill -0 134887
00:20:25.782    17:04:18	-- common/autotest_common.sh@941 -- # uname
00:20:25.782   17:04:18	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:20:25.782    17:04:18	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 134887
00:20:25.782  killing process with pid 134887
00:20:25.782  Received shutdown signal, test time was about 60.000000 seconds
00:20:25.782  
00:20:25.782                                                                                                  Latency(us)
[2024-11-19T17:04:18.646Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
[2024-11-19T17:04:18.646Z]  ===================================================================================================================
[2024-11-19T17:04:18.646Z]  Total                       :                  0.00       0.00       0.00     0.00       0.00 18446744073709551616.00       0.00
00:20:25.782   17:04:18	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:20:25.782   17:04:18	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:20:25.782   17:04:18	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 134887'
00:20:25.782   17:04:18	-- common/autotest_common.sh@955 -- # kill 134887
00:20:25.782   17:04:18	-- common/autotest_common.sh@960 -- # wait 134887
00:20:25.782  [2024-11-19 17:04:18.401082] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:20:25.782  [2024-11-19 17:04:18.508293] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit
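The @936-@960 sequence above is autotest_common.sh's killprocess helper shutting down the previous raid_rebuild_test bdevperf (pid 134887). A condensed sketch keyed to the traced line numbers; the sudo branch body is an assumption, since this run took the plain-kill path:

    killprocess() {
        local pid=$1
        [[ -n $pid ]] || return 1                            # @936: no pid supplied
        kill -0 "$pid" || return 1                           # @940: not running any more
        local process_name
        if [[ $(uname) == Linux ]]; then                     # @941
            process_name=$(ps --no-headers -o comm= "$pid")  # @942: reactor_0 here
        fi
        if [[ $process_name == sudo ]]; then                 # @946
            pid=$(pgrep -P "$pid")                           # assumed: target sudo's child
        fi
        echo "killing process with pid $pid"                 # @954
        kill "$pid"                                          # @955
        wait "$pid"                                          # @960: reap, propagate status
    }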
00:20:26.433   17:04:18	-- bdev/bdev_raid.sh@711 -- # return 0
00:20:26.433  
00:20:26.433  real	0m26.209s
00:20:26.433  user	0m35.645s
00:20:26.433  sys	0m6.079s
00:20:26.433  ************************************
00:20:26.433  END TEST raid_rebuild_test
00:20:26.433  ************************************
00:20:26.433   17:04:18	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:20:26.433   17:04:18	-- common/autotest_common.sh@10 -- # set +x
00:20:26.433   17:04:19	-- bdev/bdev_raid.sh@736 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 4 true false
00:20:26.433   17:04:19	-- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']'
00:20:26.433   17:04:19	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:20:26.433   17:04:19	-- common/autotest_common.sh@10 -- # set +x
00:20:26.433  ************************************
00:20:26.433  START TEST raid_rebuild_test_sb
00:20:26.433  ************************************
00:20:26.433   17:04:19	-- common/autotest_common.sh@1114 -- # raid_rebuild_test raid1 4 true false
00:20:26.433   17:04:19	-- bdev/bdev_raid.sh@517 -- # local raid_level=raid1
00:20:26.433   17:04:19	-- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=4
00:20:26.433   17:04:19	-- bdev/bdev_raid.sh@519 -- # local superblock=true
00:20:26.433   17:04:19	-- bdev/bdev_raid.sh@520 -- # local background_io=false
00:20:26.434    17:04:19	-- bdev/bdev_raid.sh@521 -- # (( i = 1 ))
00:20:26.434    17:04:19	-- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs ))
00:20:26.434    17:04:19	-- bdev/bdev_raid.sh@521 -- # echo BaseBdev1
00:20:26.434    17:04:19	-- bdev/bdev_raid.sh@521 -- # (( i++ ))
00:20:26.434    17:04:19	-- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs ))
00:20:26.434    17:04:19	-- bdev/bdev_raid.sh@521 -- # echo BaseBdev2
00:20:26.434    17:04:19	-- bdev/bdev_raid.sh@521 -- # (( i++ ))
00:20:26.434    17:04:19	-- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs ))
00:20:26.434    17:04:19	-- bdev/bdev_raid.sh@521 -- # echo BaseBdev3
00:20:26.434    17:04:19	-- bdev/bdev_raid.sh@521 -- # (( i++ ))
00:20:26.434    17:04:19	-- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs ))
00:20:26.434    17:04:19	-- bdev/bdev_raid.sh@521 -- # echo BaseBdev4
00:20:26.434    17:04:19	-- bdev/bdev_raid.sh@521 -- # (( i++ ))
00:20:26.434    17:04:19	-- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs ))
00:20:26.434   17:04:19	-- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4')
00:20:26.434   17:04:19	-- bdev/bdev_raid.sh@521 -- # local base_bdevs
00:20:26.434   17:04:19	-- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1
00:20:26.434   17:04:19	-- bdev/bdev_raid.sh@523 -- # local strip_size
00:20:26.434   17:04:19	-- bdev/bdev_raid.sh@524 -- # local create_arg
00:20:26.434   17:04:19	-- bdev/bdev_raid.sh@525 -- # local raid_bdev_size
00:20:26.434   17:04:19	-- bdev/bdev_raid.sh@526 -- # local data_offset
00:20:26.434   17:04:19	-- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']'
00:20:26.434   17:04:19	-- bdev/bdev_raid.sh@536 -- # strip_size=0
00:20:26.434   17:04:19	-- bdev/bdev_raid.sh@539 -- # '[' true = true ']'
00:20:26.434   17:04:19	-- bdev/bdev_raid.sh@540 -- # create_arg+=' -s'
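Reconstructed from the @517-@540 trace, the raid_rebuild_test preamble for this invocation boils down to the snippet below. Treating -z as the strip-size flag on the striped branch is an assumption; this raid1 run never reaches it:

    raid_level=raid1 num_base_bdevs=4 superblock=true background_io=false
    base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo "BaseBdev$i"; done))  # @521
    create_arg=''
    if [[ $raid_level != raid1 ]]; then        # @528
        strip_size=64                          # assumed striped-level default
        create_arg+=" -z $strip_size"          # assumed flag
    else
        strip_size=0                           # @536: raid1 has no strip size
    fi
    [[ $superblock == true ]] && create_arg+=' -s'   # @539-@540: request on-disk superblock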
00:20:26.434   17:04:19	-- bdev/bdev_raid.sh@544 -- # raid_pid=135474
00:20:26.434   17:04:19	-- bdev/bdev_raid.sh@545 -- # waitforlisten 135474 /var/tmp/spdk-raid.sock
00:20:26.434   17:04:19	-- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid
00:20:26.434   17:04:19	-- common/autotest_common.sh@829 -- # '[' -z 135474 ']'
00:20:26.434   17:04:19	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock
00:20:26.434   17:04:19	-- common/autotest_common.sh@834 -- # local max_retries=100
00:20:26.434   17:04:19	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...'
00:20:26.434  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...
00:20:26.434   17:04:19	-- common/autotest_common.sh@838 -- # xtrace_disable
00:20:26.434   17:04:19	-- common/autotest_common.sh@10 -- # set +x
00:20:26.434  I/O size of 3145728 is greater than zero copy threshold (65536).
00:20:26.434  Zero copy mechanism will not be used.
00:20:26.434  [2024-11-19 17:04:19.126678] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:20:26.434  [2024-11-19 17:04:19.126947] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid135474 ]
00:20:26.700  [2024-11-19 17:04:19.274476] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:20:26.700  [2024-11-19 17:04:19.370329] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:20:26.700  [2024-11-19 17:04:19.464530] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:20:27.635   17:04:20	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:20:27.635   17:04:20	-- common/autotest_common.sh@862 -- # return 0
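waitforlisten (@829-@862) gates the test until the freshly forked bdevperf answers on the UNIX-domain RPC socket. A rough sketch under stated assumptions: the liveness probe, the probe RPC (rpc_get_methods) and the retry interval are guesses; only the pid/socket arguments and the echoed banner come from the trace:

    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}   # @833
        local max_retries=100                            # @834
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."  # @836
        while ((max_retries-- > 0)); do
            kill -0 "$pid" 2> /dev/null || return 1      # assumed: bail if bdevperf died
            if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_addr" \
                rpc_get_methods &> /dev/null; then       # assumed probe RPC
                return 0                                 # @862
            fi
            sleep 0.5                                    # assumed retry interval
        done
        return 1
    }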
00:20:27.635   17:04:20	-- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}"
00:20:27.635   17:04:20	-- bdev/bdev_raid.sh@549 -- # '[' true = true ']'
00:20:27.635   17:04:20	-- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc
00:20:27.893  BaseBdev1_malloc
00:20:27.893   17:04:20	-- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1
00:20:28.152  [2024-11-19 17:04:20.911650] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc
00:20:28.152  [2024-11-19 17:04:20.911800] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:20:28.152  [2024-11-19 17:04:20.911855] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000005a80
00:20:28.152  [2024-11-19 17:04:20.911920] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:20:28.152  [2024-11-19 17:04:20.915332] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:20:28.152  [2024-11-19 17:04:20.915412] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:20:28.152  BaseBdev1
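Each base device is a malloc bdev wrapped in a passthru bdev; the passthru layer is what later lets bdev_raid_remove_base_bdev detach a member by its public name while the malloc backing store survives. The @548-@551 loop condenses to (RPC invocations copied from the trace):

    rpc_py='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock'
    for bdev in "${base_bdevs[@]}"; do
        $rpc_py bdev_malloc_create 32 512 -b "${bdev}_malloc"        # 32 MiB, 512 B blocks
        $rpc_py bdev_passthru_create -b "${bdev}_malloc" -p "$bdev"  # claimable public name
    done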
00:20:28.152   17:04:20	-- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}"
00:20:28.152   17:04:20	-- bdev/bdev_raid.sh@549 -- # '[' true = true ']'
00:20:28.152   17:04:20	-- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc
00:20:28.411  BaseBdev2_malloc
00:20:28.670   17:04:21	-- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2
00:20:28.929  [2024-11-19 17:04:21.562146] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc
00:20:28.929  [2024-11-19 17:04:21.562310] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:20:28.929  [2024-11-19 17:04:21.562363] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680
00:20:28.929  [2024-11-19 17:04:21.562421] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:20:28.929  [2024-11-19 17:04:21.565663] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:20:28.929  [2024-11-19 17:04:21.565755] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2
00:20:28.929  BaseBdev2
00:20:28.929   17:04:21	-- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}"
00:20:28.929   17:04:21	-- bdev/bdev_raid.sh@549 -- # '[' true = true ']'
00:20:28.929   17:04:21	-- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc
00:20:29.187  BaseBdev3_malloc
00:20:29.187   17:04:21	-- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3
00:20:29.446  [2024-11-19 17:04:22.144350] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc
00:20:29.446  [2024-11-19 17:04:22.144506] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:20:29.446  [2024-11-19 17:04:22.144573] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
00:20:29.446  [2024-11-19 17:04:22.144656] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:20:29.446  [2024-11-19 17:04:22.148430] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:20:29.446  [2024-11-19 17:04:22.148558] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3
00:20:29.446  BaseBdev3
00:20:29.446   17:04:22	-- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}"
00:20:29.446   17:04:22	-- bdev/bdev_raid.sh@549 -- # '[' true = true ']'
00:20:29.446   17:04:22	-- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc
00:20:29.704  BaseBdev4_malloc
00:20:29.704   17:04:22	-- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4
00:20:29.962  [2024-11-19 17:04:22.796240] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc
00:20:29.962  [2024-11-19 17:04:22.796373] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:20:29.962  [2024-11-19 17:04:22.796415] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80
00:20:29.962  [2024-11-19 17:04:22.796461] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:20:29.962  [2024-11-19 17:04:22.799612] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:20:29.962  [2024-11-19 17:04:22.799761] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4
00:20:29.962  BaseBdev4
00:20:30.221   17:04:22	-- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc
00:20:30.479  spare_malloc
00:20:30.479   17:04:23	-- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000
00:20:30.479  spare_delay
00:20:30.737   17:04:23	-- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare
00:20:30.737  [2024-11-19 17:04:23.550468] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay
00:20:30.737  [2024-11-19 17:04:23.550583] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:20:30.737  [2024-11-19 17:04:23.550625] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080
00:20:30.737  [2024-11-19 17:04:23.550668] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:20:30.737  [2024-11-19 17:04:23.553497] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:20:30.737  [2024-11-19 17:04:23.553571] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare
00:20:30.737  spare
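The rebuild target is deliberately slow: malloc, then a delay bdev, then a passthru named spare. In bdev_delay_create the four values are average/p99 read latency (-r/-t) and average/p99 write latency (-w/-n) in microseconds, so every write to the spare costs about 100 ms, long enough for the rebuild to stay observable while it runs. RPCs as traced at @558-@560:

    rpc_py='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock'
    $rpc_py bdev_malloc_create 32 512 -b spare_malloc
    $rpc_py bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000
    $rpc_py bdev_passthru_create -b spare_delay -p spare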
00:20:30.737   17:04:23	-- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1
00:20:31.047  [2024-11-19 17:04:23.786625] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:20:31.047  [2024-11-19 17:04:23.789123] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:20:31.047  [2024-11-19 17:04:23.789207] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:20:31.047  [2024-11-19 17:04:23.789256] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed
00:20:31.047  [2024-11-19 17:04:23.789520] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009680
00:20:31.047  [2024-11-19 17:04:23.789540] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:20:31.047  [2024-11-19 17:04:23.789737] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600
00:20:31.047  [2024-11-19 17:04:23.790213] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009680
00:20:31.047  [2024-11-19 17:04:23.790233] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009680
00:20:31.047  [2024-11-19 17:04:23.790462] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
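A note on the sizes that follow: each base bdev is 32 MiB at 512 B blocks, i.e. 65536 blocks. With -s the raid reserves the first 2048 blocks of every member for its superblock, so 65536 - 2048 = 63488 usable blocks, which matches the blockcnt 63488 logged above and the data_offset 2048 / data_size 63488 reported per member in the JSON below.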
00:20:31.047   17:04:23	-- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4
00:20:31.047   17:04:23	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:20:31.047   17:04:23	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:20:31.047   17:04:23	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:20:31.047   17:04:23	-- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:20:31.047   17:04:23	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:20:31.047   17:04:23	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:20:31.047   17:04:23	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:20:31.047   17:04:23	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:20:31.047   17:04:23	-- bdev/bdev_raid.sh@125 -- # local tmp
00:20:31.047    17:04:23	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:20:31.047    17:04:23	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:20:31.306   17:04:24	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:20:31.306    "name": "raid_bdev1",
00:20:31.306    "uuid": "05e0f9bf-bfde-4ec8-8f4d-a974806398ce",
00:20:31.306    "strip_size_kb": 0,
00:20:31.306    "state": "online",
00:20:31.306    "raid_level": "raid1",
00:20:31.306    "superblock": true,
00:20:31.306    "num_base_bdevs": 4,
00:20:31.306    "num_base_bdevs_discovered": 4,
00:20:31.306    "num_base_bdevs_operational": 4,
00:20:31.306    "base_bdevs_list": [
00:20:31.306      {
00:20:31.306        "name": "BaseBdev1",
00:20:31.306        "uuid": "6870804f-9304-5dda-9741-c7a8f6b75f3f",
00:20:31.306        "is_configured": true,
00:20:31.306        "data_offset": 2048,
00:20:31.306        "data_size": 63488
00:20:31.306      },
00:20:31.306      {
00:20:31.306        "name": "BaseBdev2",
00:20:31.306        "uuid": "94d504d6-69fa-58a9-a31b-61b8f85f3eb9",
00:20:31.306        "is_configured": true,
00:20:31.306        "data_offset": 2048,
00:20:31.306        "data_size": 63488
00:20:31.306      },
00:20:31.306      {
00:20:31.306        "name": "BaseBdev3",
00:20:31.306        "uuid": "b9ee3895-d9b8-5f8d-b842-30ff169b6dcf",
00:20:31.306        "is_configured": true,
00:20:31.306        "data_offset": 2048,
00:20:31.306        "data_size": 63488
00:20:31.306      },
00:20:31.306      {
00:20:31.306        "name": "BaseBdev4",
00:20:31.306        "uuid": "1175fb15-4ee2-5fde-9f02-17c616fd6f3d",
00:20:31.306        "is_configured": true,
00:20:31.306        "data_offset": 2048,
00:20:31.306        "data_size": 63488
00:20:31.306      }
00:20:31.306    ]
00:20:31.306  }'
00:20:31.306   17:04:24	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:20:31.306   17:04:24	-- common/autotest_common.sh@10 -- # set +x
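verify_raid_bdev_state (@117-@129) fetches the raid's JSON once and asserts on individual fields with jq. The fetch is copied from the trace; the exact assertion list past @127 is assumed, but it plausibly covers the fields the helper takes as arguments:

    rpc_py='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock'
    raid_bdev_info=$($rpc_py bdev_raid_get_bdevs all \
        | jq -r '.[] | select(.name == "raid_bdev1")')                    # @127
    [[ $(jq -r '.state' <<< "$raid_bdev_info") == online ]]               # assumed check
    [[ $(jq -r '.raid_level' <<< "$raid_bdev_info") == raid1 ]]           # assumed check
    (($(jq -r '.num_base_bdevs_discovered' <<< "$raid_bdev_info") == 4))  # assumed check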
00:20:31.871    17:04:24	-- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1
00:20:31.871    17:04:24	-- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks'
00:20:32.129  [2024-11-19 17:04:24.931135] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:20:32.129   17:04:24	-- bdev/bdev_raid.sh@567 -- # raid_bdev_size=63488
00:20:32.129    17:04:24	-- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset'
00:20:32.129    17:04:24	-- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:20:32.386   17:04:25	-- bdev/bdev_raid.sh@570 -- # data_offset=2048
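These two probes feed the integrity check at the end of the test: num_blocks is the raid's usable size in blocks, and data_offset locates where user data starts on each member (2048 blocks x 512 B = 1,048,576 B, the -i offset handed to cmp later):

    rpc_py='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock'
    raid_bdev_size=$($rpc_py bdev_get_bdevs -b raid_bdev1 | jq -r '.[].num_blocks')  # 63488
    data_offset=$($rpc_py bdev_raid_get_bdevs all \
        | jq -r '.[].base_bdevs_list[0].data_offset')                                # 2048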
00:20:32.386   17:04:25	-- bdev/bdev_raid.sh@572 -- # '[' false = true ']'
00:20:32.386   17:04:25	-- bdev/bdev_raid.sh@576 -- # local write_unit_size
00:20:32.386   17:04:25	-- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0
00:20:32.386   17:04:25	-- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock
00:20:32.386   17:04:25	-- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1')
00:20:32.386   17:04:25	-- bdev/nbd_common.sh@10 -- # local bdev_list
00:20:32.386   17:04:25	-- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0')
00:20:32.386   17:04:25	-- bdev/nbd_common.sh@11 -- # local nbd_list
00:20:32.386   17:04:25	-- bdev/nbd_common.sh@12 -- # local i
00:20:32.386   17:04:25	-- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:20:32.386   17:04:25	-- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:20:32.386   17:04:25	-- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0
00:20:32.643  [2024-11-19 17:04:25.403134] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000027a0
00:20:32.643  /dev/nbd0
00:20:32.643    17:04:25	-- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:20:32.643   17:04:25	-- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:20:32.643   17:04:25	-- common/autotest_common.sh@866 -- # local nbd_name=nbd0
00:20:32.644   17:04:25	-- common/autotest_common.sh@867 -- # local i
00:20:32.644   17:04:25	-- common/autotest_common.sh@869 -- # (( i = 1 ))
00:20:32.644   17:04:25	-- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:20:32.644   17:04:25	-- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions
00:20:32.644   17:04:25	-- common/autotest_common.sh@871 -- # break
00:20:32.644   17:04:25	-- common/autotest_common.sh@882 -- # (( i = 1 ))
00:20:32.644   17:04:25	-- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:20:32.644   17:04:25	-- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:20:32.644  1+0 records in
00:20:32.644  1+0 records out
00:20:32.644  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000494227 s, 8.3 MB/s
00:20:32.644    17:04:25	-- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:20:32.644   17:04:25	-- common/autotest_common.sh@884 -- # size=4096
00:20:32.644   17:04:25	-- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:20:32.644   17:04:25	-- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:20:32.644   17:04:25	-- common/autotest_common.sh@887 -- # return 0
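waitfornbd (@866-@887) is the start-side twin of waitfornbd_exit: wait for the node to appear in /proc/partitions, then prove it is actually readable by copying one 4 KiB block out and checking the copy is non-empty. Reconstruction from the traced line numbers; the poll intervals and the failure path are assumptions:

    waitfornbd() {
        local nbd_name=$1 i size
        for ((i = 1; i <= 20; i++)); do                       # @869
            grep -q -w "$nbd_name" /proc/partitions && break  # @870-@871: node exists
            sleep 0.1                                         # assumed
        done
        for ((i = 1; i <= 20; i++)); do                       # @882
            if dd if="/dev/$nbd_name" of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest \
                bs=4096 count=1 iflag=direct; then            # @883: one real direct read
                size=$(stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest)  # @884
                rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest               # @885
                [[ $size != 0 ]] && return 0                  # @886-@887
            fi
            sleep 0.1                                         # assumed
        done
        return 1                                              # assumed failure path
    }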
00:20:32.644   17:04:25	-- bdev/nbd_common.sh@14 -- # (( i++ ))
00:20:32.644   17:04:25	-- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:20:32.644   17:04:25	-- bdev/bdev_raid.sh@580 -- # '[' raid1 = raid5f ']'
00:20:32.644   17:04:25	-- bdev/bdev_raid.sh@584 -- # write_unit_size=1
00:20:32.644   17:04:25	-- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct
00:20:40.757  63488+0 records in
00:20:40.757  63488+0 records out
00:20:40.757  32505856 bytes (33 MB, 31 MiB) copied, 7.71066 s, 4.2 MB/s
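The fill pass writes the raid's entire addressable range one block at a time: 63488 writes x 512 B = 32,505,856 B, exactly the 33 MB / 31 MiB dd reports. write_unit_size stays 1 because this is raid1; the raid5f branch at @580 would presumably raise it to a full stripe so each dd write lands parity-consistent.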
00:20:40.757   17:04:33	-- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0
00:20:40.757   17:04:33	-- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock
00:20:40.757   17:04:33	-- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:20:40.757   17:04:33	-- bdev/nbd_common.sh@50 -- # local nbd_list
00:20:40.757   17:04:33	-- bdev/nbd_common.sh@51 -- # local i
00:20:40.757   17:04:33	-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:20:40.757   17:04:33	-- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0
00:20:40.757  [2024-11-19 17:04:33.450730] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:20:40.757    17:04:33	-- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:20:40.757   17:04:33	-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:20:40.757   17:04:33	-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:20:40.757   17:04:33	-- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:20:40.757   17:04:33	-- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:20:40.757   17:04:33	-- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:20:40.757   17:04:33	-- bdev/nbd_common.sh@41 -- # break
00:20:40.757   17:04:33	-- bdev/nbd_common.sh@45 -- # return 0
00:20:40.757   17:04:33	-- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1
00:20:41.015  [2024-11-19 17:04:33.658320] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:20:41.015   17:04:33	-- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3
00:20:41.015   17:04:33	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:20:41.015   17:04:33	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:20:41.015   17:04:33	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:20:41.015   17:04:33	-- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:20:41.015   17:04:33	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:20:41.015   17:04:33	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:20:41.015   17:04:33	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:20:41.015   17:04:33	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:20:41.015   17:04:33	-- bdev/bdev_raid.sh@125 -- # local tmp
00:20:41.015    17:04:33	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:20:41.015    17:04:33	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:20:41.274   17:04:33	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:20:41.274    "name": "raid_bdev1",
00:20:41.274    "uuid": "05e0f9bf-bfde-4ec8-8f4d-a974806398ce",
00:20:41.274    "strip_size_kb": 0,
00:20:41.274    "state": "online",
00:20:41.274    "raid_level": "raid1",
00:20:41.274    "superblock": true,
00:20:41.274    "num_base_bdevs": 4,
00:20:41.274    "num_base_bdevs_discovered": 3,
00:20:41.274    "num_base_bdevs_operational": 3,
00:20:41.274    "base_bdevs_list": [
00:20:41.274      {
00:20:41.274        "name": null,
00:20:41.274        "uuid": "00000000-0000-0000-0000-000000000000",
00:20:41.274        "is_configured": false,
00:20:41.274        "data_offset": 2048,
00:20:41.274        "data_size": 63488
00:20:41.274      },
00:20:41.274      {
00:20:41.274        "name": "BaseBdev2",
00:20:41.274        "uuid": "94d504d6-69fa-58a9-a31b-61b8f85f3eb9",
00:20:41.274        "is_configured": true,
00:20:41.274        "data_offset": 2048,
00:20:41.274        "data_size": 63488
00:20:41.274      },
00:20:41.274      {
00:20:41.274        "name": "BaseBdev3",
00:20:41.274        "uuid": "b9ee3895-d9b8-5f8d-b842-30ff169b6dcf",
00:20:41.274        "is_configured": true,
00:20:41.274        "data_offset": 2048,
00:20:41.274        "data_size": 63488
00:20:41.274      },
00:20:41.274      {
00:20:41.274        "name": "BaseBdev4",
00:20:41.274        "uuid": "1175fb15-4ee2-5fde-9f02-17c616fd6f3d",
00:20:41.274        "is_configured": true,
00:20:41.274        "data_offset": 2048,
00:20:41.274        "data_size": 63488
00:20:41.274      }
00:20:41.274    ]
00:20:41.274  }'
00:20:41.274   17:04:33	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:20:41.274   17:04:33	-- common/autotest_common.sh@10 -- # set +x
00:20:41.840   17:04:34	-- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare
00:20:41.840  [2024-11-19 17:04:34.678553] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare
00:20:41.840  [2024-11-19 17:04:34.678623] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:20:41.840  [2024-11-19 17:04:34.682454] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000c3e420
00:20:41.840  [2024-11-19 17:04:34.684891] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:20:42.098   17:04:34	-- bdev/bdev_raid.sh@598 -- # sleep 1
00:20:43.113   17:04:35	-- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:20:43.113   17:04:35	-- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:20:43.113   17:04:35	-- bdev/bdev_raid.sh@184 -- # local process_type=rebuild
00:20:43.113   17:04:35	-- bdev/bdev_raid.sh@185 -- # local target=spare
00:20:43.113   17:04:35	-- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:20:43.113    17:04:35	-- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:20:43.113    17:04:35	-- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:20:43.113   17:04:35	-- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:20:43.113    "name": "raid_bdev1",
00:20:43.113    "uuid": "05e0f9bf-bfde-4ec8-8f4d-a974806398ce",
00:20:43.113    "strip_size_kb": 0,
00:20:43.113    "state": "online",
00:20:43.113    "raid_level": "raid1",
00:20:43.113    "superblock": true,
00:20:43.113    "num_base_bdevs": 4,
00:20:43.113    "num_base_bdevs_discovered": 4,
00:20:43.113    "num_base_bdevs_operational": 4,
00:20:43.113    "process": {
00:20:43.113      "type": "rebuild",
00:20:43.113      "target": "spare",
00:20:43.113      "progress": {
00:20:43.113        "blocks": 22528,
00:20:43.113        "percent": 35
00:20:43.113      }
00:20:43.113    },
00:20:43.113    "base_bdevs_list": [
00:20:43.113      {
00:20:43.113        "name": "spare",
00:20:43.113        "uuid": "b6a607d0-bb94-581d-bf8b-c02291d3baac",
00:20:43.113        "is_configured": true,
00:20:43.113        "data_offset": 2048,
00:20:43.113        "data_size": 63488
00:20:43.113      },
00:20:43.113      {
00:20:43.113        "name": "BaseBdev2",
00:20:43.113        "uuid": "94d504d6-69fa-58a9-a31b-61b8f85f3eb9",
00:20:43.113        "is_configured": true,
00:20:43.113        "data_offset": 2048,
00:20:43.113        "data_size": 63488
00:20:43.113      },
00:20:43.113      {
00:20:43.113        "name": "BaseBdev3",
00:20:43.113        "uuid": "b9ee3895-d9b8-5f8d-b842-30ff169b6dcf",
00:20:43.113        "is_configured": true,
00:20:43.113        "data_offset": 2048,
00:20:43.113        "data_size": 63488
00:20:43.113      },
00:20:43.113      {
00:20:43.113        "name": "BaseBdev4",
00:20:43.113        "uuid": "1175fb15-4ee2-5fde-9f02-17c616fd6f3d",
00:20:43.113        "is_configured": true,
00:20:43.113        "data_offset": 2048,
00:20:43.113        "data_size": 63488
00:20:43.113      }
00:20:43.113    ]
00:20:43.113  }'
00:20:43.113    17:04:35	-- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:20:43.113   17:04:35	-- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:20:43.113    17:04:35	-- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:20:43.371   17:04:36	-- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]]
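verify_raid_bdev_process (@183-@191) asserts on the process object that bdev_raid_get_bdevs publishes while a rebuild runs; passing none/none, as done elsewhere in this test, asserts that no process is active. Sketch matching the jq expressions in the trace, with rpc_py defined as in the earlier sketches:

    verify_raid_bdev_process() {
        local raid_bdev_name=$1 process_type=$2 target=$3 raid_bdev_info
        raid_bdev_info=$($rpc_py bdev_raid_get_bdevs all \
            | jq -r ".[] | select(.name == \"$raid_bdev_name\")")                          # @188
        [[ $(jq -r '.process.type // "none"' <<< "$raid_bdev_info") == "$process_type" ]]  # @190
        [[ $(jq -r '.process.target // "none"' <<< "$raid_bdev_info") == "$target" ]]      # @191
    }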
00:20:43.371   17:04:36	-- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare
00:20:43.629  [2024-11-19 17:04:36.240248] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:20:43.629  [2024-11-19 17:04:36.296096] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device
00:20:43.629  [2024-11-19 17:04:36.296214] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:20:43.629   17:04:36	-- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3
00:20:43.629   17:04:36	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:20:43.629   17:04:36	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:20:43.629   17:04:36	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:20:43.629   17:04:36	-- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:20:43.629   17:04:36	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:20:43.629   17:04:36	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:20:43.629   17:04:36	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:20:43.629   17:04:36	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:20:43.629   17:04:36	-- bdev/bdev_raid.sh@125 -- # local tmp
00:20:43.629    17:04:36	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:20:43.629    17:04:36	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:20:43.887   17:04:36	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:20:43.887    "name": "raid_bdev1",
00:20:43.887    "uuid": "05e0f9bf-bfde-4ec8-8f4d-a974806398ce",
00:20:43.887    "strip_size_kb": 0,
00:20:43.887    "state": "online",
00:20:43.887    "raid_level": "raid1",
00:20:43.887    "superblock": true,
00:20:43.887    "num_base_bdevs": 4,
00:20:43.887    "num_base_bdevs_discovered": 3,
00:20:43.887    "num_base_bdevs_operational": 3,
00:20:43.887    "base_bdevs_list": [
00:20:43.887      {
00:20:43.887        "name": null,
00:20:43.887        "uuid": "00000000-0000-0000-0000-000000000000",
00:20:43.887        "is_configured": false,
00:20:43.887        "data_offset": 2048,
00:20:43.887        "data_size": 63488
00:20:43.887      },
00:20:43.887      {
00:20:43.887        "name": "BaseBdev2",
00:20:43.887        "uuid": "94d504d6-69fa-58a9-a31b-61b8f85f3eb9",
00:20:43.887        "is_configured": true,
00:20:43.887        "data_offset": 2048,
00:20:43.887        "data_size": 63488
00:20:43.887      },
00:20:43.887      {
00:20:43.887        "name": "BaseBdev3",
00:20:43.887        "uuid": "b9ee3895-d9b8-5f8d-b842-30ff169b6dcf",
00:20:43.887        "is_configured": true,
00:20:43.887        "data_offset": 2048,
00:20:43.887        "data_size": 63488
00:20:43.887      },
00:20:43.887      {
00:20:43.887        "name": "BaseBdev4",
00:20:43.887        "uuid": "1175fb15-4ee2-5fde-9f02-17c616fd6f3d",
00:20:43.887        "is_configured": true,
00:20:43.887        "data_offset": 2048,
00:20:43.887        "data_size": 63488
00:20:43.887      }
00:20:43.887    ]
00:20:43.887  }'
00:20:43.887   17:04:36	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:20:43.887   17:04:36	-- common/autotest_common.sh@10 -- # set +x
00:20:44.453   17:04:37	-- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none
00:20:44.453   17:04:37	-- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:20:44.453   17:04:37	-- bdev/bdev_raid.sh@184 -- # local process_type=none
00:20:44.453   17:04:37	-- bdev/bdev_raid.sh@185 -- # local target=none
00:20:44.453   17:04:37	-- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:20:44.453    17:04:37	-- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:20:44.453    17:04:37	-- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:20:44.711   17:04:37	-- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:20:44.711    "name": "raid_bdev1",
00:20:44.711    "uuid": "05e0f9bf-bfde-4ec8-8f4d-a974806398ce",
00:20:44.711    "strip_size_kb": 0,
00:20:44.711    "state": "online",
00:20:44.711    "raid_level": "raid1",
00:20:44.711    "superblock": true,
00:20:44.711    "num_base_bdevs": 4,
00:20:44.711    "num_base_bdevs_discovered": 3,
00:20:44.711    "num_base_bdevs_operational": 3,
00:20:44.711    "base_bdevs_list": [
00:20:44.711      {
00:20:44.711        "name": null,
00:20:44.711        "uuid": "00000000-0000-0000-0000-000000000000",
00:20:44.711        "is_configured": false,
00:20:44.711        "data_offset": 2048,
00:20:44.711        "data_size": 63488
00:20:44.711      },
00:20:44.711      {
00:20:44.711        "name": "BaseBdev2",
00:20:44.711        "uuid": "94d504d6-69fa-58a9-a31b-61b8f85f3eb9",
00:20:44.711        "is_configured": true,
00:20:44.711        "data_offset": 2048,
00:20:44.711        "data_size": 63488
00:20:44.711      },
00:20:44.711      {
00:20:44.711        "name": "BaseBdev3",
00:20:44.711        "uuid": "b9ee3895-d9b8-5f8d-b842-30ff169b6dcf",
00:20:44.711        "is_configured": true,
00:20:44.711        "data_offset": 2048,
00:20:44.711        "data_size": 63488
00:20:44.711      },
00:20:44.711      {
00:20:44.711        "name": "BaseBdev4",
00:20:44.711        "uuid": "1175fb15-4ee2-5fde-9f02-17c616fd6f3d",
00:20:44.711        "is_configured": true,
00:20:44.711        "data_offset": 2048,
00:20:44.711        "data_size": 63488
00:20:44.711      }
00:20:44.712    ]
00:20:44.712  }'
00:20:44.712    17:04:37	-- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:20:44.712   17:04:37	-- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]]
00:20:44.712    17:04:37	-- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:20:44.712   17:04:37	-- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]]
00:20:44.712   17:04:37	-- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare
00:20:44.970  [2024-11-19 17:04:37.769024] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare
00:20:44.970  [2024-11-19 17:04:37.769084] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:20:44.970  [2024-11-19 17:04:37.772737] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000c3e5c0
00:20:44.970  [2024-11-19 17:04:37.775107] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:20:44.970   17:04:37	-- bdev/bdev_raid.sh@614 -- # sleep 1
00:20:46.344   17:04:38	-- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:20:46.344   17:04:38	-- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:20:46.344   17:04:38	-- bdev/bdev_raid.sh@184 -- # local process_type=rebuild
00:20:46.344   17:04:38	-- bdev/bdev_raid.sh@185 -- # local target=spare
00:20:46.344   17:04:38	-- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:20:46.344    17:04:38	-- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:20:46.344    17:04:38	-- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:20:46.344   17:04:39	-- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:20:46.344    "name": "raid_bdev1",
00:20:46.344    "uuid": "05e0f9bf-bfde-4ec8-8f4d-a974806398ce",
00:20:46.344    "strip_size_kb": 0,
00:20:46.344    "state": "online",
00:20:46.344    "raid_level": "raid1",
00:20:46.344    "superblock": true,
00:20:46.344    "num_base_bdevs": 4,
00:20:46.344    "num_base_bdevs_discovered": 4,
00:20:46.344    "num_base_bdevs_operational": 4,
00:20:46.344    "process": {
00:20:46.344      "type": "rebuild",
00:20:46.344      "target": "spare",
00:20:46.344      "progress": {
00:20:46.344        "blocks": 24576,
00:20:46.344        "percent": 38
00:20:46.344      }
00:20:46.344    },
00:20:46.344    "base_bdevs_list": [
00:20:46.344      {
00:20:46.344        "name": "spare",
00:20:46.344        "uuid": "b6a607d0-bb94-581d-bf8b-c02291d3baac",
00:20:46.344        "is_configured": true,
00:20:46.344        "data_offset": 2048,
00:20:46.344        "data_size": 63488
00:20:46.345      },
00:20:46.345      {
00:20:46.345        "name": "BaseBdev2",
00:20:46.345        "uuid": "94d504d6-69fa-58a9-a31b-61b8f85f3eb9",
00:20:46.345        "is_configured": true,
00:20:46.345        "data_offset": 2048,
00:20:46.345        "data_size": 63488
00:20:46.345      },
00:20:46.345      {
00:20:46.345        "name": "BaseBdev3",
00:20:46.345        "uuid": "b9ee3895-d9b8-5f8d-b842-30ff169b6dcf",
00:20:46.345        "is_configured": true,
00:20:46.345        "data_offset": 2048,
00:20:46.345        "data_size": 63488
00:20:46.345      },
00:20:46.345      {
00:20:46.345        "name": "BaseBdev4",
00:20:46.345        "uuid": "1175fb15-4ee2-5fde-9f02-17c616fd6f3d",
00:20:46.345        "is_configured": true,
00:20:46.345        "data_offset": 2048,
00:20:46.345        "data_size": 63488
00:20:46.345      }
00:20:46.345    ]
00:20:46.345  }'
00:20:46.345    17:04:39	-- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:20:46.345   17:04:39	-- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:20:46.345    17:04:39	-- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:20:46.345   17:04:39	-- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]]
00:20:46.345   17:04:39	-- bdev/bdev_raid.sh@617 -- # '[' true = true ']'
00:20:46.345   17:04:39	-- bdev/bdev_raid.sh@617 -- # '[' = false ']'
00:20:46.345  /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 617: [: =: unary operator expected
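The "unary operator expected" message above is a genuine script bug captured by the run, not a test failure: bdev_raid.sh line 617 compares an unset variable, so '[' $var = false ']' collapses to '[' = false ']' once the empty expansion is word-split away, test(1) errors out, and the condition simply falls through to the else path. Quoting the operand, or using [[ ]], would avoid it (variable name hypothetical):

    [ "${var:-}" = false ]    # empty string still yields two operands
    [[ $var == false ]]       # [[ ]] never word-splits its operands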
00:20:46.345   17:04:39	-- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=4
00:20:46.345   17:04:39	-- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']'
00:20:46.345   17:04:39	-- bdev/bdev_raid.sh@644 -- # '[' 4 -gt 2 ']'
00:20:46.345   17:04:39	-- bdev/bdev_raid.sh@646 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2
00:20:46.604  [2024-11-19 17:04:39.396634] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:20:46.862  [2024-11-19 17:04:39.485151] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000c3e5c0
00:20:46.862   17:04:39	-- bdev/bdev_raid.sh@649 -- # base_bdevs[1]=
00:20:46.862   17:04:39	-- bdev/bdev_raid.sh@650 -- # (( num_base_bdevs_operational-- ))
00:20:46.862   17:04:39	-- bdev/bdev_raid.sh@653 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:20:46.862   17:04:39	-- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:20:46.862   17:04:39	-- bdev/bdev_raid.sh@184 -- # local process_type=rebuild
00:20:46.862   17:04:39	-- bdev/bdev_raid.sh@185 -- # local target=spare
00:20:46.862   17:04:39	-- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:20:46.862    17:04:39	-- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:20:46.862    17:04:39	-- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:20:47.120   17:04:39	-- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:20:47.120    "name": "raid_bdev1",
00:20:47.120    "uuid": "05e0f9bf-bfde-4ec8-8f4d-a974806398ce",
00:20:47.120    "strip_size_kb": 0,
00:20:47.120    "state": "online",
00:20:47.120    "raid_level": "raid1",
00:20:47.121    "superblock": true,
00:20:47.121    "num_base_bdevs": 4,
00:20:47.121    "num_base_bdevs_discovered": 3,
00:20:47.121    "num_base_bdevs_operational": 3,
00:20:47.121    "process": {
00:20:47.121      "type": "rebuild",
00:20:47.121      "target": "spare",
00:20:47.121      "progress": {
00:20:47.121        "blocks": 40960,
00:20:47.121        "percent": 64
00:20:47.121      }
00:20:47.121    },
00:20:47.121    "base_bdevs_list": [
00:20:47.121      {
00:20:47.121        "name": "spare",
00:20:47.121        "uuid": "b6a607d0-bb94-581d-bf8b-c02291d3baac",
00:20:47.121        "is_configured": true,
00:20:47.121        "data_offset": 2048,
00:20:47.121        "data_size": 63488
00:20:47.121      },
00:20:47.121      {
00:20:47.121        "name": null,
00:20:47.121        "uuid": "00000000-0000-0000-0000-000000000000",
00:20:47.121        "is_configured": false,
00:20:47.121        "data_offset": 2048,
00:20:47.121        "data_size": 63488
00:20:47.121      },
00:20:47.121      {
00:20:47.121        "name": "BaseBdev3",
00:20:47.121        "uuid": "b9ee3895-d9b8-5f8d-b842-30ff169b6dcf",
00:20:47.121        "is_configured": true,
00:20:47.121        "data_offset": 2048,
00:20:47.121        "data_size": 63488
00:20:47.121      },
00:20:47.121      {
00:20:47.121        "name": "BaseBdev4",
00:20:47.121        "uuid": "1175fb15-4ee2-5fde-9f02-17c616fd6f3d",
00:20:47.121        "is_configured": true,
00:20:47.121        "data_offset": 2048,
00:20:47.121        "data_size": 63488
00:20:47.121      }
00:20:47.121    ]
00:20:47.121  }'
00:20:47.121    17:04:39	-- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:20:47.121   17:04:39	-- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:20:47.121    17:04:39	-- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:20:47.379   17:04:40	-- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]]
00:20:47.380   17:04:40	-- bdev/bdev_raid.sh@657 -- # local timeout=481
00:20:47.380   17:04:40	-- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout ))
00:20:47.380   17:04:40	-- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:20:47.380   17:04:40	-- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:20:47.380   17:04:40	-- bdev/bdev_raid.sh@184 -- # local process_type=rebuild
00:20:47.380   17:04:40	-- bdev/bdev_raid.sh@185 -- # local target=spare
00:20:47.380   17:04:40	-- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:20:47.380    17:04:40	-- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:20:47.380    17:04:40	-- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:20:47.639   17:04:40	-- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:20:47.639    "name": "raid_bdev1",
00:20:47.639    "uuid": "05e0f9bf-bfde-4ec8-8f4d-a974806398ce",
00:20:47.639    "strip_size_kb": 0,
00:20:47.639    "state": "online",
00:20:47.639    "raid_level": "raid1",
00:20:47.639    "superblock": true,
00:20:47.639    "num_base_bdevs": 4,
00:20:47.639    "num_base_bdevs_discovered": 3,
00:20:47.639    "num_base_bdevs_operational": 3,
00:20:47.639    "process": {
00:20:47.639      "type": "rebuild",
00:20:47.639      "target": "spare",
00:20:47.639      "progress": {
00:20:47.639        "blocks": 49152,
00:20:47.639        "percent": 77
00:20:47.639      }
00:20:47.639    },
00:20:47.639    "base_bdevs_list": [
00:20:47.639      {
00:20:47.639        "name": "spare",
00:20:47.639        "uuid": "b6a607d0-bb94-581d-bf8b-c02291d3baac",
00:20:47.639        "is_configured": true,
00:20:47.639        "data_offset": 2048,
00:20:47.639        "data_size": 63488
00:20:47.639      },
00:20:47.639      {
00:20:47.639        "name": null,
00:20:47.639        "uuid": "00000000-0000-0000-0000-000000000000",
00:20:47.639        "is_configured": false,
00:20:47.639        "data_offset": 2048,
00:20:47.639        "data_size": 63488
00:20:47.639      },
00:20:47.639      {
00:20:47.639        "name": "BaseBdev3",
00:20:47.639        "uuid": "b9ee3895-d9b8-5f8d-b842-30ff169b6dcf",
00:20:47.639        "is_configured": true,
00:20:47.639        "data_offset": 2048,
00:20:47.639        "data_size": 63488
00:20:47.639      },
00:20:47.639      {
00:20:47.639        "name": "BaseBdev4",
00:20:47.639        "uuid": "1175fb15-4ee2-5fde-9f02-17c616fd6f3d",
00:20:47.639        "is_configured": true,
00:20:47.639        "data_offset": 2048,
00:20:47.639        "data_size": 63488
00:20:47.639      }
00:20:47.639    ]
00:20:47.639  }'
00:20:47.639    17:04:40	-- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:20:47.639   17:04:40	-- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:20:47.639    17:04:40	-- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:20:47.639   17:04:40	-- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]]
00:20:47.639   17:04:40	-- bdev/bdev_raid.sh@662 -- # sleep 1
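The loop above (@657-@662) re-reads the raid JSON once per second for at most 481 s and leaves via break (@660) as soon as process.type stops reporting rebuild; the test then confirms the process object is gone and the array is online with 3 of 4 members. Skeleton reconstructed from the trace; only the on-timeout failure handling is assumed:

    timeout=481                                                      # @657, seconds
    while ((SECONDS < timeout)); do                                  # @658
        verify_raid_bdev_process raid_bdev1 rebuild spare || break   # @659-@660
        sleep 1                                                      # @662
    done
    ((SECONDS < timeout))                                            # assumed: fail on timeout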
00:20:48.207  [2024-11-19 17:04:40.895765] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1
00:20:48.207  [2024-11-19 17:04:40.895895] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1
00:20:48.207  [2024-11-19 17:04:40.896080] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:20:48.776   17:04:41	-- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout ))
00:20:48.776   17:04:41	-- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:20:48.776   17:04:41	-- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:20:48.776   17:04:41	-- bdev/bdev_raid.sh@184 -- # local process_type=rebuild
00:20:48.776   17:04:41	-- bdev/bdev_raid.sh@185 -- # local target=spare
00:20:48.776   17:04:41	-- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:20:48.776    17:04:41	-- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:20:48.776    17:04:41	-- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:20:49.035   17:04:41	-- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:20:49.035    "name": "raid_bdev1",
00:20:49.035    "uuid": "05e0f9bf-bfde-4ec8-8f4d-a974806398ce",
00:20:49.035    "strip_size_kb": 0,
00:20:49.035    "state": "online",
00:20:49.035    "raid_level": "raid1",
00:20:49.035    "superblock": true,
00:20:49.035    "num_base_bdevs": 4,
00:20:49.035    "num_base_bdevs_discovered": 3,
00:20:49.035    "num_base_bdevs_operational": 3,
00:20:49.035    "base_bdevs_list": [
00:20:49.035      {
00:20:49.035        "name": "spare",
00:20:49.035        "uuid": "b6a607d0-bb94-581d-bf8b-c02291d3baac",
00:20:49.035        "is_configured": true,
00:20:49.035        "data_offset": 2048,
00:20:49.035        "data_size": 63488
00:20:49.035      },
00:20:49.035      {
00:20:49.035        "name": null,
00:20:49.035        "uuid": "00000000-0000-0000-0000-000000000000",
00:20:49.035        "is_configured": false,
00:20:49.035        "data_offset": 2048,
00:20:49.035        "data_size": 63488
00:20:49.035      },
00:20:49.035      {
00:20:49.035        "name": "BaseBdev3",
00:20:49.035        "uuid": "b9ee3895-d9b8-5f8d-b842-30ff169b6dcf",
00:20:49.035        "is_configured": true,
00:20:49.035        "data_offset": 2048,
00:20:49.035        "data_size": 63488
00:20:49.035      },
00:20:49.035      {
00:20:49.035        "name": "BaseBdev4",
00:20:49.035        "uuid": "1175fb15-4ee2-5fde-9f02-17c616fd6f3d",
00:20:49.035        "is_configured": true,
00:20:49.035        "data_offset": 2048,
00:20:49.035        "data_size": 63488
00:20:49.035      }
00:20:49.035    ]
00:20:49.035  }'
00:20:49.035    17:04:41	-- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:20:49.035   17:04:41	-- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]]
00:20:49.035    17:04:41	-- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:20:49.035   17:04:41	-- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]]
00:20:49.035   17:04:41	-- bdev/bdev_raid.sh@660 -- # break
00:20:49.035   17:04:41	-- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none
00:20:49.035   17:04:41	-- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:20:49.035   17:04:41	-- bdev/bdev_raid.sh@184 -- # local process_type=none
00:20:49.035   17:04:41	-- bdev/bdev_raid.sh@185 -- # local target=none
00:20:49.035   17:04:41	-- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:20:49.035    17:04:41	-- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:20:49.035    17:04:41	-- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:20:49.295   17:04:41	-- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:20:49.295    "name": "raid_bdev1",
00:20:49.295    "uuid": "05e0f9bf-bfde-4ec8-8f4d-a974806398ce",
00:20:49.295    "strip_size_kb": 0,
00:20:49.295    "state": "online",
00:20:49.295    "raid_level": "raid1",
00:20:49.295    "superblock": true,
00:20:49.295    "num_base_bdevs": 4,
00:20:49.295    "num_base_bdevs_discovered": 3,
00:20:49.295    "num_base_bdevs_operational": 3,
00:20:49.295    "base_bdevs_list": [
00:20:49.295      {
00:20:49.295        "name": "spare",
00:20:49.295        "uuid": "b6a607d0-bb94-581d-bf8b-c02291d3baac",
00:20:49.295        "is_configured": true,
00:20:49.295        "data_offset": 2048,
00:20:49.295        "data_size": 63488
00:20:49.295      },
00:20:49.295      {
00:20:49.295        "name": null,
00:20:49.295        "uuid": "00000000-0000-0000-0000-000000000000",
00:20:49.295        "is_configured": false,
00:20:49.295        "data_offset": 2048,
00:20:49.295        "data_size": 63488
00:20:49.295      },
00:20:49.295      {
00:20:49.295        "name": "BaseBdev3",
00:20:49.295        "uuid": "b9ee3895-d9b8-5f8d-b842-30ff169b6dcf",
00:20:49.295        "is_configured": true,
00:20:49.295        "data_offset": 2048,
00:20:49.295        "data_size": 63488
00:20:49.295      },
00:20:49.295      {
00:20:49.295        "name": "BaseBdev4",
00:20:49.295        "uuid": "1175fb15-4ee2-5fde-9f02-17c616fd6f3d",
00:20:49.295        "is_configured": true,
00:20:49.295        "data_offset": 2048,
00:20:49.295        "data_size": 63488
00:20:49.295      }
00:20:49.295    ]
00:20:49.295  }'
00:20:49.295    17:04:41	-- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:20:49.295   17:04:41	-- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]]
00:20:49.295    17:04:41	-- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:20:49.295   17:04:42	-- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]]
00:20:49.295   17:04:42	-- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3
00:20:49.295   17:04:42	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:20:49.295   17:04:42	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:20:49.295   17:04:42	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:20:49.295   17:04:42	-- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:20:49.295   17:04:42	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:20:49.295   17:04:42	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:20:49.295   17:04:42	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:20:49.295   17:04:42	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:20:49.295   17:04:42	-- bdev/bdev_raid.sh@125 -- # local tmp
00:20:49.295    17:04:42	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:20:49.295    17:04:42	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:20:49.555   17:04:42	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:20:49.555    "name": "raid_bdev1",
00:20:49.555    "uuid": "05e0f9bf-bfde-4ec8-8f4d-a974806398ce",
00:20:49.555    "strip_size_kb": 0,
00:20:49.555    "state": "online",
00:20:49.555    "raid_level": "raid1",
00:20:49.555    "superblock": true,
00:20:49.555    "num_base_bdevs": 4,
00:20:49.555    "num_base_bdevs_discovered": 3,
00:20:49.555    "num_base_bdevs_operational": 3,
00:20:49.555    "base_bdevs_list": [
00:20:49.555      {
00:20:49.555        "name": "spare",
00:20:49.555        "uuid": "b6a607d0-bb94-581d-bf8b-c02291d3baac",
00:20:49.555        "is_configured": true,
00:20:49.555        "data_offset": 2048,
00:20:49.555        "data_size": 63488
00:20:49.555      },
00:20:49.555      {
00:20:49.555        "name": null,
00:20:49.555        "uuid": "00000000-0000-0000-0000-000000000000",
00:20:49.555        "is_configured": false,
00:20:49.555        "data_offset": 2048,
00:20:49.555        "data_size": 63488
00:20:49.555      },
00:20:49.555      {
00:20:49.555        "name": "BaseBdev3",
00:20:49.555        "uuid": "b9ee3895-d9b8-5f8d-b842-30ff169b6dcf",
00:20:49.555        "is_configured": true,
00:20:49.555        "data_offset": 2048,
00:20:49.555        "data_size": 63488
00:20:49.555      },
00:20:49.555      {
00:20:49.555        "name": "BaseBdev4",
00:20:49.555        "uuid": "1175fb15-4ee2-5fde-9f02-17c616fd6f3d",
00:20:49.555        "is_configured": true,
00:20:49.555        "data_offset": 2048,
00:20:49.555        "data_size": 63488
00:20:49.555      }
00:20:49.555    ]
00:20:49.555  }'
00:20:49.555   17:04:42	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:20:49.555   17:04:42	-- common/autotest_common.sh@10 -- # set +x
00:20:50.490   17:04:42	-- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1
00:20:50.490  [2024-11-19 17:04:43.237099] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:20:50.490  [2024-11-19 17:04:43.237153] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:20:50.490  [2024-11-19 17:04:43.237266] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:20:50.490  [2024-11-19 17:04:43.237379] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:20:50.490  [2024-11-19 17:04:43.237391] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009680 name raid_bdev1, state offline
00:20:50.490    17:04:43	-- bdev/bdev_raid.sh@671 -- # jq length
00:20:50.490    17:04:43	-- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:20:50.748   17:04:43	-- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]]
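Teardown of the array itself, with a jq length check that the RPC now returns an empty list (@670-@671):

    rpc_py='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock'
    $rpc_py bdev_raid_delete raid_bdev1
    [[ $($rpc_py bdev_raid_get_bdevs all | jq length) == 0 ]]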
00:20:50.748   17:04:43	-- bdev/bdev_raid.sh@673 -- # '[' false = true ']'
00:20:50.748   17:04:43	-- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1'
00:20:50.748   17:04:43	-- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock
00:20:50.748   17:04:43	-- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare')
00:20:50.748   17:04:43	-- bdev/nbd_common.sh@10 -- # local bdev_list
00:20:50.748   17:04:43	-- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:20:50.748   17:04:43	-- bdev/nbd_common.sh@11 -- # local nbd_list
00:20:50.748   17:04:43	-- bdev/nbd_common.sh@12 -- # local i
00:20:50.748   17:04:43	-- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:20:50.748   17:04:43	-- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:20:50.748   17:04:43	-- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0
00:20:51.007  /dev/nbd0
00:20:51.007    17:04:43	-- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:20:51.007   17:04:43	-- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:20:51.007   17:04:43	-- common/autotest_common.sh@866 -- # local nbd_name=nbd0
00:20:51.007   17:04:43	-- common/autotest_common.sh@867 -- # local i
00:20:51.007   17:04:43	-- common/autotest_common.sh@869 -- # (( i = 1 ))
00:20:51.007   17:04:43	-- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:20:51.007   17:04:43	-- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions
00:20:51.007   17:04:43	-- common/autotest_common.sh@871 -- # break
00:20:51.007   17:04:43	-- common/autotest_common.sh@882 -- # (( i = 1 ))
00:20:51.007   17:04:43	-- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:20:51.007   17:04:43	-- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:20:51.007  1+0 records in
00:20:51.007  1+0 records out
00:20:51.007  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000485468 s, 8.4 MB/s
00:20:51.007    17:04:43	-- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:20:51.007   17:04:43	-- common/autotest_common.sh@884 -- # size=4096
00:20:51.007   17:04:43	-- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:20:51.007   17:04:43	-- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:20:51.007   17:04:43	-- common/autotest_common.sh@887 -- # return 0
00:20:51.007   17:04:43	-- bdev/nbd_common.sh@14 -- # (( i++ ))
00:20:51.007   17:04:43	-- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:20:51.007   17:04:43	-- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1
00:20:51.574  /dev/nbd1
00:20:51.574    17:04:44	-- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:20:51.574   17:04:44	-- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:20:51.574   17:04:44	-- common/autotest_common.sh@866 -- # local nbd_name=nbd1
00:20:51.574   17:04:44	-- common/autotest_common.sh@867 -- # local i
00:20:51.574   17:04:44	-- common/autotest_common.sh@869 -- # (( i = 1 ))
00:20:51.574   17:04:44	-- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:20:51.574   17:04:44	-- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions
00:20:51.574   17:04:44	-- common/autotest_common.sh@871 -- # break
00:20:51.574   17:04:44	-- common/autotest_common.sh@882 -- # (( i = 1 ))
00:20:51.574   17:04:44	-- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:20:51.574   17:04:44	-- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:20:51.574  1+0 records in
00:20:51.574  1+0 records out
00:20:51.574  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000405705 s, 10.1 MB/s
00:20:51.574    17:04:44	-- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:20:51.574   17:04:44	-- common/autotest_common.sh@884 -- # size=4096
00:20:51.574   17:04:44	-- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:20:51.574   17:04:44	-- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:20:51.574   17:04:44	-- common/autotest_common.sh@887 -- # return 0
00:20:51.574   17:04:44	-- bdev/nbd_common.sh@14 -- # (( i++ ))
00:20:51.574   17:04:44	-- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:20:51.574   17:04:44	-- bdev/bdev_raid.sh@688 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1
00:20:51.574   17:04:44	-- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1'
00:20:51.574   17:04:44	-- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock
00:20:51.574   17:04:44	-- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:20:51.574   17:04:44	-- bdev/nbd_common.sh@50 -- # local nbd_list
00:20:51.574   17:04:44	-- bdev/nbd_common.sh@51 -- # local i
00:20:51.574   17:04:44	-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:20:51.574   17:04:44	-- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0
00:20:51.832    17:04:44	-- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:20:51.832   17:04:44	-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:20:51.832   17:04:44	-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:20:51.832   17:04:44	-- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:20:51.832   17:04:44	-- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:20:51.832   17:04:44	-- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:20:51.832   17:04:44	-- bdev/nbd_common.sh@41 -- # break
00:20:51.832   17:04:44	-- bdev/nbd_common.sh@45 -- # return 0
00:20:51.832   17:04:44	-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:20:51.832   17:04:44	-- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1
00:20:52.091    17:04:44	-- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:20:52.091   17:04:44	-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:20:52.091   17:04:44	-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:20:52.091   17:04:44	-- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:20:52.091   17:04:44	-- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:20:52.091   17:04:44	-- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:20:52.091   17:04:44	-- bdev/nbd_common.sh@41 -- # break
00:20:52.091   17:04:44	-- bdev/nbd_common.sh@45 -- # return 0
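The nbd block above exports BaseBdev1 and the rebuilt spare as /dev/nbd0 and /dev/nbd1, waits for each device node with a direct-I/O read, compares their data regions, and unmaps them again. The -i 1048576 offset skips the first 1 MiB on both devices, i.e. the data_offset of 2048 blocks x 512 B where the raid superblock lives, so only user data is compared. Condensed, assuming the nbd kernel module is loaded:

    rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0
    rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1

    # waitfornbd: the device is usable once it appears in /proc/partitions
    # and a direct 4 KiB read succeeds.
    grep -q -w nbd0 /proc/partitions
    dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct

    # Member and spare must carry identical data past the superblock.
    cmp -i 1048576 /dev/nbd0 /dev/nbd1

    rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0
    rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1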
00:20:52.091   17:04:44	-- bdev/bdev_raid.sh@692 -- # '[' true = true ']'
00:20:52.091   17:04:44	-- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}"
00:20:52.091   17:04:44	-- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev1 ']'
00:20:52.091   17:04:44	-- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1
00:20:52.349   17:04:45	-- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1
00:20:52.608  [2024-11-19 17:04:45.325930] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc
00:20:52.608  [2024-11-19 17:04:45.326064] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:20:52.608  [2024-11-19 17:04:45.326117] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580
00:20:52.608  [2024-11-19 17:04:45.326157] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:20:52.608  [2024-11-19 17:04:45.328971] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:20:52.608  [2024-11-19 17:04:45.329082] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:20:52.608  [2024-11-19 17:04:45.329192] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev1
00:20:52.608  [2024-11-19 17:04:45.329261] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:20:52.608  BaseBdev1
00:20:52.608   17:04:45	-- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}"
00:20:52.608   17:04:45	-- bdev/bdev_raid.sh@695 -- # '[' -z '' ']'
00:20:52.608   17:04:45	-- bdev/bdev_raid.sh@696 -- # continue
00:20:52.608   17:04:45	-- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}"
00:20:52.608   17:04:45	-- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev3 ']'
00:20:52.608   17:04:45	-- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev3
00:20:52.867   17:04:45	-- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3
00:20:53.127  [2024-11-19 17:04:45.886039] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc
00:20:53.127  [2024-11-19 17:04:45.886189] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:20:53.127  [2024-11-19 17:04:45.886236] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80
00:20:53.127  [2024-11-19 17:04:45.886262] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:20:53.127  [2024-11-19 17:04:45.886737] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:20:53.127  [2024-11-19 17:04:45.886805] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3
00:20:53.127  [2024-11-19 17:04:45.886912] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev3
00:20:53.127  [2024-11-19 17:04:45.886926] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev3 (4) greater than existing raid bdev raid_bdev1 (1)
00:20:53.127  [2024-11-19 17:04:45.886934] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:20:53.127  [2024-11-19 17:04:45.886970] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ab80 name raid_bdev1, state configuring
00:20:53.127  [2024-11-19 17:04:45.887036] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:20:53.127  BaseBdev3
00:20:53.127   17:04:45	-- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}"
00:20:53.127   17:04:45	-- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev4 ']'
00:20:53.127   17:04:45	-- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev4
00:20:53.385   17:04:46	-- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4
00:20:53.644  [2024-11-19 17:04:46.454109] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc
00:20:53.644  [2024-11-19 17:04:46.454267] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:20:53.644  [2024-11-19 17:04:46.454317] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480
00:20:53.644  [2024-11-19 17:04:46.454348] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:20:53.644  [2024-11-19 17:04:46.454813] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:20:53.644  [2024-11-19 17:04:46.454889] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4
00:20:53.644  [2024-11-19 17:04:46.454977] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev4
00:20:53.644  [2024-11-19 17:04:46.455007] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed
00:20:53.644  BaseBdev4
00:20:53.644   17:04:46	-- bdev/bdev_raid.sh@701 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare
00:20:53.901   17:04:46	-- bdev/bdev_raid.sh@702 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare
00:20:54.159  [2024-11-19 17:04:47.010311] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay
00:20:54.159  [2024-11-19 17:04:47.010469] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:20:54.159  [2024-11-19 17:04:47.010515] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780
00:20:54.159  [2024-11-19 17:04:47.010550] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:20:54.159  [2024-11-19 17:04:47.011148] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:20:54.159  [2024-11-19 17:04:47.011216] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare
00:20:54.159  [2024-11-19 17:04:47.011323] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev spare
00:20:54.159  [2024-11-19 17:04:47.011364] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:20:54.159  spare
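The delete/create cycles above simulate base devices disappearing and coming back: each bdev_passthru_create re-exposes the underlying bdev, and the raid module's examine path finds the on-disk superblock and claims the device back into raid_bdev1. BaseBdev2 was removed earlier and is skipped (the empty-string '-z' test), and when BaseBdev3 returns with a newer superblock sequence number (4 vs 1), the half-assembled array is torn down and reassembled around it. One such cycle is just:

    # Drop the passthru layer, then recreate it on the same backing bdev;
    # examine logs "raid superblock found on bdev BaseBdev1" and re-claims it.
    rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1
    rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1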
00:20:54.416   17:04:47	-- bdev/bdev_raid.sh@704 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3
00:20:54.416   17:04:47	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:20:54.416   17:04:47	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:20:54.416   17:04:47	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:20:54.416   17:04:47	-- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:20:54.416   17:04:47	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:20:54.416   17:04:47	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:20:54.416   17:04:47	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:20:54.416   17:04:47	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:20:54.416   17:04:47	-- bdev/bdev_raid.sh@125 -- # local tmp
00:20:54.416    17:04:47	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:20:54.416    17:04:47	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:20:54.416  [2024-11-19 17:04:47.111503] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000b180
00:20:54.416  [2024-11-19 17:04:47.111547] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:20:54.416  [2024-11-19 17:04:47.111751] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000caf0b0
00:20:54.416  [2024-11-19 17:04:47.112242] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000b180
00:20:54.416  [2024-11-19 17:04:47.112268] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000b180
00:20:54.416  [2024-11-19 17:04:47.112413] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:20:54.673   17:04:47	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:20:54.673    "name": "raid_bdev1",
00:20:54.673    "uuid": "05e0f9bf-bfde-4ec8-8f4d-a974806398ce",
00:20:54.673    "strip_size_kb": 0,
00:20:54.673    "state": "online",
00:20:54.673    "raid_level": "raid1",
00:20:54.673    "superblock": true,
00:20:54.673    "num_base_bdevs": 4,
00:20:54.673    "num_base_bdevs_discovered": 3,
00:20:54.673    "num_base_bdevs_operational": 3,
00:20:54.673    "base_bdevs_list": [
00:20:54.673      {
00:20:54.673        "name": "spare",
00:20:54.673        "uuid": "b6a607d0-bb94-581d-bf8b-c02291d3baac",
00:20:54.673        "is_configured": true,
00:20:54.673        "data_offset": 2048,
00:20:54.673        "data_size": 63488
00:20:54.673      },
00:20:54.673      {
00:20:54.673        "name": null,
00:20:54.673        "uuid": "00000000-0000-0000-0000-000000000000",
00:20:54.673        "is_configured": false,
00:20:54.673        "data_offset": 2048,
00:20:54.673        "data_size": 63488
00:20:54.673      },
00:20:54.673      {
00:20:54.673        "name": "BaseBdev3",
00:20:54.673        "uuid": "b9ee3895-d9b8-5f8d-b842-30ff169b6dcf",
00:20:54.673        "is_configured": true,
00:20:54.673        "data_offset": 2048,
00:20:54.673        "data_size": 63488
00:20:54.673      },
00:20:54.673      {
00:20:54.673        "name": "BaseBdev4",
00:20:54.673        "uuid": "1175fb15-4ee2-5fde-9f02-17c616fd6f3d",
00:20:54.673        "is_configured": true,
00:20:54.673        "data_offset": 2048,
00:20:54.673        "data_size": 63488
00:20:54.673      }
00:20:54.673    ]
00:20:54.673  }'
00:20:54.673   17:04:47	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:20:54.673   17:04:47	-- common/autotest_common.sh@10 -- # set +x
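verify_raid_bdev_state then asserts the reassembled array against the expected values: online, raid1, strip size 0, three of four members discovered and operational. Hedging on the helper's exact internals, the check reduces to pulling the JSON once and testing its fields:

    info=$(rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
           | jq -r '.[] | select(.name == "raid_bdev1")')
    [[ $(jq -r '.state' <<<"$info") == online ]]
    [[ $(jq -r '.raid_level' <<<"$info") == raid1 ]]
    [[ $(jq -r '.num_base_bdevs_discovered' <<<"$info") -eq 3 ]]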
00:20:55.238   17:04:47	-- bdev/bdev_raid.sh@705 -- # verify_raid_bdev_process raid_bdev1 none none
00:20:55.238   17:04:47	-- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:20:55.238   17:04:47	-- bdev/bdev_raid.sh@184 -- # local process_type=none
00:20:55.238   17:04:47	-- bdev/bdev_raid.sh@185 -- # local target=none
00:20:55.238   17:04:47	-- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:20:55.238    17:04:47	-- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:20:55.238    17:04:47	-- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:20:55.238   17:04:48	-- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:20:55.238    "name": "raid_bdev1",
00:20:55.238    "uuid": "05e0f9bf-bfde-4ec8-8f4d-a974806398ce",
00:20:55.238    "strip_size_kb": 0,
00:20:55.238    "state": "online",
00:20:55.238    "raid_level": "raid1",
00:20:55.238    "superblock": true,
00:20:55.238    "num_base_bdevs": 4,
00:20:55.238    "num_base_bdevs_discovered": 3,
00:20:55.238    "num_base_bdevs_operational": 3,
00:20:55.238    "base_bdevs_list": [
00:20:55.238      {
00:20:55.238        "name": "spare",
00:20:55.238        "uuid": "b6a607d0-bb94-581d-bf8b-c02291d3baac",
00:20:55.238        "is_configured": true,
00:20:55.238        "data_offset": 2048,
00:20:55.238        "data_size": 63488
00:20:55.238      },
00:20:55.238      {
00:20:55.238        "name": null,
00:20:55.238        "uuid": "00000000-0000-0000-0000-000000000000",
00:20:55.238        "is_configured": false,
00:20:55.238        "data_offset": 2048,
00:20:55.238        "data_size": 63488
00:20:55.238      },
00:20:55.238      {
00:20:55.238        "name": "BaseBdev3",
00:20:55.238        "uuid": "b9ee3895-d9b8-5f8d-b842-30ff169b6dcf",
00:20:55.238        "is_configured": true,
00:20:55.238        "data_offset": 2048,
00:20:55.238        "data_size": 63488
00:20:55.238      },
00:20:55.238      {
00:20:55.238        "name": "BaseBdev4",
00:20:55.238        "uuid": "1175fb15-4ee2-5fde-9f02-17c616fd6f3d",
00:20:55.238        "is_configured": true,
00:20:55.238        "data_offset": 2048,
00:20:55.238        "data_size": 63488
00:20:55.238      }
00:20:55.238    ]
00:20:55.238  }'
00:20:55.238    17:04:48	-- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:20:55.497   17:04:48	-- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]]
00:20:55.497    17:04:48	-- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:20:55.497   17:04:48	-- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]]
00:20:55.497    17:04:48	-- bdev/bdev_raid.sh@706 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:20:55.497    17:04:48	-- bdev/bdev_raid.sh@706 -- # jq -r '.[].base_bdevs_list[0].name'
00:20:55.755   17:04:48	-- bdev/bdev_raid.sh@706 -- # [[ spare == \s\p\a\r\e ]]
00:20:55.755   17:04:48	-- bdev/bdev_raid.sh@709 -- # killprocess 135474
00:20:55.755   17:04:48	-- common/autotest_common.sh@936 -- # '[' -z 135474 ']'
00:20:55.755   17:04:48	-- common/autotest_common.sh@940 -- # kill -0 135474
00:20:55.755    17:04:48	-- common/autotest_common.sh@941 -- # uname
00:20:55.755   17:04:48	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:20:55.755    17:04:48	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 135474
00:20:55.755   17:04:48	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:20:55.755   17:04:48	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:20:55.755   17:04:48	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 135474'
00:20:55.755  killing process with pid 135474
00:20:55.755   17:04:48	-- common/autotest_common.sh@955 -- # kill 135474
00:20:55.755  Received shutdown signal, test time was about 60.000000 seconds
00:20:55.755                                                                                                  Latency(us)
[2024-11-19T17:04:48.619Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
[2024-11-19T17:04:48.619Z]  ===================================================================================================================
[2024-11-19T17:04:48.619Z]  Total                       :                  0.00       0.00       0.00     0.00       0.00 18446744073709551616.00       0.00
00:20:55.755  [2024-11-19 17:04:48.404703] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:20:55.755  [2024-11-19 17:04:48.404844] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:20:55.755   17:04:48	-- common/autotest_common.sh@960 -- # wait 135474
00:20:55.755  [2024-11-19 17:04:48.404942] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:20:55.755  [2024-11-19 17:04:48.404953] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000b180 name raid_bdev1, state offline
00:20:55.755  [2024-11-19 17:04:48.462161] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:20:56.014   17:04:48	-- bdev/bdev_raid.sh@711 -- # return 0
00:20:56.014  
00:20:56.014  real	0m29.710s
00:20:56.014  user	0m42.378s
00:20:56.014  sys	0m6.050s
00:20:56.014   17:04:48	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:20:56.014   17:04:48	-- common/autotest_common.sh@10 -- # set +x
00:20:56.014  ************************************
00:20:56.014  END TEST raid_rebuild_test_sb
00:20:56.014  ************************************
00:20:56.014   17:04:48	-- bdev/bdev_raid.sh@737 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 4 false true
00:20:56.014   17:04:48	-- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']'
00:20:56.014   17:04:48	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:20:56.014   17:04:48	-- common/autotest_common.sh@10 -- # set +x
00:20:56.014  ************************************
00:20:56.014  START TEST raid_rebuild_test_io
00:20:56.014  ************************************
00:20:56.014   17:04:48	-- common/autotest_common.sh@1114 -- # raid_rebuild_test raid1 4 false true
00:20:56.014   17:04:48	-- bdev/bdev_raid.sh@517 -- # local raid_level=raid1
00:20:56.014   17:04:48	-- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=4
00:20:56.014   17:04:48	-- bdev/bdev_raid.sh@519 -- # local superblock=false
00:20:56.014   17:04:48	-- bdev/bdev_raid.sh@520 -- # local background_io=true
00:20:56.014    17:04:48	-- bdev/bdev_raid.sh@521 -- # (( i = 1 ))
00:20:56.014    17:04:48	-- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs ))
00:20:56.014    17:04:48	-- bdev/bdev_raid.sh@521 -- # echo BaseBdev1
00:20:56.014    17:04:48	-- bdev/bdev_raid.sh@521 -- # (( i++ ))
00:20:56.014    17:04:48	-- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs ))
00:20:56.014    17:04:48	-- bdev/bdev_raid.sh@521 -- # echo BaseBdev2
00:20:56.014    17:04:48	-- bdev/bdev_raid.sh@521 -- # (( i++ ))
00:20:56.014    17:04:48	-- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs ))
00:20:56.014    17:04:48	-- bdev/bdev_raid.sh@521 -- # echo BaseBdev3
00:20:56.014    17:04:48	-- bdev/bdev_raid.sh@521 -- # (( i++ ))
00:20:56.014    17:04:48	-- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs ))
00:20:56.014    17:04:48	-- bdev/bdev_raid.sh@521 -- # echo BaseBdev4
00:20:56.014    17:04:48	-- bdev/bdev_raid.sh@521 -- # (( i++ ))
00:20:56.014    17:04:48	-- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs ))
00:20:56.014   17:04:48	-- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4')
00:20:56.014   17:04:48	-- bdev/bdev_raid.sh@521 -- # local base_bdevs
00:20:56.014   17:04:48	-- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1
00:20:56.014   17:04:48	-- bdev/bdev_raid.sh@523 -- # local strip_size
00:20:56.014   17:04:48	-- bdev/bdev_raid.sh@524 -- # local create_arg
00:20:56.014   17:04:48	-- bdev/bdev_raid.sh@525 -- # local raid_bdev_size
00:20:56.014   17:04:48	-- bdev/bdev_raid.sh@526 -- # local data_offset
00:20:56.014   17:04:48	-- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']'
00:20:56.014   17:04:48	-- bdev/bdev_raid.sh@536 -- # strip_size=0
00:20:56.014   17:04:48	-- bdev/bdev_raid.sh@539 -- # '[' false = true ']'
00:20:56.014   17:04:48	-- bdev/bdev_raid.sh@544 -- # raid_pid=136156
00:20:56.014   17:04:48	-- bdev/bdev_raid.sh@545 -- # waitforlisten 136156 /var/tmp/spdk-raid.sock
00:20:56.014   17:04:48	-- common/autotest_common.sh@829 -- # '[' -z 136156 ']'
00:20:56.014   17:04:48	-- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid
00:20:56.014   17:04:48	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock
00:20:56.014   17:04:48	-- common/autotest_common.sh@834 -- # local max_retries=100
00:20:56.014   17:04:48	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...'
00:20:56.014  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...
00:20:56.014   17:04:48	-- common/autotest_common.sh@838 -- # xtrace_disable
00:20:56.014   17:04:48	-- common/autotest_common.sh@10 -- # set +x
00:20:56.273  [2024-11-19 17:04:48.905559] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:20:56.273  I/O size of 3145728 is greater than zero copy threshold (65536).
00:20:56.273  Zero copy mechanism will not be used.
00:20:56.273  [2024-11-19 17:04:48.905816] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid136156 ]
00:20:56.273  [2024-11-19 17:04:49.059439] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:20:56.273  [2024-11-19 17:04:49.111002] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:20:56.530  [2024-11-19 17:04:49.156077] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
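raid_rebuild_test_io repeats the rebuild flow with live traffic: bdevperf is launched with -z so it starts idle, the bdev stack is built over the same RPC socket, and the 60 s randrw workload (50% reads, 3 MiB I/Os, queue depth 2) is kicked off later via the perform_tests helper. The control flow, assuming the SPDK repo root as working directory:

    # Start bdevperf idle (-z) against the raid RPC socket.
    ./build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 \
        -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid &

    # ... create the malloc/delay/passthru/raid stack over rpc.py ...

    # Fire the configured workload once the stack exists.
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests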
00:20:57.097   17:04:49	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:20:57.097   17:04:49	-- common/autotest_common.sh@862 -- # return 0
00:20:57.097   17:04:49	-- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}"
00:20:57.097   17:04:49	-- bdev/bdev_raid.sh@549 -- # '[' false = true ']'
00:20:57.097   17:04:49	-- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1
00:20:57.355  BaseBdev1
00:20:57.355   17:04:50	-- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}"
00:20:57.355   17:04:50	-- bdev/bdev_raid.sh@549 -- # '[' false = true ']'
00:20:57.355   17:04:50	-- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2
00:20:57.613  BaseBdev2
00:20:57.613   17:04:50	-- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}"
00:20:57.613   17:04:50	-- bdev/bdev_raid.sh@549 -- # '[' false = true ']'
00:20:57.613   17:04:50	-- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3
00:20:57.871  BaseBdev3
00:20:57.871   17:04:50	-- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}"
00:20:57.871   17:04:50	-- bdev/bdev_raid.sh@549 -- # '[' false = true ']'
00:20:57.871   17:04:50	-- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4
00:20:58.128  BaseBdev4
00:20:58.386   17:04:50	-- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc
00:20:58.386  spare_malloc
00:20:58.668   17:04:51	-- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000
00:20:58.668  spare_delay
00:20:58.669   17:04:51	-- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare
00:20:58.927  [2024-11-19 17:04:51.693055] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay
00:20:58.927  [2024-11-19 17:04:51.693186] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:20:58.927  [2024-11-19 17:04:51.693230] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880
00:20:58.927  [2024-11-19 17:04:51.693274] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:20:58.927  [2024-11-19 17:04:51.696298] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:20:58.927  [2024-11-19 17:04:51.696390] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare
00:20:58.927  spare
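The spare is a three-layer stack: a 32 MiB malloc bdev, a delay bdev on top of it, and a passthru bdev on top of that so the spare can later be deleted and recreated without losing the backing data. Per its -r/-t/-w/-n arguments (read and write average/p99 latencies in microseconds, as the values suggest), reads pass through undelayed while writes pick up roughly 100 ms, which is what keeps the upcoming rebuild slow enough to observe:

    rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc
    rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay \
        -r 0 -t 0 -w 100000 -n 100000
    rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare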
00:20:58.927   17:04:51	-- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1
00:20:59.185  [2024-11-19 17:04:51.957327] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:20:59.185  [2024-11-19 17:04:51.959702] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:20:59.186  [2024-11-19 17:04:51.959761] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:20:59.186  [2024-11-19 17:04:51.959791] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed
00:20:59.186  [2024-11-19 17:04:51.959874] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007e80
00:20:59.186  [2024-11-19 17:04:51.959884] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512
00:20:59.186  [2024-11-19 17:04:51.960087] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000022c0
00:20:59.186  [2024-11-19 17:04:51.960478] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007e80
00:20:59.186  [2024-11-19 17:04:51.960501] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000007e80
00:20:59.186  [2024-11-19 17:04:51.960708] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
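Assembling the array is then a single RPC; with superblock=false no metadata is written, so the members contribute their full 65536 blocks and the mirror comes up at the size of one member:

    rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 \
        -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1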
00:20:59.186   17:04:51	-- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4
00:20:59.186   17:04:51	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:20:59.186   17:04:51	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:20:59.186   17:04:51	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:20:59.186   17:04:51	-- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:20:59.186   17:04:51	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:20:59.186   17:04:51	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:20:59.186   17:04:51	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:20:59.186   17:04:51	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:20:59.186   17:04:51	-- bdev/bdev_raid.sh@125 -- # local tmp
00:20:59.186    17:04:51	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:20:59.186    17:04:51	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:20:59.445   17:04:52	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:20:59.445    "name": "raid_bdev1",
00:20:59.445    "uuid": "d3a3590c-f17c-4e6e-9025-ed1ddb7bfb2b",
00:20:59.445    "strip_size_kb": 0,
00:20:59.445    "state": "online",
00:20:59.445    "raid_level": "raid1",
00:20:59.445    "superblock": false,
00:20:59.445    "num_base_bdevs": 4,
00:20:59.445    "num_base_bdevs_discovered": 4,
00:20:59.445    "num_base_bdevs_operational": 4,
00:20:59.445    "base_bdevs_list": [
00:20:59.445      {
00:20:59.445        "name": "BaseBdev1",
00:20:59.445        "uuid": "1092236b-d1cb-4ba3-9112-6ed379ee66be",
00:20:59.445        "is_configured": true,
00:20:59.445        "data_offset": 0,
00:20:59.445        "data_size": 65536
00:20:59.445      },
00:20:59.445      {
00:20:59.445        "name": "BaseBdev2",
00:20:59.445        "uuid": "17ee1669-ebf2-4c13-929b-6abb5d232047",
00:20:59.445        "is_configured": true,
00:20:59.445        "data_offset": 0,
00:20:59.445        "data_size": 65536
00:20:59.445      },
00:20:59.445      {
00:20:59.445        "name": "BaseBdev3",
00:20:59.445        "uuid": "f4f10704-1449-4129-adb1-2f154ed1cc67",
00:20:59.445        "is_configured": true,
00:20:59.445        "data_offset": 0,
00:20:59.445        "data_size": 65536
00:20:59.445      },
00:20:59.445      {
00:20:59.445        "name": "BaseBdev4",
00:20:59.445        "uuid": "6ea229f6-4516-465b-890d-817524c9d9bf",
00:20:59.445        "is_configured": true,
00:20:59.445        "data_offset": 0,
00:20:59.445        "data_size": 65536
00:20:59.445      }
00:20:59.445    ]
00:20:59.445  }'
00:20:59.445   17:04:52	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:20:59.445   17:04:52	-- common/autotest_common.sh@10 -- # set +x
00:21:00.011    17:04:52	-- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1
00:21:00.011    17:04:52	-- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks'
00:21:00.269  [2024-11-19 17:04:52.949713] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:21:00.269   17:04:52	-- bdev/bdev_raid.sh@567 -- # raid_bdev_size=65536
00:21:00.269    17:04:52	-- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset'
00:21:00.269    17:04:52	-- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:21:00.528   17:04:53	-- bdev/bdev_raid.sh@570 -- # data_offset=0
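The harness records the geometry it will verify against later: the array's block count from bdev_get_bdevs and the members' data offset, which is 0 in this run because no superblock reserves space at the front. Equivalent queries:

    raid_bdev_size=$(rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 \
                     | jq -r '.[].num_blocks')    # 65536 here
    data_offset=$(rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
                  | jq -r '.[].base_bdevs_list[0].data_offset')    # 0 here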
00:21:00.528   17:04:53	-- bdev/bdev_raid.sh@572 -- # '[' true = true ']'
00:21:00.528   17:04:53	-- bdev/bdev_raid.sh@574 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests
00:21:00.528   17:04:53	-- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1
00:21:00.528  [2024-11-19 17:04:53.371732] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390
00:21:00.528  I/O size of 3145728 is greater than zero copy threshold (65536).
00:21:00.528  Zero copy mechanism will not be used.
00:21:00.528  Running I/O for 60 seconds...
00:21:00.787  [2024-11-19 17:04:53.446088] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:21:00.787  [2024-11-19 17:04:53.446362] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000002390
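BaseBdev1 is then hot-removed while bdevperf is already pushing I/O: the target detaches slot 0 from each I/O channel and the mirror keeps serving in degraded mode with three members. The removal is a single call:

    rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1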
00:21:00.787   17:04:53	-- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3
00:21:00.787   17:04:53	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:21:00.787   17:04:53	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:21:00.787   17:04:53	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:21:00.787   17:04:53	-- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:21:00.787   17:04:53	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:21:00.787   17:04:53	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:21:00.787   17:04:53	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:21:00.787   17:04:53	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:21:00.787   17:04:53	-- bdev/bdev_raid.sh@125 -- # local tmp
00:21:00.787    17:04:53	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:21:00.787    17:04:53	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:21:01.046   17:04:53	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:21:01.046    "name": "raid_bdev1",
00:21:01.046    "uuid": "d3a3590c-f17c-4e6e-9025-ed1ddb7bfb2b",
00:21:01.046    "strip_size_kb": 0,
00:21:01.046    "state": "online",
00:21:01.046    "raid_level": "raid1",
00:21:01.046    "superblock": false,
00:21:01.046    "num_base_bdevs": 4,
00:21:01.046    "num_base_bdevs_discovered": 3,
00:21:01.046    "num_base_bdevs_operational": 3,
00:21:01.046    "base_bdevs_list": [
00:21:01.046      {
00:21:01.046        "name": null,
00:21:01.046        "uuid": "00000000-0000-0000-0000-000000000000",
00:21:01.046        "is_configured": false,
00:21:01.046        "data_offset": 0,
00:21:01.046        "data_size": 65536
00:21:01.047      },
00:21:01.047      {
00:21:01.047        "name": "BaseBdev2",
00:21:01.047        "uuid": "17ee1669-ebf2-4c13-929b-6abb5d232047",
00:21:01.047        "is_configured": true,
00:21:01.047        "data_offset": 0,
00:21:01.047        "data_size": 65536
00:21:01.047      },
00:21:01.047      {
00:21:01.047        "name": "BaseBdev3",
00:21:01.047        "uuid": "f4f10704-1449-4129-adb1-2f154ed1cc67",
00:21:01.047        "is_configured": true,
00:21:01.047        "data_offset": 0,
00:21:01.047        "data_size": 65536
00:21:01.047      },
00:21:01.047      {
00:21:01.047        "name": "BaseBdev4",
00:21:01.047        "uuid": "6ea229f6-4516-465b-890d-817524c9d9bf",
00:21:01.047        "is_configured": true,
00:21:01.047        "data_offset": 0,
00:21:01.047        "data_size": 65536
00:21:01.047      }
00:21:01.047    ]
00:21:01.047  }'
00:21:01.047   17:04:53	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:21:01.047   17:04:53	-- common/autotest_common.sh@10 -- # set +x
00:21:01.615   17:04:54	-- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare
00:21:01.873  [2024-11-19 17:04:54.652436] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare
00:21:01.873  [2024-11-19 17:04:54.652551] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:21:01.873   17:04:54	-- bdev/bdev_raid.sh@598 -- # sleep 1
00:21:01.873  [2024-11-19 17:04:54.715167] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460
00:21:01.873  [2024-11-19 17:04:54.719047] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:21:02.132  [2024-11-19 17:04:54.839291] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144
00:21:02.132  [2024-11-19 17:04:54.971604] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144
00:21:02.132  [2024-11-19 17:04:54.972489] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144
00:21:02.699  [2024-11-19 17:04:55.367412] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288
00:21:02.957  [2024-11-19 17:04:55.599862] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288
00:21:02.957  [2024-11-19 17:04:55.600864] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288
00:21:02.957   17:04:55	-- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:21:02.958   17:04:55	-- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:21:02.958   17:04:55	-- bdev/bdev_raid.sh@184 -- # local process_type=rebuild
00:21:02.958   17:04:55	-- bdev/bdev_raid.sh@185 -- # local target=spare
00:21:02.958   17:04:55	-- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:21:02.958    17:04:55	-- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:21:02.958    17:04:55	-- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:21:03.222   17:04:55	-- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:21:03.222    "name": "raid_bdev1",
00:21:03.222    "uuid": "d3a3590c-f17c-4e6e-9025-ed1ddb7bfb2b",
00:21:03.222    "strip_size_kb": 0,
00:21:03.222    "state": "online",
00:21:03.222    "raid_level": "raid1",
00:21:03.222    "superblock": false,
00:21:03.222    "num_base_bdevs": 4,
00:21:03.222    "num_base_bdevs_discovered": 4,
00:21:03.222    "num_base_bdevs_operational": 4,
00:21:03.222    "process": {
00:21:03.222      "type": "rebuild",
00:21:03.222      "target": "spare",
00:21:03.222      "progress": {
00:21:03.222        "blocks": 12288,
00:21:03.222        "percent": 18
00:21:03.222      }
00:21:03.222    },
00:21:03.222    "base_bdevs_list": [
00:21:03.222      {
00:21:03.222        "name": "spare",
00:21:03.222        "uuid": "0f4d333b-1060-5c79-88c0-090043576526",
00:21:03.222        "is_configured": true,
00:21:03.222        "data_offset": 0,
00:21:03.222        "data_size": 65536
00:21:03.222      },
00:21:03.222      {
00:21:03.222        "name": "BaseBdev2",
00:21:03.222        "uuid": "17ee1669-ebf2-4c13-929b-6abb5d232047",
00:21:03.222        "is_configured": true,
00:21:03.222        "data_offset": 0,
00:21:03.222        "data_size": 65536
00:21:03.222      },
00:21:03.222      {
00:21:03.223        "name": "BaseBdev3",
00:21:03.223        "uuid": "f4f10704-1449-4129-adb1-2f154ed1cc67",
00:21:03.223        "is_configured": true,
00:21:03.223        "data_offset": 0,
00:21:03.223        "data_size": 65536
00:21:03.223      },
00:21:03.223      {
00:21:03.223        "name": "BaseBdev4",
00:21:03.223        "uuid": "6ea229f6-4516-465b-890d-817524c9d9bf",
00:21:03.223        "is_configured": true,
00:21:03.223        "data_offset": 0,
00:21:03.223        "data_size": 65536
00:21:03.223      }
00:21:03.223    ]
00:21:03.223  }'
00:21:03.223    17:04:55	-- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:21:03.223   17:04:55	-- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:21:03.223    17:04:55	-- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:21:03.223   17:04:56	-- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]]
00:21:03.223   17:04:56	-- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare
00:21:03.483  [2024-11-19 17:04:56.103034] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432
00:21:03.483  [2024-11-19 17:04:56.252652] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:21:03.742  [2024-11-19 17:04:56.346630] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device
00:21:03.742  [2024-11-19 17:04:56.359105] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:21:03.742  [2024-11-19 17:04:56.383139] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000002390
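Removing the spare while it is still the rebuild target aborts the process: the "Finished rebuild ... No such device" warning above is the rebuild terminating with an error rather than completing, after which the array drops back to three discovered members with no active process. Re-attaching the spare afterwards starts a fresh rebuild from the beginning:

    # Cancels the in-flight rebuild ("No such device")...
    rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare
    # ...and a later re-attach restarts it from block 0.
    rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare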
00:21:03.742   17:04:56	-- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3
00:21:03.742   17:04:56	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:21:03.742   17:04:56	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:21:03.742   17:04:56	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:21:03.742   17:04:56	-- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:21:03.742   17:04:56	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:21:03.742   17:04:56	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:21:03.742   17:04:56	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:21:03.742   17:04:56	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:21:03.742   17:04:56	-- bdev/bdev_raid.sh@125 -- # local tmp
00:21:03.742    17:04:56	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:21:03.742    17:04:56	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:21:04.001   17:04:56	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:21:04.001    "name": "raid_bdev1",
00:21:04.001    "uuid": "d3a3590c-f17c-4e6e-9025-ed1ddb7bfb2b",
00:21:04.001    "strip_size_kb": 0,
00:21:04.001    "state": "online",
00:21:04.001    "raid_level": "raid1",
00:21:04.001    "superblock": false,
00:21:04.001    "num_base_bdevs": 4,
00:21:04.001    "num_base_bdevs_discovered": 3,
00:21:04.001    "num_base_bdevs_operational": 3,
00:21:04.001    "base_bdevs_list": [
00:21:04.001      {
00:21:04.001        "name": null,
00:21:04.001        "uuid": "00000000-0000-0000-0000-000000000000",
00:21:04.001        "is_configured": false,
00:21:04.001        "data_offset": 0,
00:21:04.001        "data_size": 65536
00:21:04.001      },
00:21:04.001      {
00:21:04.001        "name": "BaseBdev2",
00:21:04.001        "uuid": "17ee1669-ebf2-4c13-929b-6abb5d232047",
00:21:04.001        "is_configured": true,
00:21:04.001        "data_offset": 0,
00:21:04.001        "data_size": 65536
00:21:04.001      },
00:21:04.001      {
00:21:04.001        "name": "BaseBdev3",
00:21:04.001        "uuid": "f4f10704-1449-4129-adb1-2f154ed1cc67",
00:21:04.001        "is_configured": true,
00:21:04.001        "data_offset": 0,
00:21:04.001        "data_size": 65536
00:21:04.001      },
00:21:04.001      {
00:21:04.001        "name": "BaseBdev4",
00:21:04.001        "uuid": "6ea229f6-4516-465b-890d-817524c9d9bf",
00:21:04.001        "is_configured": true,
00:21:04.001        "data_offset": 0,
00:21:04.001        "data_size": 65536
00:21:04.001      }
00:21:04.001    ]
00:21:04.001  }'
00:21:04.001   17:04:56	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:21:04.001   17:04:56	-- common/autotest_common.sh@10 -- # set +x
00:21:04.569   17:04:57	-- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none
00:21:04.569   17:04:57	-- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:21:04.569   17:04:57	-- bdev/bdev_raid.sh@184 -- # local process_type=none
00:21:04.569   17:04:57	-- bdev/bdev_raid.sh@185 -- # local target=none
00:21:04.569   17:04:57	-- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:21:04.569    17:04:57	-- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:21:04.569    17:04:57	-- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:21:04.828   17:04:57	-- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:21:04.828    "name": "raid_bdev1",
00:21:04.828    "uuid": "d3a3590c-f17c-4e6e-9025-ed1ddb7bfb2b",
00:21:04.828    "strip_size_kb": 0,
00:21:04.828    "state": "online",
00:21:04.828    "raid_level": "raid1",
00:21:04.828    "superblock": false,
00:21:04.828    "num_base_bdevs": 4,
00:21:04.828    "num_base_bdevs_discovered": 3,
00:21:04.828    "num_base_bdevs_operational": 3,
00:21:04.828    "base_bdevs_list": [
00:21:04.828      {
00:21:04.828        "name": null,
00:21:04.828        "uuid": "00000000-0000-0000-0000-000000000000",
00:21:04.828        "is_configured": false,
00:21:04.828        "data_offset": 0,
00:21:04.828        "data_size": 65536
00:21:04.828      },
00:21:04.828      {
00:21:04.828        "name": "BaseBdev2",
00:21:04.828        "uuid": "17ee1669-ebf2-4c13-929b-6abb5d232047",
00:21:04.828        "is_configured": true,
00:21:04.828        "data_offset": 0,
00:21:04.828        "data_size": 65536
00:21:04.828      },
00:21:04.828      {
00:21:04.828        "name": "BaseBdev3",
00:21:04.828        "uuid": "f4f10704-1449-4129-adb1-2f154ed1cc67",
00:21:04.828        "is_configured": true,
00:21:04.828        "data_offset": 0,
00:21:04.828        "data_size": 65536
00:21:04.828      },
00:21:04.828      {
00:21:04.828        "name": "BaseBdev4",
00:21:04.828        "uuid": "6ea229f6-4516-465b-890d-817524c9d9bf",
00:21:04.828        "is_configured": true,
00:21:04.828        "data_offset": 0,
00:21:04.828        "data_size": 65536
00:21:04.828      }
00:21:04.828    ]
00:21:04.828  }'
00:21:04.828    17:04:57	-- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:21:04.828   17:04:57	-- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]]
00:21:04.828    17:04:57	-- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:21:04.828   17:04:57	-- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]]
00:21:04.828   17:04:57	-- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare
00:21:05.114  [2024-11-19 17:04:57.898200] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare
00:21:05.114  [2024-11-19 17:04:57.898512] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:21:05.114  [2024-11-19 17:04:57.941912] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600
00:21:05.114  [2024-11-19 17:04:57.944706] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:21:05.114   17:04:57	-- bdev/bdev_raid.sh@614 -- # sleep 1
00:21:05.372  [2024-11-19 17:04:58.074484] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144
00:21:05.372  [2024-11-19 17:04:58.076011] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144
00:21:05.631  [2024-11-19 17:04:58.304887] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144
00:21:05.631  [2024-11-19 17:04:58.305472] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144
00:21:05.890  [2024-11-19 17:04:58.679148] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288
00:21:06.154  [2024-11-19 17:04:58.951276] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288
00:21:06.154   17:04:58	-- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:21:06.154   17:04:58	-- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:21:06.154   17:04:58	-- bdev/bdev_raid.sh@184 -- # local process_type=rebuild
00:21:06.154   17:04:58	-- bdev/bdev_raid.sh@185 -- # local target=spare
00:21:06.154   17:04:58	-- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:21:06.154    17:04:58	-- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:21:06.154    17:04:58	-- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:21:06.419   17:04:59	-- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:21:06.419    "name": "raid_bdev1",
00:21:06.419    "uuid": "d3a3590c-f17c-4e6e-9025-ed1ddb7bfb2b",
00:21:06.419    "strip_size_kb": 0,
00:21:06.419    "state": "online",
00:21:06.419    "raid_level": "raid1",
00:21:06.419    "superblock": false,
00:21:06.419    "num_base_bdevs": 4,
00:21:06.419    "num_base_bdevs_discovered": 4,
00:21:06.419    "num_base_bdevs_operational": 4,
00:21:06.419    "process": {
00:21:06.419      "type": "rebuild",
00:21:06.419      "target": "spare",
00:21:06.419      "progress": {
00:21:06.419        "blocks": 12288,
00:21:06.419        "percent": 18
00:21:06.419      }
00:21:06.419    },
00:21:06.419    "base_bdevs_list": [
00:21:06.419      {
00:21:06.419        "name": "spare",
00:21:06.419        "uuid": "0f4d333b-1060-5c79-88c0-090043576526",
00:21:06.419        "is_configured": true,
00:21:06.419        "data_offset": 0,
00:21:06.419        "data_size": 65536
00:21:06.419      },
00:21:06.419      {
00:21:06.419        "name": "BaseBdev2",
00:21:06.419        "uuid": "17ee1669-ebf2-4c13-929b-6abb5d232047",
00:21:06.419        "is_configured": true,
00:21:06.419        "data_offset": 0,
00:21:06.419        "data_size": 65536
00:21:06.419      },
00:21:06.419      {
00:21:06.419        "name": "BaseBdev3",
00:21:06.419        "uuid": "f4f10704-1449-4129-adb1-2f154ed1cc67",
00:21:06.419        "is_configured": true,
00:21:06.419        "data_offset": 0,
00:21:06.419        "data_size": 65536
00:21:06.419      },
00:21:06.419      {
00:21:06.419        "name": "BaseBdev4",
00:21:06.419        "uuid": "6ea229f6-4516-465b-890d-817524c9d9bf",
00:21:06.419        "is_configured": true,
00:21:06.419        "data_offset": 0,
00:21:06.419        "data_size": 65536
00:21:06.419      }
00:21:06.419    ]
00:21:06.419  }'
00:21:06.419    17:04:59	-- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:21:06.678   17:04:59	-- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:21:06.678    17:04:59	-- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:21:06.678   17:04:59	-- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]]
00:21:06.678   17:04:59	-- bdev/bdev_raid.sh@617 -- # '[' false = true ']'
00:21:06.678   17:04:59	-- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=4
00:21:06.678   17:04:59	-- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']'
00:21:06.678   17:04:59	-- bdev/bdev_raid.sh@644 -- # '[' 4 -gt 2 ']'
00:21:06.679   17:04:59	-- bdev/bdev_raid.sh@646 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2
00:21:06.679  [2024-11-19 17:04:59.439523] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432
00:21:06.939  [2024-11-19 17:04:59.589635] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:21:06.939  [2024-11-19 17:04:59.669569] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576
00:21:06.939  [2024-11-19 17:04:59.671555] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576
00:21:06.939  [2024-11-19 17:04:59.773957] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000002390
00:21:06.939  [2024-11-19 17:04:59.774287] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000002600
00:21:06.939  [2024-11-19 17:04:59.792151] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576
00:21:07.198   17:04:59	-- bdev/bdev_raid.sh@649 -- # base_bdevs[1]=
00:21:07.198   17:04:59	-- bdev/bdev_raid.sh@650 -- # (( num_base_bdevs_operational-- ))
00:21:07.198   17:04:59	-- bdev/bdev_raid.sh@653 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:21:07.198   17:04:59	-- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:21:07.198   17:04:59	-- bdev/bdev_raid.sh@184 -- # local process_type=rebuild
00:21:07.198   17:04:59	-- bdev/bdev_raid.sh@185 -- # local target=spare
00:21:07.198   17:04:59	-- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:21:07.198    17:04:59	-- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:21:07.198    17:04:59	-- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:21:07.457   17:05:00	-- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:21:07.457    "name": "raid_bdev1",
00:21:07.457    "uuid": "d3a3590c-f17c-4e6e-9025-ed1ddb7bfb2b",
00:21:07.457    "strip_size_kb": 0,
00:21:07.457    "state": "online",
00:21:07.457    "raid_level": "raid1",
00:21:07.457    "superblock": false,
00:21:07.457    "num_base_bdevs": 4,
00:21:07.457    "num_base_bdevs_discovered": 3,
00:21:07.457    "num_base_bdevs_operational": 3,
00:21:07.457    "process": {
00:21:07.457      "type": "rebuild",
00:21:07.457      "target": "spare",
00:21:07.457      "progress": {
00:21:07.457        "blocks": 24576,
00:21:07.457        "percent": 37
00:21:07.457      }
00:21:07.457    },
00:21:07.457    "base_bdevs_list": [
00:21:07.457      {
00:21:07.457        "name": "spare",
00:21:07.457        "uuid": "0f4d333b-1060-5c79-88c0-090043576526",
00:21:07.457        "is_configured": true,
00:21:07.457        "data_offset": 0,
00:21:07.457        "data_size": 65536
00:21:07.457      },
00:21:07.457      {
00:21:07.457        "name": null,
00:21:07.457        "uuid": "00000000-0000-0000-0000-000000000000",
00:21:07.457        "is_configured": false,
00:21:07.457        "data_offset": 0,
00:21:07.457        "data_size": 65536
00:21:07.457      },
00:21:07.457      {
00:21:07.457        "name": "BaseBdev3",
00:21:07.457        "uuid": "f4f10704-1449-4129-adb1-2f154ed1cc67",
00:21:07.457        "is_configured": true,
00:21:07.457        "data_offset": 0,
00:21:07.457        "data_size": 65536
00:21:07.457      },
00:21:07.457      {
00:21:07.457        "name": "BaseBdev4",
00:21:07.457        "uuid": "6ea229f6-4516-465b-890d-817524c9d9bf",
00:21:07.457        "is_configured": true,
00:21:07.457        "data_offset": 0,
00:21:07.457        "data_size": 65536
00:21:07.457      }
00:21:07.457    ]
00:21:07.457  }'
00:21:07.457    17:05:00	-- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:21:07.457   17:05:00	-- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:21:07.457    17:05:00	-- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:21:07.457   17:05:00	-- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]]
00:21:07.457   17:05:00	-- bdev/bdev_raid.sh@657 -- # local timeout=501
00:21:07.457   17:05:00	-- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout ))
00:21:07.457   17:05:00	-- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:21:07.457   17:05:00	-- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:21:07.457   17:05:00	-- bdev/bdev_raid.sh@184 -- # local process_type=rebuild
00:21:07.457   17:05:00	-- bdev/bdev_raid.sh@185 -- # local target=spare
00:21:07.457   17:05:00	-- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:21:07.457    17:05:00	-- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:21:07.457    17:05:00	-- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:21:07.716   17:05:00	-- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:21:07.716    "name": "raid_bdev1",
00:21:07.716    "uuid": "d3a3590c-f17c-4e6e-9025-ed1ddb7bfb2b",
00:21:07.716    "strip_size_kb": 0,
00:21:07.716    "state": "online",
00:21:07.716    "raid_level": "raid1",
00:21:07.716    "superblock": false,
00:21:07.716    "num_base_bdevs": 4,
00:21:07.716    "num_base_bdevs_discovered": 3,
00:21:07.716    "num_base_bdevs_operational": 3,
00:21:07.716    "process": {
00:21:07.716      "type": "rebuild",
00:21:07.716      "target": "spare",
00:21:07.716      "progress": {
00:21:07.716        "blocks": 30720,
00:21:07.716        "percent": 46
00:21:07.716      }
00:21:07.716    },
00:21:07.716    "base_bdevs_list": [
00:21:07.716      {
00:21:07.716        "name": "spare",
00:21:07.716        "uuid": "0f4d333b-1060-5c79-88c0-090043576526",
00:21:07.716        "is_configured": true,
00:21:07.716        "data_offset": 0,
00:21:07.716        "data_size": 65536
00:21:07.716      },
00:21:07.716      {
00:21:07.716        "name": null,
00:21:07.716        "uuid": "00000000-0000-0000-0000-000000000000",
00:21:07.716        "is_configured": false,
00:21:07.716        "data_offset": 0,
00:21:07.716        "data_size": 65536
00:21:07.716      },
00:21:07.716      {
00:21:07.716        "name": "BaseBdev3",
00:21:07.716        "uuid": "f4f10704-1449-4129-adb1-2f154ed1cc67",
00:21:07.716        "is_configured": true,
00:21:07.716        "data_offset": 0,
00:21:07.716        "data_size": 65536
00:21:07.716      },
00:21:07.716      {
00:21:07.716        "name": "BaseBdev4",
00:21:07.716        "uuid": "6ea229f6-4516-465b-890d-817524c9d9bf",
00:21:07.716        "is_configured": true,
00:21:07.716        "data_offset": 0,
00:21:07.716        "data_size": 65536
00:21:07.716      }
00:21:07.716    ]
00:21:07.716  }'
00:21:07.716    17:05:00	-- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:21:07.716   17:05:00	-- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:21:07.716    17:05:00	-- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:21:07.975   17:05:00	-- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]]
00:21:07.975   17:05:00	-- bdev/bdev_raid.sh@662 -- # sleep 1
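
The loop traced above is the test's rebuild-progress poll: fetch the raid bdev over RPC, pick it out with jq, and keep looping while .process reports an in-flight rebuild. A minimal standalone sketch of that pattern, assuming scripts/rpc.py from the SPDK repo is on PATH as rpc.py and the same /var/tmp/spdk-raid.sock socket is in use (poll_rebuild is a hypothetical helper name, not part of the test):

  poll_rebuild() {
    local name=$1 sock=/var/tmp/spdk-raid.sock info
    while :; do
      # grab the current view of the raid bdev
      info=$(rpc.py -s "$sock" bdev_raid_get_bdevs all |
             jq -r ".[] | select(.name == \"$name\")")
      # the .process object disappears once the rebuild completes
      [[ $(jq -r '.process.type // "none"' <<<"$info") == rebuild ]] || break
      jq -r '.process.progress.percent' <<<"$info"   # e.g. 37, 46, 87, ...
      sleep 1
    done
  }
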
00:21:08.913   17:05:01	-- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout ))
00:21:08.913   17:05:01	-- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:21:08.913   17:05:01	-- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:21:08.913   17:05:01	-- bdev/bdev_raid.sh@184 -- # local process_type=rebuild
00:21:08.913   17:05:01	-- bdev/bdev_raid.sh@185 -- # local target=spare
00:21:08.913   17:05:01	-- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:21:08.913    17:05:01	-- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:21:08.913    17:05:01	-- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:21:08.913  [2024-11-19 17:05:01.627154] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296
00:21:08.913  [2024-11-19 17:05:01.627657] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296
00:21:09.171  [2024-11-19 17:05:01.855701] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440
00:21:09.171   17:05:01	-- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:21:09.171    "name": "raid_bdev1",
00:21:09.171    "uuid": "d3a3590c-f17c-4e6e-9025-ed1ddb7bfb2b",
00:21:09.171    "strip_size_kb": 0,
00:21:09.171    "state": "online",
00:21:09.171    "raid_level": "raid1",
00:21:09.171    "superblock": false,
00:21:09.171    "num_base_bdevs": 4,
00:21:09.171    "num_base_bdevs_discovered": 3,
00:21:09.171    "num_base_bdevs_operational": 3,
00:21:09.171    "process": {
00:21:09.171      "type": "rebuild",
00:21:09.171      "target": "spare",
00:21:09.171      "progress": {
00:21:09.171        "blocks": 57344,
00:21:09.171        "percent": 87
00:21:09.171      }
00:21:09.171    },
00:21:09.171    "base_bdevs_list": [
00:21:09.171      {
00:21:09.171        "name": "spare",
00:21:09.171        "uuid": "0f4d333b-1060-5c79-88c0-090043576526",
00:21:09.171        "is_configured": true,
00:21:09.171        "data_offset": 0,
00:21:09.171        "data_size": 65536
00:21:09.171      },
00:21:09.171      {
00:21:09.171        "name": null,
00:21:09.171        "uuid": "00000000-0000-0000-0000-000000000000",
00:21:09.171        "is_configured": false,
00:21:09.171        "data_offset": 0,
00:21:09.171        "data_size": 65536
00:21:09.171      },
00:21:09.171      {
00:21:09.171        "name": "BaseBdev3",
00:21:09.171        "uuid": "f4f10704-1449-4129-adb1-2f154ed1cc67",
00:21:09.171        "is_configured": true,
00:21:09.171        "data_offset": 0,
00:21:09.171        "data_size": 65536
00:21:09.171      },
00:21:09.171      {
00:21:09.171        "name": "BaseBdev4",
00:21:09.171        "uuid": "6ea229f6-4516-465b-890d-817524c9d9bf",
00:21:09.171        "is_configured": true,
00:21:09.171        "data_offset": 0,
00:21:09.171        "data_size": 65536
00:21:09.171      }
00:21:09.171    ]
00:21:09.171  }'
00:21:09.171    17:05:01	-- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:21:09.171   17:05:01	-- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:21:09.171    17:05:01	-- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:21:09.171   17:05:01	-- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]]
00:21:09.171   17:05:01	-- bdev/bdev_raid.sh@662 -- # sleep 1
00:21:09.430  [2024-11-19 17:05:02.081213] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440
00:21:09.689  [2024-11-19 17:05:02.525848] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1
00:21:09.948  [2024-11-19 17:05:02.625850] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1
00:21:09.948  [2024-11-19 17:05:02.628778] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:21:10.207   17:05:02	-- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout ))
00:21:10.207   17:05:02	-- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:21:10.207   17:05:02	-- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:21:10.207   17:05:02	-- bdev/bdev_raid.sh@184 -- # local process_type=rebuild
00:21:10.207   17:05:02	-- bdev/bdev_raid.sh@185 -- # local target=spare
00:21:10.207   17:05:02	-- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:21:10.207    17:05:02	-- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:21:10.207    17:05:02	-- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:21:10.466   17:05:03	-- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:21:10.466    "name": "raid_bdev1",
00:21:10.466    "uuid": "d3a3590c-f17c-4e6e-9025-ed1ddb7bfb2b",
00:21:10.466    "strip_size_kb": 0,
00:21:10.466    "state": "online",
00:21:10.466    "raid_level": "raid1",
00:21:10.466    "superblock": false,
00:21:10.466    "num_base_bdevs": 4,
00:21:10.466    "num_base_bdevs_discovered": 3,
00:21:10.466    "num_base_bdevs_operational": 3,
00:21:10.466    "base_bdevs_list": [
00:21:10.466      {
00:21:10.466        "name": "spare",
00:21:10.466        "uuid": "0f4d333b-1060-5c79-88c0-090043576526",
00:21:10.466        "is_configured": true,
00:21:10.466        "data_offset": 0,
00:21:10.466        "data_size": 65536
00:21:10.466      },
00:21:10.466      {
00:21:10.466        "name": null,
00:21:10.466        "uuid": "00000000-0000-0000-0000-000000000000",
00:21:10.466        "is_configured": false,
00:21:10.466        "data_offset": 0,
00:21:10.466        "data_size": 65536
00:21:10.466      },
00:21:10.466      {
00:21:10.466        "name": "BaseBdev3",
00:21:10.466        "uuid": "f4f10704-1449-4129-adb1-2f154ed1cc67",
00:21:10.466        "is_configured": true,
00:21:10.466        "data_offset": 0,
00:21:10.466        "data_size": 65536
00:21:10.466      },
00:21:10.466      {
00:21:10.466        "name": "BaseBdev4",
00:21:10.466        "uuid": "6ea229f6-4516-465b-890d-817524c9d9bf",
00:21:10.466        "is_configured": true,
00:21:10.466        "data_offset": 0,
00:21:10.466        "data_size": 65536
00:21:10.466      }
00:21:10.466    ]
00:21:10.466  }'
00:21:10.466    17:05:03	-- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:21:10.466   17:05:03	-- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]]
00:21:10.466    17:05:03	-- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:21:10.466   17:05:03	-- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]]
00:21:10.466   17:05:03	-- bdev/bdev_raid.sh@660 -- # break
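
The exit condition hinges on jq's // alternative operator: once the rebuild finishes, the process object vanishes from the RPC output, so .process.type // "none" yields the literal string none, both comparisons above fail, and the loop breaks. A one-liner illustrating the fallback:

  echo '{"name": "raid_bdev1"}' | jq -r '.process.type // "none"'   # prints: none
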
00:21:10.466   17:05:03	-- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none
00:21:10.466   17:05:03	-- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:21:10.466   17:05:03	-- bdev/bdev_raid.sh@184 -- # local process_type=none
00:21:10.466   17:05:03	-- bdev/bdev_raid.sh@185 -- # local target=none
00:21:10.466   17:05:03	-- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:21:10.466    17:05:03	-- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:21:10.466    17:05:03	-- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:21:10.723   17:05:03	-- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:21:10.723    "name": "raid_bdev1",
00:21:10.723    "uuid": "d3a3590c-f17c-4e6e-9025-ed1ddb7bfb2b",
00:21:10.723    "strip_size_kb": 0,
00:21:10.723    "state": "online",
00:21:10.723    "raid_level": "raid1",
00:21:10.723    "superblock": false,
00:21:10.723    "num_base_bdevs": 4,
00:21:10.723    "num_base_bdevs_discovered": 3,
00:21:10.723    "num_base_bdevs_operational": 3,
00:21:10.723    "base_bdevs_list": [
00:21:10.723      {
00:21:10.723        "name": "spare",
00:21:10.723        "uuid": "0f4d333b-1060-5c79-88c0-090043576526",
00:21:10.723        "is_configured": true,
00:21:10.723        "data_offset": 0,
00:21:10.723        "data_size": 65536
00:21:10.723      },
00:21:10.723      {
00:21:10.723        "name": null,
00:21:10.724        "uuid": "00000000-0000-0000-0000-000000000000",
00:21:10.724        "is_configured": false,
00:21:10.724        "data_offset": 0,
00:21:10.724        "data_size": 65536
00:21:10.724      },
00:21:10.724      {
00:21:10.724        "name": "BaseBdev3",
00:21:10.724        "uuid": "f4f10704-1449-4129-adb1-2f154ed1cc67",
00:21:10.724        "is_configured": true,
00:21:10.724        "data_offset": 0,
00:21:10.724        "data_size": 65536
00:21:10.724      },
00:21:10.724      {
00:21:10.724        "name": "BaseBdev4",
00:21:10.724        "uuid": "6ea229f6-4516-465b-890d-817524c9d9bf",
00:21:10.724        "is_configured": true,
00:21:10.724        "data_offset": 0,
00:21:10.724        "data_size": 65536
00:21:10.724      }
00:21:10.724    ]
00:21:10.724  }'
00:21:10.724    17:05:03	-- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:21:10.982   17:05:03	-- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]]
00:21:10.982    17:05:03	-- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:21:10.982   17:05:03	-- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]]
00:21:10.982   17:05:03	-- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3
00:21:10.982   17:05:03	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:21:10.982   17:05:03	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:21:10.982   17:05:03	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:21:10.982   17:05:03	-- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:21:10.982   17:05:03	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:21:10.982   17:05:03	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:21:10.982   17:05:03	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:21:10.982   17:05:03	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:21:10.982   17:05:03	-- bdev/bdev_raid.sh@125 -- # local tmp
00:21:10.982    17:05:03	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:21:10.982    17:05:03	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:21:11.240   17:05:03	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:21:11.240    "name": "raid_bdev1",
00:21:11.240    "uuid": "d3a3590c-f17c-4e6e-9025-ed1ddb7bfb2b",
00:21:11.240    "strip_size_kb": 0,
00:21:11.240    "state": "online",
00:21:11.240    "raid_level": "raid1",
00:21:11.240    "superblock": false,
00:21:11.240    "num_base_bdevs": 4,
00:21:11.240    "num_base_bdevs_discovered": 3,
00:21:11.240    "num_base_bdevs_operational": 3,
00:21:11.240    "base_bdevs_list": [
00:21:11.240      {
00:21:11.240        "name": "spare",
00:21:11.240        "uuid": "0f4d333b-1060-5c79-88c0-090043576526",
00:21:11.240        "is_configured": true,
00:21:11.240        "data_offset": 0,
00:21:11.240        "data_size": 65536
00:21:11.240      },
00:21:11.240      {
00:21:11.240        "name": null,
00:21:11.240        "uuid": "00000000-0000-0000-0000-000000000000",
00:21:11.240        "is_configured": false,
00:21:11.240        "data_offset": 0,
00:21:11.240        "data_size": 65536
00:21:11.240      },
00:21:11.240      {
00:21:11.240        "name": "BaseBdev3",
00:21:11.240        "uuid": "f4f10704-1449-4129-adb1-2f154ed1cc67",
00:21:11.240        "is_configured": true,
00:21:11.240        "data_offset": 0,
00:21:11.240        "data_size": 65536
00:21:11.240      },
00:21:11.240      {
00:21:11.240        "name": "BaseBdev4",
00:21:11.240        "uuid": "6ea229f6-4516-465b-890d-817524c9d9bf",
00:21:11.240        "is_configured": true,
00:21:11.240        "data_offset": 0,
00:21:11.240        "data_size": 65536
00:21:11.240      }
00:21:11.240    ]
00:21:11.240  }'
00:21:11.240   17:05:03	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:21:11.240   17:05:03	-- common/autotest_common.sh@10 -- # set +x
00:21:11.805   17:05:04	-- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1
00:21:12.064  [2024-11-19 17:05:04.789353] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:21:12.065  [2024-11-19 17:05:04.789599] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:21:12.065  
00:21:12.065                                                                                                  Latency(us)
00:21:12.065  
[2024-11-19T17:05:04.929Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:21:12.065  
[2024-11-19T17:05:04.929Z]  Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728)
00:21:12.065  	 raid_bdev1          :      11.43      94.01     282.03       0.00     0.00   15298.89     327.68  123332.51
00:21:12.065  
[2024-11-19T17:05:04.929Z]  ===================================================================================================================
00:21:12.065  
[2024-11-19T17:05:04.929Z]  Total                       :                 94.01     282.03       0.00     0.00   15298.89     327.68  123332.51
00:21:12.065  [2024-11-19 17:05:04.813669] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:21:12.065  [2024-11-19 17:05:04.813868] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:21:12.065  [2024-11-19 17:05:04.814047] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:21:12.065  [2024-11-19 17:05:04.814169] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007e80 name raid_bdev1, state offline
00:21:12.065    17:05:04	-- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:21:12.065    17:05:04	-- bdev/bdev_raid.sh@671 -- # jq length
00:21:12.065  0
00:21:12.323   17:05:05	-- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]]
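
Teardown then asserts the raid is fully gone: delete it and expect bdev_raid_get_bdevs to return an empty array. A compact sketch using the same RPCs as traced above:

  rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1
  count=$(rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all | jq length)
  [[ $count == 0 ]]   # the [[ 0 == 0 ]] check above
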
00:21:12.323   17:05:05	-- bdev/bdev_raid.sh@673 -- # '[' true = true ']'
00:21:12.323   17:05:05	-- bdev/bdev_raid.sh@675 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0
00:21:12.323   17:05:05	-- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock
00:21:12.323   17:05:05	-- bdev/nbd_common.sh@10 -- # bdev_list=('spare')
00:21:12.323   17:05:05	-- bdev/nbd_common.sh@10 -- # local bdev_list
00:21:12.323   17:05:05	-- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0')
00:21:12.323   17:05:05	-- bdev/nbd_common.sh@11 -- # local nbd_list
00:21:12.323   17:05:05	-- bdev/nbd_common.sh@12 -- # local i
00:21:12.323   17:05:05	-- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:21:12.323   17:05:05	-- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:21:12.323   17:05:05	-- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0
00:21:12.582  /dev/nbd0
00:21:12.582    17:05:05	-- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:21:12.582   17:05:05	-- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:21:12.582   17:05:05	-- common/autotest_common.sh@866 -- # local nbd_name=nbd0
00:21:12.582   17:05:05	-- common/autotest_common.sh@867 -- # local i
00:21:12.582   17:05:05	-- common/autotest_common.sh@869 -- # (( i = 1 ))
00:21:12.582   17:05:05	-- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:21:12.582   17:05:05	-- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions
00:21:12.582   17:05:05	-- common/autotest_common.sh@871 -- # break
00:21:12.582   17:05:05	-- common/autotest_common.sh@882 -- # (( i = 1 ))
00:21:12.582   17:05:05	-- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:21:12.582   17:05:05	-- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:21:12.582  1+0 records in
00:21:12.582  1+0 records out
00:21:12.582  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000503851 s, 8.1 MB/s
00:21:12.582    17:05:05	-- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:21:12.582   17:05:05	-- common/autotest_common.sh@884 -- # size=4096
00:21:12.582   17:05:05	-- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:21:12.582   17:05:05	-- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:21:12.582   17:05:05	-- common/autotest_common.sh@887 -- # return 0
00:21:12.582   17:05:05	-- bdev/nbd_common.sh@14 -- # (( i++ ))
00:21:12.582   17:05:05	-- bdev/nbd_common.sh@14 -- # (( i < 1 ))
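
The waitfornbd helper traced above retries until the kernel lists the device in /proc/partitions, then proves it is readable with a single direct-I/O read. A minimal standalone sketch under those assumptions (hypothetical; it drops the test-file size bookkeeping the original does):

  waitfornbd() {
    local nbd_name=$1 i
    for ((i = 1; i <= 20; i++)); do
      grep -q -w "$nbd_name" /proc/partitions && break
      sleep 0.1
    done
    # confirm the device actually serves reads before using it
    dd if=/dev/"$nbd_name" of=/dev/null bs=4096 count=1 iflag=direct
  }
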
00:21:12.582   17:05:05	-- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}"
00:21:12.582   17:05:05	-- bdev/bdev_raid.sh@677 -- # '[' -z '' ']'
00:21:12.582   17:05:05	-- bdev/bdev_raid.sh@678 -- # continue
00:21:12.582   17:05:05	-- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}"
00:21:12.582   17:05:05	-- bdev/bdev_raid.sh@677 -- # '[' -z BaseBdev3 ']'
00:21:12.582   17:05:05	-- bdev/bdev_raid.sh@680 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev3 /dev/nbd1
00:21:12.582   17:05:05	-- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock
00:21:12.582   17:05:05	-- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3')
00:21:12.582   17:05:05	-- bdev/nbd_common.sh@10 -- # local bdev_list
00:21:12.582   17:05:05	-- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1')
00:21:12.582   17:05:05	-- bdev/nbd_common.sh@11 -- # local nbd_list
00:21:12.582   17:05:05	-- bdev/nbd_common.sh@12 -- # local i
00:21:12.582   17:05:05	-- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:21:12.582   17:05:05	-- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:21:12.582   17:05:05	-- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev3 /dev/nbd1
00:21:12.841  /dev/nbd1
00:21:13.099    17:05:05	-- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:21:13.099   17:05:05	-- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:21:13.099   17:05:05	-- common/autotest_common.sh@866 -- # local nbd_name=nbd1
00:21:13.099   17:05:05	-- common/autotest_common.sh@867 -- # local i
00:21:13.099   17:05:05	-- common/autotest_common.sh@869 -- # (( i = 1 ))
00:21:13.099   17:05:05	-- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:21:13.099   17:05:05	-- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions
00:21:13.099   17:05:05	-- common/autotest_common.sh@871 -- # break
00:21:13.099   17:05:05	-- common/autotest_common.sh@882 -- # (( i = 1 ))
00:21:13.099   17:05:05	-- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:21:13.099   17:05:05	-- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:21:13.099  1+0 records in
00:21:13.099  1+0 records out
00:21:13.099  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000512776 s, 8.0 MB/s
00:21:13.099    17:05:05	-- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:21:13.099   17:05:05	-- common/autotest_common.sh@884 -- # size=4096
00:21:13.099   17:05:05	-- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:21:13.099   17:05:05	-- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:21:13.099   17:05:05	-- common/autotest_common.sh@887 -- # return 0
00:21:13.099   17:05:05	-- bdev/nbd_common.sh@14 -- # (( i++ ))
00:21:13.099   17:05:05	-- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:21:13.099   17:05:05	-- bdev/bdev_raid.sh@681 -- # cmp -i 0 /dev/nbd0 /dev/nbd1
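
This cmp is the actual data-integrity assertion: the rebuilt spare (exported at /dev/nbd0) must match each surviving raid1 mirror (exported in turn at /dev/nbd1) byte for byte. The -i 0 starts the comparison at offset 0, and any mismatch makes cmp exit non-zero and fail the test:

  cmp -i 0 /dev/nbd0 /dev/nbd1 && echo 'mirrors match'
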
00:21:13.099   17:05:05	-- bdev/bdev_raid.sh@682 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1
00:21:13.100   17:05:05	-- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock
00:21:13.100   17:05:05	-- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1')
00:21:13.100   17:05:05	-- bdev/nbd_common.sh@50 -- # local nbd_list
00:21:13.100   17:05:05	-- bdev/nbd_common.sh@51 -- # local i
00:21:13.100   17:05:05	-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:21:13.100   17:05:05	-- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1
00:21:13.358    17:05:06	-- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:21:13.358   17:05:06	-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:21:13.358   17:05:06	-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:21:13.358   17:05:06	-- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:21:13.358   17:05:06	-- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:21:13.358   17:05:06	-- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:21:13.358   17:05:06	-- bdev/nbd_common.sh@41 -- # break
00:21:13.358   17:05:06	-- bdev/nbd_common.sh@45 -- # return 0
00:21:13.358   17:05:06	-- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}"
00:21:13.358   17:05:06	-- bdev/bdev_raid.sh@677 -- # '[' -z BaseBdev4 ']'
00:21:13.358   17:05:06	-- bdev/bdev_raid.sh@680 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev4 /dev/nbd1
00:21:13.358   17:05:06	-- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock
00:21:13.358   17:05:06	-- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4')
00:21:13.358   17:05:06	-- bdev/nbd_common.sh@10 -- # local bdev_list
00:21:13.358   17:05:06	-- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1')
00:21:13.358   17:05:06	-- bdev/nbd_common.sh@11 -- # local nbd_list
00:21:13.358   17:05:06	-- bdev/nbd_common.sh@12 -- # local i
00:21:13.358   17:05:06	-- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:21:13.358   17:05:06	-- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:21:13.358   17:05:06	-- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev4 /dev/nbd1
00:21:13.617  /dev/nbd1
00:21:13.618    17:05:06	-- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:21:13.618   17:05:06	-- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:21:13.618   17:05:06	-- common/autotest_common.sh@866 -- # local nbd_name=nbd1
00:21:13.618   17:05:06	-- common/autotest_common.sh@867 -- # local i
00:21:13.618   17:05:06	-- common/autotest_common.sh@869 -- # (( i = 1 ))
00:21:13.618   17:05:06	-- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:21:13.618   17:05:06	-- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions
00:21:13.618   17:05:06	-- common/autotest_common.sh@871 -- # break
00:21:13.618   17:05:06	-- common/autotest_common.sh@882 -- # (( i = 1 ))
00:21:13.618   17:05:06	-- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:21:13.618   17:05:06	-- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:21:13.618  1+0 records in
00:21:13.618  1+0 records out
00:21:13.618  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000541178 s, 7.6 MB/s
00:21:13.618    17:05:06	-- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:21:13.618   17:05:06	-- common/autotest_common.sh@884 -- # size=4096
00:21:13.618   17:05:06	-- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:21:13.618   17:05:06	-- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:21:13.618   17:05:06	-- common/autotest_common.sh@887 -- # return 0
00:21:13.618   17:05:06	-- bdev/nbd_common.sh@14 -- # (( i++ ))
00:21:13.618   17:05:06	-- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:21:13.618   17:05:06	-- bdev/bdev_raid.sh@681 -- # cmp -i 0 /dev/nbd0 /dev/nbd1
00:21:13.876   17:05:06	-- bdev/bdev_raid.sh@682 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1
00:21:13.876   17:05:06	-- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock
00:21:13.876   17:05:06	-- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1')
00:21:13.876   17:05:06	-- bdev/nbd_common.sh@50 -- # local nbd_list
00:21:13.876   17:05:06	-- bdev/nbd_common.sh@51 -- # local i
00:21:13.876   17:05:06	-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:21:13.876   17:05:06	-- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1
00:21:14.134    17:05:06	-- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:21:14.134   17:05:06	-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:21:14.134   17:05:06	-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:21:14.134   17:05:06	-- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:21:14.134   17:05:06	-- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:21:14.134   17:05:06	-- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:21:14.134   17:05:06	-- bdev/nbd_common.sh@41 -- # break
00:21:14.134   17:05:06	-- bdev/nbd_common.sh@45 -- # return 0
00:21:14.134   17:05:06	-- bdev/bdev_raid.sh@684 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0
00:21:14.134   17:05:06	-- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock
00:21:14.134   17:05:06	-- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:21:14.134   17:05:06	-- bdev/nbd_common.sh@50 -- # local nbd_list
00:21:14.134   17:05:06	-- bdev/nbd_common.sh@51 -- # local i
00:21:14.134   17:05:06	-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:21:14.134   17:05:06	-- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0
00:21:14.392    17:05:07	-- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:21:14.392   17:05:07	-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:21:14.392   17:05:07	-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:21:14.393   17:05:07	-- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:21:14.393   17:05:07	-- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:21:14.393   17:05:07	-- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:21:14.393   17:05:07	-- bdev/nbd_common.sh@41 -- # break
00:21:14.393   17:05:07	-- bdev/nbd_common.sh@45 -- # return 0
00:21:14.393   17:05:07	-- bdev/bdev_raid.sh@692 -- # '[' false = true ']'
00:21:14.393   17:05:07	-- bdev/bdev_raid.sh@709 -- # killprocess 136156
00:21:14.393   17:05:07	-- common/autotest_common.sh@936 -- # '[' -z 136156 ']'
00:21:14.393   17:05:07	-- common/autotest_common.sh@940 -- # kill -0 136156
00:21:14.393    17:05:07	-- common/autotest_common.sh@941 -- # uname
00:21:14.393   17:05:07	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:21:14.393    17:05:07	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 136156
00:21:14.393   17:05:07	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:21:14.393   17:05:07	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:21:14.393   17:05:07	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 136156'
00:21:14.393  killing process with pid 136156
00:21:14.393   17:05:07	-- common/autotest_common.sh@955 -- # kill 136156
00:21:14.393  Received shutdown signal, test time was about 13.794702 seconds
00:21:14.393  
00:21:14.393                                                                                                  Latency(us)
00:21:14.393  
[2024-11-19T17:05:07.257Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:21:14.393  
[2024-11-19T17:05:07.257Z]  ===================================================================================================================
00:21:14.393  
[2024-11-19T17:05:07.257Z]  Total                       :                  0.00       0.00       0.00     0.00       0.00       0.00       0.00
00:21:14.393   17:05:07	-- common/autotest_common.sh@960 -- # wait 136156
00:21:14.393  [2024-11-19 17:05:07.169239] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:21:14.393  [2024-11-19 17:05:07.218250] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:21:14.650  ************************************
00:21:14.650  END TEST raid_rebuild_test_io
00:21:14.650  ************************************
00:21:14.650   17:05:07	-- bdev/bdev_raid.sh@711 -- # return 0
00:21:14.650  
00:21:14.650  real	0m18.655s
00:21:14.650  user	0m29.249s
00:21:14.650  sys	0m3.005s
00:21:14.650   17:05:07	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:21:14.650   17:05:07	-- common/autotest_common.sh@10 -- # set +x
00:21:14.908   17:05:07	-- bdev/bdev_raid.sh@738 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 4 true true
00:21:14.908   17:05:07	-- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']'
00:21:14.908   17:05:07	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:21:14.908   17:05:07	-- common/autotest_common.sh@10 -- # set +x
00:21:14.908  ************************************
00:21:14.908  START TEST raid_rebuild_test_sb_io
00:21:14.908  ************************************
00:21:14.908   17:05:07	-- common/autotest_common.sh@1114 -- # raid_rebuild_test raid1 4 true true
00:21:14.908   17:05:07	-- bdev/bdev_raid.sh@517 -- # local raid_level=raid1
00:21:14.908   17:05:07	-- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=4
00:21:14.908   17:05:07	-- bdev/bdev_raid.sh@519 -- # local superblock=true
00:21:14.908   17:05:07	-- bdev/bdev_raid.sh@520 -- # local background_io=true
00:21:14.908    17:05:07	-- bdev/bdev_raid.sh@521 -- # (( i = 1 ))
00:21:14.908    17:05:07	-- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs ))
00:21:14.908    17:05:07	-- bdev/bdev_raid.sh@521 -- # echo BaseBdev1
00:21:14.908    17:05:07	-- bdev/bdev_raid.sh@521 -- # (( i++ ))
00:21:14.908    17:05:07	-- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs ))
00:21:14.908    17:05:07	-- bdev/bdev_raid.sh@521 -- # echo BaseBdev2
00:21:14.908    17:05:07	-- bdev/bdev_raid.sh@521 -- # (( i++ ))
00:21:14.908    17:05:07	-- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs ))
00:21:14.908    17:05:07	-- bdev/bdev_raid.sh@521 -- # echo BaseBdev3
00:21:14.908    17:05:07	-- bdev/bdev_raid.sh@521 -- # (( i++ ))
00:21:14.908    17:05:07	-- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs ))
00:21:14.908    17:05:07	-- bdev/bdev_raid.sh@521 -- # echo BaseBdev4
00:21:14.908    17:05:07	-- bdev/bdev_raid.sh@521 -- # (( i++ ))
00:21:14.908    17:05:07	-- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs ))
00:21:14.908   17:05:07	-- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4')
00:21:14.908   17:05:07	-- bdev/bdev_raid.sh@521 -- # local base_bdevs
00:21:14.908   17:05:07	-- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1
00:21:14.908   17:05:07	-- bdev/bdev_raid.sh@523 -- # local strip_size
00:21:14.908   17:05:07	-- bdev/bdev_raid.sh@524 -- # local create_arg
00:21:14.908   17:05:07	-- bdev/bdev_raid.sh@525 -- # local raid_bdev_size
00:21:14.908   17:05:07	-- bdev/bdev_raid.sh@526 -- # local data_offset
00:21:14.908   17:05:07	-- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']'
00:21:14.908   17:05:07	-- bdev/bdev_raid.sh@536 -- # strip_size=0
00:21:14.908   17:05:07	-- bdev/bdev_raid.sh@539 -- # '[' true = true ']'
00:21:14.908   17:05:07	-- bdev/bdev_raid.sh@540 -- # create_arg+=' -s'
00:21:14.908   17:05:07	-- bdev/bdev_raid.sh@544 -- # raid_pid=136668
00:21:14.908   17:05:07	-- bdev/bdev_raid.sh@545 -- # waitforlisten 136668 /var/tmp/spdk-raid.sock
00:21:14.908   17:05:07	-- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid
00:21:14.908   17:05:07	-- common/autotest_common.sh@829 -- # '[' -z 136668 ']'
00:21:14.908   17:05:07	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock
00:21:14.908   17:05:07	-- common/autotest_common.sh@834 -- # local max_retries=100
00:21:14.908   17:05:07	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...'
00:21:14.908  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...
00:21:14.908   17:05:07	-- common/autotest_common.sh@838 -- # xtrace_disable
00:21:14.908   17:05:07	-- common/autotest_common.sh@10 -- # set +x
00:21:14.908  [2024-11-19 17:05:07.625672] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:21:14.908  [2024-11-19 17:05:07.626707] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid136668 ]
00:21:14.908  I/O size of 3145728 is greater than zero copy threshold (65536).
00:21:14.908  Zero copy mechanism will not be used.
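
bdevperf warns here because the configured I/O size (-o 3M, i.e. 3145728 bytes) exceeds its 64 KiB zero-copy threshold, so buffers will be copied. The key flags of the invocation above, as read from this run (the -z pause is confirmed later when bdevperf.py perform_tests kicks off the I/O):

  # -r /var/tmp/spdk-raid.sock  RPC socket shared with the test script
  # -T raid_bdev1               run the job against the raid bdev
  # -t 60 -w randrw -M 50       60 s of 50/50 random reads and writes
  # -o 3M -q 2                  3 MiB I/Os at queue depth 2
  # -z                          start idle; wait for the perform_tests RPC
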
00:21:15.166  [2024-11-19 17:05:07.785778] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:21:15.166  [2024-11-19 17:05:07.843481] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:21:15.166  [2024-11-19 17:05:07.892060] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:21:16.101   17:05:08	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:21:16.101   17:05:08	-- common/autotest_common.sh@862 -- # return 0
00:21:16.101   17:05:08	-- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}"
00:21:16.101   17:05:08	-- bdev/bdev_raid.sh@549 -- # '[' true = true ']'
00:21:16.101   17:05:08	-- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc
00:21:16.101  BaseBdev1_malloc
00:21:16.101   17:05:08	-- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1
00:21:16.360  [2024-11-19 17:05:09.123762] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc
00:21:16.360  [2024-11-19 17:05:09.124137] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:21:16.360  [2024-11-19 17:05:09.124222] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000005a80
00:21:16.360  [2024-11-19 17:05:09.124348] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:21:16.360  [2024-11-19 17:05:09.127117] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:21:16.360  [2024-11-19 17:05:09.127351] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:21:16.360  BaseBdev1
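
Each base bdev in this test is a 32 MiB malloc bdev with 512-byte blocks (65536 blocks, matching the 63488 + 2048 block accounting seen later) wrapped in a passthru bdev, exactly as the two RPCs above show:

  rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc
  rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1
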
00:21:16.360   17:05:09	-- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}"
00:21:16.360   17:05:09	-- bdev/bdev_raid.sh@549 -- # '[' true = true ']'
00:21:16.360   17:05:09	-- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc
00:21:16.619  BaseBdev2_malloc
00:21:16.619   17:05:09	-- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2
00:21:16.877  [2024-11-19 17:05:09.593216] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc
00:21:16.877  [2024-11-19 17:05:09.593521] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:21:16.877  [2024-11-19 17:05:09.593604] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680
00:21:16.877  [2024-11-19 17:05:09.593771] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:21:16.877  [2024-11-19 17:05:09.596505] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:21:16.877  [2024-11-19 17:05:09.596692] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2
00:21:16.877  BaseBdev2
00:21:16.877   17:05:09	-- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}"
00:21:16.877   17:05:09	-- bdev/bdev_raid.sh@549 -- # '[' true = true ']'
00:21:16.877   17:05:09	-- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc
00:21:17.136  BaseBdev3_malloc
00:21:17.136   17:05:09	-- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3
00:21:17.396  [2024-11-19 17:05:10.083213] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc
00:21:17.396  [2024-11-19 17:05:10.083487] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:21:17.396  [2024-11-19 17:05:10.083569] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
00:21:17.396  [2024-11-19 17:05:10.083704] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:21:17.396  [2024-11-19 17:05:10.086307] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:21:17.396  [2024-11-19 17:05:10.086501] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3
00:21:17.396  BaseBdev3
00:21:17.396   17:05:10	-- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}"
00:21:17.396   17:05:10	-- bdev/bdev_raid.sh@549 -- # '[' true = true ']'
00:21:17.396   17:05:10	-- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc
00:21:17.655  BaseBdev4_malloc
00:21:17.655   17:05:10	-- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4
00:21:17.914  [2024-11-19 17:05:10.572625] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc
00:21:17.914  [2024-11-19 17:05:10.572971] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:21:17.914  [2024-11-19 17:05:10.573051] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80
00:21:17.914  [2024-11-19 17:05:10.573177] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:21:17.914  [2024-11-19 17:05:10.575823] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:21:17.914  [2024-11-19 17:05:10.576051] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4
00:21:17.914  BaseBdev4
00:21:17.914   17:05:10	-- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc
00:21:18.173  spare_malloc
00:21:18.173   17:05:10	-- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000
00:21:18.432  spare_delay
00:21:18.432   17:05:11	-- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare
00:21:18.689  [2024-11-19 17:05:11.350321] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay
00:21:18.689  [2024-11-19 17:05:11.350619] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:21:18.689  [2024-11-19 17:05:11.350714] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080
00:21:18.689  [2024-11-19 17:05:11.350865] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:21:18.689  [2024-11-19 17:05:11.353560] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:21:18.689  [2024-11-19 17:05:11.353757] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare
00:21:18.689  spare
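
The spare is built the same way but with a delay bdev in the stack: bdev_delay_create adds 100000 us (100 ms) of average and tail write latency (-w/-n) and no read latency (-r/-t 0), which throttles the rebuild enough for the progress polls to observe it in flight. The stack, using the RPCs traced above:

  rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc
  rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay \
      -r 0 -t 0 -w 100000 -n 100000
  rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare
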
00:21:18.689   17:05:11	-- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1
00:21:18.947  [2024-11-19 17:05:11.610612] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:21:18.947  [2024-11-19 17:05:11.613166] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:21:18.947  [2024-11-19 17:05:11.613397] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:21:18.947  [2024-11-19 17:05:11.613481] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed
00:21:18.947  [2024-11-19 17:05:11.613822] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009680
00:21:18.947  [2024-11-19 17:05:11.613963] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:21:18.947  [2024-11-19 17:05:11.614168] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600
00:21:18.947  [2024-11-19 17:05:11.614757] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009680
00:21:18.947  [2024-11-19 17:05:11.614894] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009680
00:21:18.947  [2024-11-19 17:05:11.615175] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
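
With the four passthru bdevs claimed, a single RPC assembles the mirrored array; -s requests on-disk superblocks, which is why data_offset is 2048 blocks in this test instead of 0:

  rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 \
      -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1
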
00:21:18.947   17:05:11	-- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4
00:21:18.947   17:05:11	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:21:18.947   17:05:11	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:21:18.947   17:05:11	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:21:18.947   17:05:11	-- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:21:18.947   17:05:11	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:21:18.947   17:05:11	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:21:18.947   17:05:11	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:21:18.947   17:05:11	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:21:18.947   17:05:11	-- bdev/bdev_raid.sh@125 -- # local tmp
00:21:18.947    17:05:11	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:21:18.947    17:05:11	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:21:19.258   17:05:11	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:21:19.258    "name": "raid_bdev1",
00:21:19.258    "uuid": "ddc180ff-3aa3-4936-8325-9ca064df6a82",
00:21:19.258    "strip_size_kb": 0,
00:21:19.258    "state": "online",
00:21:19.258    "raid_level": "raid1",
00:21:19.258    "superblock": true,
00:21:19.258    "num_base_bdevs": 4,
00:21:19.258    "num_base_bdevs_discovered": 4,
00:21:19.258    "num_base_bdevs_operational": 4,
00:21:19.258    "base_bdevs_list": [
00:21:19.258      {
00:21:19.258        "name": "BaseBdev1",
00:21:19.258        "uuid": "04c4d07b-31b1-538a-9969-4e7087eebb52",
00:21:19.258        "is_configured": true,
00:21:19.258        "data_offset": 2048,
00:21:19.258        "data_size": 63488
00:21:19.258      },
00:21:19.258      {
00:21:19.258        "name": "BaseBdev2",
00:21:19.258        "uuid": "7e8f8911-a94e-5273-b3a7-4b3eef7799e0",
00:21:19.258        "is_configured": true,
00:21:19.258        "data_offset": 2048,
00:21:19.258        "data_size": 63488
00:21:19.258      },
00:21:19.258      {
00:21:19.258        "name": "BaseBdev3",
00:21:19.258        "uuid": "dce01726-b8a3-5dba-8a33-b9ac0dd51c8c",
00:21:19.258        "is_configured": true,
00:21:19.258        "data_offset": 2048,
00:21:19.258        "data_size": 63488
00:21:19.258      },
00:21:19.258      {
00:21:19.258        "name": "BaseBdev4",
00:21:19.258        "uuid": "9c034671-7747-5fa3-be8a-bac4ebc674da",
00:21:19.258        "is_configured": true,
00:21:19.258        "data_offset": 2048,
00:21:19.258        "data_size": 63488
00:21:19.258      }
00:21:19.258    ]
00:21:19.258  }'
00:21:19.258   17:05:11	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:21:19.258   17:05:11	-- common/autotest_common.sh@10 -- # set +x
00:21:19.839    17:05:12	-- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1
00:21:19.839    17:05:12	-- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks'
00:21:19.839  [2024-11-19 17:05:12.671640] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:21:20.098   17:05:12	-- bdev/bdev_raid.sh@567 -- # raid_bdev_size=63488
00:21:20.098    17:05:12	-- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:21:20.098    17:05:12	-- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset'
00:21:20.357   17:05:12	-- bdev/bdev_raid.sh@570 -- # data_offset=2048
00:21:20.357   17:05:12	-- bdev/bdev_raid.sh@572 -- # '[' true = true ']'
00:21:20.358   17:05:12	-- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1
00:21:20.358   17:05:12	-- bdev/bdev_raid.sh@574 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests
00:21:20.358  [2024-11-19 17:05:13.085627] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000026d0
00:21:20.358  I/O size of 3145728 is greater than zero copy threshold (65536).
00:21:20.358  Zero copy mechanism will not be used.
00:21:20.358  Running I/O for 60 seconds...
00:21:20.358  [2024-11-19 17:05:13.201805] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:21:20.617  [2024-11-19 17:05:13.215561] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d0000026d0
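
With bdevperf's 60-second random read/write job running, the test hot-removes BaseBdev1 out from under the array and checks that raid_bdev1 stays online with 3 of 4 base bdevs discovered, as the JSON below confirms. The removal and a quick status probe, using the same RPCs:

  rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1
  rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all |
      jq -r '.[] | select(.name == "raid_bdev1") | .num_base_bdevs_discovered'   # expect 3
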
00:21:20.617   17:05:13	-- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3
00:21:20.617   17:05:13	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:21:20.617   17:05:13	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:21:20.617   17:05:13	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:21:20.617   17:05:13	-- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:21:20.617   17:05:13	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:21:20.617   17:05:13	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:21:20.617   17:05:13	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:21:20.617   17:05:13	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:21:20.617   17:05:13	-- bdev/bdev_raid.sh@125 -- # local tmp
00:21:20.617    17:05:13	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:21:20.617    17:05:13	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:21:20.875   17:05:13	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:21:20.875    "name": "raid_bdev1",
00:21:20.875    "uuid": "ddc180ff-3aa3-4936-8325-9ca064df6a82",
00:21:20.875    "strip_size_kb": 0,
00:21:20.875    "state": "online",
00:21:20.875    "raid_level": "raid1",
00:21:20.875    "superblock": true,
00:21:20.875    "num_base_bdevs": 4,
00:21:20.875    "num_base_bdevs_discovered": 3,
00:21:20.875    "num_base_bdevs_operational": 3,
00:21:20.875    "base_bdevs_list": [
00:21:20.875      {
00:21:20.875        "name": null,
00:21:20.875        "uuid": "00000000-0000-0000-0000-000000000000",
00:21:20.875        "is_configured": false,
00:21:20.875        "data_offset": 2048,
00:21:20.875        "data_size": 63488
00:21:20.875      },
00:21:20.875      {
00:21:20.875        "name": "BaseBdev2",
00:21:20.875        "uuid": "7e8f8911-a94e-5273-b3a7-4b3eef7799e0",
00:21:20.875        "is_configured": true,
00:21:20.875        "data_offset": 2048,
00:21:20.875        "data_size": 63488
00:21:20.875      },
00:21:20.875      {
00:21:20.875        "name": "BaseBdev3",
00:21:20.875        "uuid": "dce01726-b8a3-5dba-8a33-b9ac0dd51c8c",
00:21:20.875        "is_configured": true,
00:21:20.875        "data_offset": 2048,
00:21:20.875        "data_size": 63488
00:21:20.875      },
00:21:20.875      {
00:21:20.875        "name": "BaseBdev4",
00:21:20.875        "uuid": "9c034671-7747-5fa3-be8a-bac4ebc674da",
00:21:20.875        "is_configured": true,
00:21:20.875        "data_offset": 2048,
00:21:20.875        "data_size": 63488
00:21:20.875      }
00:21:20.875    ]
00:21:20.875  }'
00:21:20.875   17:05:13	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:21:20.875   17:05:13	-- common/autotest_common.sh@10 -- # set +x
00:21:21.442   17:05:14	-- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare
00:21:21.701  [2024-11-19 17:05:14.405039] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare
00:21:21.701  [2024-11-19 17:05:14.405338] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:21:21.701   17:05:14	-- bdev/bdev_raid.sh@598 -- # sleep 1
00:21:21.702  [2024-11-19 17:05:14.458545] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000027a0
00:21:21.702  [2024-11-19 17:05:14.461283] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
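
Attaching the delayed spare to the degraded array starts the rebuild automatically, as the notice above shows; one RPC is enough:

  rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare
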
00:21:21.961  [2024-11-19 17:05:14.571221] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144
00:21:21.961  [2024-11-19 17:05:14.571954] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144
00:21:21.961  [2024-11-19 17:05:14.783968] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144
00:21:21.961  [2024-11-19 17:05:14.784451] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144
00:21:22.221  [2024-11-19 17:05:15.033574] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288
00:21:22.789  [2024-11-19 17:05:15.386142] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432
00:21:22.789   17:05:15	-- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:21:22.789   17:05:15	-- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:21:22.789   17:05:15	-- bdev/bdev_raid.sh@184 -- # local process_type=rebuild
00:21:22.789   17:05:15	-- bdev/bdev_raid.sh@185 -- # local target=spare
00:21:22.789   17:05:15	-- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:21:22.789    17:05:15	-- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:21:22.789    17:05:15	-- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:21:22.789  [2024-11-19 17:05:15.496722] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432
00:21:22.789  [2024-11-19 17:05:15.497627] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432
00:21:23.049   17:05:15	-- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:21:23.049    "name": "raid_bdev1",
00:21:23.049    "uuid": "ddc180ff-3aa3-4936-8325-9ca064df6a82",
00:21:23.049    "strip_size_kb": 0,
00:21:23.049    "state": "online",
00:21:23.049    "raid_level": "raid1",
00:21:23.049    "superblock": true,
00:21:23.049    "num_base_bdevs": 4,
00:21:23.049    "num_base_bdevs_discovered": 4,
00:21:23.049    "num_base_bdevs_operational": 4,
00:21:23.049    "process": {
00:21:23.049      "type": "rebuild",
00:21:23.049      "target": "spare",
00:21:23.049      "progress": {
00:21:23.049        "blocks": 16384,
00:21:23.049        "percent": 25
00:21:23.049      }
00:21:23.049    },
00:21:23.049    "base_bdevs_list": [
00:21:23.049      {
00:21:23.049        "name": "spare",
00:21:23.049        "uuid": "1c44c1c9-2f5c-5dc7-9c81-d80a7c535271",
00:21:23.049        "is_configured": true,
00:21:23.049        "data_offset": 2048,
00:21:23.049        "data_size": 63488
00:21:23.049      },
00:21:23.049      {
00:21:23.049        "name": "BaseBdev2",
00:21:23.049        "uuid": "7e8f8911-a94e-5273-b3a7-4b3eef7799e0",
00:21:23.049        "is_configured": true,
00:21:23.049        "data_offset": 2048,
00:21:23.049        "data_size": 63488
00:21:23.049      },
00:21:23.049      {
00:21:23.049        "name": "BaseBdev3",
00:21:23.049        "uuid": "dce01726-b8a3-5dba-8a33-b9ac0dd51c8c",
00:21:23.049        "is_configured": true,
00:21:23.049        "data_offset": 2048,
00:21:23.049        "data_size": 63488
00:21:23.049      },
00:21:23.049      {
00:21:23.049        "name": "BaseBdev4",
00:21:23.049        "uuid": "9c034671-7747-5fa3-be8a-bac4ebc674da",
00:21:23.049        "is_configured": true,
00:21:23.049        "data_offset": 2048,
00:21:23.049        "data_size": 63488
00:21:23.049      }
00:21:23.049    ]
00:21:23.049  }'
00:21:23.049    17:05:15	-- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:21:23.049   17:05:15	-- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:21:23.049    17:05:15	-- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:21:23.049   17:05:15	-- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]]
00:21:23.049   17:05:15	-- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare
00:21:23.307  [2024-11-19 17:05:16.035168] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:21:23.307  [2024-11-19 17:05:16.057157] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device
00:21:23.307  [2024-11-19 17:05:16.067988] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:21:23.307  [2024-11-19 17:05:16.096469] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d0000026d0
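
Here the test removes the rebuild target itself mid-rebuild; the process is cancelled (the 'No such device' warning above) and the array falls back to a degraded 3-of-4 state rather than going offline. The same removal RPC applies:

  rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare
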
00:21:23.307   17:05:16	-- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3
00:21:23.307   17:05:16	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:21:23.307   17:05:16	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:21:23.307   17:05:16	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:21:23.307   17:05:16	-- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:21:23.307   17:05:16	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:21:23.307   17:05:16	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:21:23.307   17:05:16	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:21:23.307   17:05:16	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:21:23.307   17:05:16	-- bdev/bdev_raid.sh@125 -- # local tmp
00:21:23.307    17:05:16	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:21:23.307    17:05:16	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:21:23.566   17:05:16	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:21:23.566    "name": "raid_bdev1",
00:21:23.566    "uuid": "ddc180ff-3aa3-4936-8325-9ca064df6a82",
00:21:23.566    "strip_size_kb": 0,
00:21:23.566    "state": "online",
00:21:23.566    "raid_level": "raid1",
00:21:23.566    "superblock": true,
00:21:23.566    "num_base_bdevs": 4,
00:21:23.566    "num_base_bdevs_discovered": 3,
00:21:23.566    "num_base_bdevs_operational": 3,
00:21:23.566    "base_bdevs_list": [
00:21:23.566      {
00:21:23.566        "name": null,
00:21:23.566        "uuid": "00000000-0000-0000-0000-000000000000",
00:21:23.566        "is_configured": false,
00:21:23.566        "data_offset": 2048,
00:21:23.566        "data_size": 63488
00:21:23.566      },
00:21:23.566      {
00:21:23.566        "name": "BaseBdev2",
00:21:23.566        "uuid": "7e8f8911-a94e-5273-b3a7-4b3eef7799e0",
00:21:23.566        "is_configured": true,
00:21:23.566        "data_offset": 2048,
00:21:23.566        "data_size": 63488
00:21:23.566      },
00:21:23.566      {
00:21:23.566        "name": "BaseBdev3",
00:21:23.566        "uuid": "dce01726-b8a3-5dba-8a33-b9ac0dd51c8c",
00:21:23.566        "is_configured": true,
00:21:23.566        "data_offset": 2048,
00:21:23.566        "data_size": 63488
00:21:23.566      },
00:21:23.566      {
00:21:23.566        "name": "BaseBdev4",
00:21:23.566        "uuid": "9c034671-7747-5fa3-be8a-bac4ebc674da",
00:21:23.566        "is_configured": true,
00:21:23.566        "data_offset": 2048,
00:21:23.566        "data_size": 63488
00:21:23.566      }
00:21:23.566    ]
00:21:23.566  }'
00:21:23.566   17:05:16	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:21:23.566   17:05:16	-- common/autotest_common.sh@10 -- # set +x
00:21:24.502   17:05:17	-- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none
00:21:24.502   17:05:17	-- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:21:24.502   17:05:17	-- bdev/bdev_raid.sh@184 -- # local process_type=none
00:21:24.502   17:05:17	-- bdev/bdev_raid.sh@185 -- # local target=none
00:21:24.502   17:05:17	-- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:21:24.502    17:05:17	-- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:21:24.502    17:05:17	-- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:21:24.502   17:05:17	-- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:21:24.502    "name": "raid_bdev1",
00:21:24.502    "uuid": "ddc180ff-3aa3-4936-8325-9ca064df6a82",
00:21:24.502    "strip_size_kb": 0,
00:21:24.502    "state": "online",
00:21:24.502    "raid_level": "raid1",
00:21:24.502    "superblock": true,
00:21:24.502    "num_base_bdevs": 4,
00:21:24.502    "num_base_bdevs_discovered": 3,
00:21:24.502    "num_base_bdevs_operational": 3,
00:21:24.502    "base_bdevs_list": [
00:21:24.502      {
00:21:24.502        "name": null,
00:21:24.502        "uuid": "00000000-0000-0000-0000-000000000000",
00:21:24.502        "is_configured": false,
00:21:24.502        "data_offset": 2048,
00:21:24.502        "data_size": 63488
00:21:24.502      },
00:21:24.502      {
00:21:24.502        "name": "BaseBdev2",
00:21:24.502        "uuid": "7e8f8911-a94e-5273-b3a7-4b3eef7799e0",
00:21:24.502        "is_configured": true,
00:21:24.502        "data_offset": 2048,
00:21:24.502        "data_size": 63488
00:21:24.502      },
00:21:24.502      {
00:21:24.502        "name": "BaseBdev3",
00:21:24.502        "uuid": "dce01726-b8a3-5dba-8a33-b9ac0dd51c8c",
00:21:24.502        "is_configured": true,
00:21:24.502        "data_offset": 2048,
00:21:24.502        "data_size": 63488
00:21:24.502      },
00:21:24.502      {
00:21:24.502        "name": "BaseBdev4",
00:21:24.502        "uuid": "9c034671-7747-5fa3-be8a-bac4ebc674da",
00:21:24.502        "is_configured": true,
00:21:24.502        "data_offset": 2048,
00:21:24.502        "data_size": 63488
00:21:24.502      }
00:21:24.502    ]
00:21:24.502  }'
00:21:24.502    17:05:17	-- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:21:24.502   17:05:17	-- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]]
00:21:24.502    17:05:17	-- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:21:24.762   17:05:17	-- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]]
00:21:24.762   17:05:17	-- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare
00:21:25.021  [2024-11-19 17:05:17.638154] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare
00:21:25.021  [2024-11-19 17:05:17.638480] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:21:25.021  [2024-11-19 17:05:17.686849] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002940
00:21:25.021  [2024-11-19 17:05:17.689420] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:21:25.021   17:05:17	-- bdev/bdev_raid.sh@614 -- # sleep 1
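Re-attaching the spare with bdev_raid_add_base_bdev starts a fresh rebuild rather than resuming the aborted one: the split debug lines that follow begin again at process_offset 2048, where the earlier pass had already reached 16384 before the removal. The sleep 1 gives the new process time to make progress before it is verified.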
00:21:25.021  [2024-11-19 17:05:17.800471] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144
00:21:25.021  [2024-11-19 17:05:17.801256] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144
00:21:25.280  [2024-11-19 17:05:17.931954] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144
00:21:25.539  [2024-11-19 17:05:18.336173] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288
00:21:25.799  [2024-11-19 17:05:18.575175] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432
00:21:25.799  [2024-11-19 17:05:18.576660] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432
00:21:26.058   17:05:18	-- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:21:26.058   17:05:18	-- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:21:26.058   17:05:18	-- bdev/bdev_raid.sh@184 -- # local process_type=rebuild
00:21:26.058   17:05:18	-- bdev/bdev_raid.sh@185 -- # local target=spare
00:21:26.058   17:05:18	-- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:21:26.058    17:05:18	-- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:21:26.058    17:05:18	-- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:21:26.058  [2024-11-19 17:05:18.788168] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432
00:21:26.058  [2024-11-19 17:05:18.788728] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432
00:21:26.316   17:05:18	-- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:21:26.316    "name": "raid_bdev1",
00:21:26.316    "uuid": "ddc180ff-3aa3-4936-8325-9ca064df6a82",
00:21:26.316    "strip_size_kb": 0,
00:21:26.316    "state": "online",
00:21:26.316    "raid_level": "raid1",
00:21:26.316    "superblock": true,
00:21:26.316    "num_base_bdevs": 4,
00:21:26.316    "num_base_bdevs_discovered": 4,
00:21:26.316    "num_base_bdevs_operational": 4,
00:21:26.316    "process": {
00:21:26.316      "type": "rebuild",
00:21:26.316      "target": "spare",
00:21:26.316      "progress": {
00:21:26.316        "blocks": 16384,
00:21:26.316        "percent": 25
00:21:26.316      }
00:21:26.316    },
00:21:26.316    "base_bdevs_list": [
00:21:26.316      {
00:21:26.316        "name": "spare",
00:21:26.316        "uuid": "1c44c1c9-2f5c-5dc7-9c81-d80a7c535271",
00:21:26.316        "is_configured": true,
00:21:26.316        "data_offset": 2048,
00:21:26.316        "data_size": 63488
00:21:26.316      },
00:21:26.316      {
00:21:26.316        "name": "BaseBdev2",
00:21:26.316        "uuid": "7e8f8911-a94e-5273-b3a7-4b3eef7799e0",
00:21:26.316        "is_configured": true,
00:21:26.316        "data_offset": 2048,
00:21:26.316        "data_size": 63488
00:21:26.316      },
00:21:26.316      {
00:21:26.316        "name": "BaseBdev3",
00:21:26.316        "uuid": "dce01726-b8a3-5dba-8a33-b9ac0dd51c8c",
00:21:26.316        "is_configured": true,
00:21:26.316        "data_offset": 2048,
00:21:26.316        "data_size": 63488
00:21:26.316      },
00:21:26.316      {
00:21:26.317        "name": "BaseBdev4",
00:21:26.317        "uuid": "9c034671-7747-5fa3-be8a-bac4ebc674da",
00:21:26.317        "is_configured": true,
00:21:26.317        "data_offset": 2048,
00:21:26.317        "data_size": 63488
00:21:26.317      }
00:21:26.317    ]
00:21:26.317  }'
00:21:26.317    17:05:18	-- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:21:26.317   17:05:19	-- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:21:26.317    17:05:19	-- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:21:26.317   17:05:19	-- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]]
00:21:26.317   17:05:19	-- bdev/bdev_raid.sh@617 -- # '[' true = true ']'
00:21:26.317   17:05:19	-- bdev/bdev_raid.sh@617 -- # '[' = false ']'
00:21:26.317  /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 617: [: =: unary operator expected
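The "unary operator expected" message is a quoting bug in the test script, not in SPDK itself: at line 617 an empty variable is compared with the single-bracket test, so after expansion the command is '[' = false ']' (visible in the xtrace line above) and [ sees a missing left-hand operand. The comparison evaluates as false and the script carries on. A minimal sketch of the pitfall and the usual fixes:

    flag=""
    # [ $flag = false ]                  # unquoted: expands to [ = false ], the error above
    if [ "$flag" = false ]; then :; fi   # quoted: a valid binary test, simply false
    if [[ $flag == false ]]; then :; fi  # [[ ]] does not word-split, so no quoting needed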
00:21:26.317   17:05:19	-- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=4
00:21:26.317   17:05:19	-- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']'
00:21:26.317   17:05:19	-- bdev/bdev_raid.sh@644 -- # '[' 4 -gt 2 ']'
00:21:26.317   17:05:19	-- bdev/bdev_raid.sh@646 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2
00:21:26.317  [2024-11-19 17:05:19.128720] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576
00:21:26.317  [2024-11-19 17:05:19.129479] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576
00:21:26.575  [2024-11-19 17:05:19.266355] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576
00:21:26.575  [2024-11-19 17:05:19.342484] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:21:26.833  [2024-11-19 17:05:19.495496] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000026d0
00:21:26.833  [2024-11-19 17:05:19.495775] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000002940
00:21:26.833  [2024-11-19 17:05:19.607601] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720
00:21:26.833   17:05:19	-- bdev/bdev_raid.sh@649 -- # base_bdevs[1]=
00:21:26.833   17:05:19	-- bdev/bdev_raid.sh@650 -- # (( num_base_bdevs_operational-- ))
00:21:26.833   17:05:19	-- bdev/bdev_raid.sh@653 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:21:26.834   17:05:19	-- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:21:26.834   17:05:19	-- bdev/bdev_raid.sh@184 -- # local process_type=rebuild
00:21:26.834   17:05:19	-- bdev/bdev_raid.sh@185 -- # local target=spare
00:21:26.834   17:05:19	-- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:21:26.834    17:05:19	-- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:21:26.834    17:05:19	-- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:21:27.155  [2024-11-19 17:05:19.725621] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720
00:21:27.155   17:05:19	-- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:21:27.155    "name": "raid_bdev1",
00:21:27.155    "uuid": "ddc180ff-3aa3-4936-8325-9ca064df6a82",
00:21:27.155    "strip_size_kb": 0,
00:21:27.155    "state": "online",
00:21:27.155    "raid_level": "raid1",
00:21:27.155    "superblock": true,
00:21:27.155    "num_base_bdevs": 4,
00:21:27.155    "num_base_bdevs_discovered": 3,
00:21:27.155    "num_base_bdevs_operational": 3,
00:21:27.155    "process": {
00:21:27.155      "type": "rebuild",
00:21:27.155      "target": "spare",
00:21:27.155      "progress": {
00:21:27.155        "blocks": 28672,
00:21:27.155        "percent": 45
00:21:27.155      }
00:21:27.155    },
00:21:27.155    "base_bdevs_list": [
00:21:27.155      {
00:21:27.155        "name": "spare",
00:21:27.155        "uuid": "1c44c1c9-2f5c-5dc7-9c81-d80a7c535271",
00:21:27.155        "is_configured": true,
00:21:27.155        "data_offset": 2048,
00:21:27.155        "data_size": 63488
00:21:27.155      },
00:21:27.155      {
00:21:27.155        "name": null,
00:21:27.155        "uuid": "00000000-0000-0000-0000-000000000000",
00:21:27.155        "is_configured": false,
00:21:27.155        "data_offset": 2048,
00:21:27.155        "data_size": 63488
00:21:27.155      },
00:21:27.155      {
00:21:27.155        "name": "BaseBdev3",
00:21:27.155        "uuid": "dce01726-b8a3-5dba-8a33-b9ac0dd51c8c",
00:21:27.155        "is_configured": true,
00:21:27.155        "data_offset": 2048,
00:21:27.155        "data_size": 63488
00:21:27.155      },
00:21:27.155      {
00:21:27.155        "name": "BaseBdev4",
00:21:27.155        "uuid": "9c034671-7747-5fa3-be8a-bac4ebc674da",
00:21:27.155        "is_configured": true,
00:21:27.155        "data_offset": 2048,
00:21:27.155        "data_size": 63488
00:21:27.155      }
00:21:27.155    ]
00:21:27.155  }'
00:21:27.155    17:05:19	-- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:21:27.155   17:05:19	-- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:21:27.155    17:05:19	-- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:21:27.155   17:05:19	-- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]]
00:21:27.155   17:05:19	-- bdev/bdev_raid.sh@657 -- # local timeout=520
00:21:27.155   17:05:19	-- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout ))
00:21:27.155   17:05:19	-- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:21:27.155   17:05:19	-- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:21:27.155   17:05:19	-- bdev/bdev_raid.sh@184 -- # local process_type=rebuild
00:21:27.155   17:05:19	-- bdev/bdev_raid.sh@185 -- # local target=spare
00:21:27.155   17:05:19	-- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:21:27.155    17:05:19	-- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:21:27.155    17:05:19	-- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:21:27.156  [2024-11-19 17:05:19.953580] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864
00:21:27.437   17:05:20	-- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:21:27.437    "name": "raid_bdev1",
00:21:27.437    "uuid": "ddc180ff-3aa3-4936-8325-9ca064df6a82",
00:21:27.437    "strip_size_kb": 0,
00:21:27.437    "state": "online",
00:21:27.437    "raid_level": "raid1",
00:21:27.437    "superblock": true,
00:21:27.437    "num_base_bdevs": 4,
00:21:27.437    "num_base_bdevs_discovered": 3,
00:21:27.437    "num_base_bdevs_operational": 3,
00:21:27.437    "process": {
00:21:27.437      "type": "rebuild",
00:21:27.437      "target": "spare",
00:21:27.437      "progress": {
00:21:27.437        "blocks": 34816,
00:21:27.437        "percent": 54
00:21:27.437      }
00:21:27.437    },
00:21:27.437    "base_bdevs_list": [
00:21:27.437      {
00:21:27.437        "name": "spare",
00:21:27.438        "uuid": "1c44c1c9-2f5c-5dc7-9c81-d80a7c535271",
00:21:27.438        "is_configured": true,
00:21:27.438        "data_offset": 2048,
00:21:27.438        "data_size": 63488
00:21:27.438      },
00:21:27.438      {
00:21:27.438        "name": null,
00:21:27.438        "uuid": "00000000-0000-0000-0000-000000000000",
00:21:27.438        "is_configured": false,
00:21:27.438        "data_offset": 2048,
00:21:27.438        "data_size": 63488
00:21:27.438      },
00:21:27.438      {
00:21:27.438        "name": "BaseBdev3",
00:21:27.438        "uuid": "dce01726-b8a3-5dba-8a33-b9ac0dd51c8c",
00:21:27.438        "is_configured": true,
00:21:27.438        "data_offset": 2048,
00:21:27.438        "data_size": 63488
00:21:27.438      },
00:21:27.438      {
00:21:27.438        "name": "BaseBdev4",
00:21:27.438        "uuid": "9c034671-7747-5fa3-be8a-bac4ebc674da",
00:21:27.438        "is_configured": true,
00:21:27.438        "data_offset": 2048,
00:21:27.438        "data_size": 63488
00:21:27.438      }
00:21:27.438    ]
00:21:27.438  }'
00:21:27.438    17:05:20	-- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:21:27.438   17:05:20	-- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:21:27.438    17:05:20	-- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:21:27.438   17:05:20	-- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]]
00:21:27.438   17:05:20	-- bdev/bdev_raid.sh@662 -- # sleep 1
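Lines 657-662 of the script implement a bounded poll: timeout is an absolute value of bash's builtin SECONDS counter (520 in this run), and each iteration re-verifies the rebuild and sleeps one second. A self-contained sketch of the same pattern, with an illustrative 60-second budget:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    deadline=$((SECONDS + 60))    # SECONDS counts seconds since the shell started
    while ((SECONDS < deadline)); do
        type=$("$rpc" -s "$sock" bdev_raid_get_bdevs all |
               jq -r '.[] | select(.name == "raid_bdev1") | .process.type // "none"')
        [[ $type == none ]] && break    # no .process field means the rebuild is done
        sleep 1
    done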
00:21:27.697  [2024-11-19 17:05:20.431648] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008
00:21:28.634   17:05:21	-- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout ))
00:21:28.634   17:05:21	-- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:21:28.634   17:05:21	-- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:21:28.634   17:05:21	-- bdev/bdev_raid.sh@184 -- # local process_type=rebuild
00:21:28.634   17:05:21	-- bdev/bdev_raid.sh@185 -- # local target=spare
00:21:28.634   17:05:21	-- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:21:28.634    17:05:21	-- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:21:28.634    17:05:21	-- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:21:28.634  [2024-11-19 17:05:21.450308] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440
00:21:28.893   17:05:21	-- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:21:28.893    "name": "raid_bdev1",
00:21:28.893    "uuid": "ddc180ff-3aa3-4936-8325-9ca064df6a82",
00:21:28.893    "strip_size_kb": 0,
00:21:28.893    "state": "online",
00:21:28.893    "raid_level": "raid1",
00:21:28.893    "superblock": true,
00:21:28.893    "num_base_bdevs": 4,
00:21:28.893    "num_base_bdevs_discovered": 3,
00:21:28.893    "num_base_bdevs_operational": 3,
00:21:28.893    "process": {
00:21:28.893      "type": "rebuild",
00:21:28.893      "target": "spare",
00:21:28.893      "progress": {
00:21:28.893        "blocks": 57344,
00:21:28.893        "percent": 90
00:21:28.893      }
00:21:28.893    },
00:21:28.893    "base_bdevs_list": [
00:21:28.893      {
00:21:28.893        "name": "spare",
00:21:28.893        "uuid": "1c44c1c9-2f5c-5dc7-9c81-d80a7c535271",
00:21:28.893        "is_configured": true,
00:21:28.893        "data_offset": 2048,
00:21:28.893        "data_size": 63488
00:21:28.893      },
00:21:28.893      {
00:21:28.893        "name": null,
00:21:28.893        "uuid": "00000000-0000-0000-0000-000000000000",
00:21:28.893        "is_configured": false,
00:21:28.893        "data_offset": 2048,
00:21:28.893        "data_size": 63488
00:21:28.893      },
00:21:28.893      {
00:21:28.893        "name": "BaseBdev3",
00:21:28.893        "uuid": "dce01726-b8a3-5dba-8a33-b9ac0dd51c8c",
00:21:28.893        "is_configured": true,
00:21:28.893        "data_offset": 2048,
00:21:28.893        "data_size": 63488
00:21:28.893      },
00:21:28.893      {
00:21:28.893        "name": "BaseBdev4",
00:21:28.893        "uuid": "9c034671-7747-5fa3-be8a-bac4ebc674da",
00:21:28.893        "is_configured": true,
00:21:28.893        "data_offset": 2048,
00:21:28.893        "data_size": 63488
00:21:28.893      }
00:21:28.893    ]
00:21:28.893  }'
00:21:28.893    17:05:21	-- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:21:28.893   17:05:21	-- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:21:28.893    17:05:21	-- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:21:28.893   17:05:21	-- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]]
00:21:28.893   17:05:21	-- bdev/bdev_raid.sh@662 -- # sleep 1
00:21:29.152  [2024-11-19 17:05:21.897590] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1
00:21:29.152  [2024-11-19 17:05:22.004246] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1
00:21:29.152  [2024-11-19 17:05:22.007232] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:21:30.089   17:05:22	-- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout ))
00:21:30.089   17:05:22	-- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:21:30.089   17:05:22	-- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:21:30.089   17:05:22	-- bdev/bdev_raid.sh@184 -- # local process_type=rebuild
00:21:30.089   17:05:22	-- bdev/bdev_raid.sh@185 -- # local target=spare
00:21:30.089   17:05:22	-- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:21:30.089    17:05:22	-- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:21:30.089    17:05:22	-- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:21:30.089   17:05:22	-- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:21:30.089    "name": "raid_bdev1",
00:21:30.089    "uuid": "ddc180ff-3aa3-4936-8325-9ca064df6a82",
00:21:30.089    "strip_size_kb": 0,
00:21:30.089    "state": "online",
00:21:30.089    "raid_level": "raid1",
00:21:30.089    "superblock": true,
00:21:30.089    "num_base_bdevs": 4,
00:21:30.089    "num_base_bdevs_discovered": 3,
00:21:30.089    "num_base_bdevs_operational": 3,
00:21:30.089    "base_bdevs_list": [
00:21:30.089      {
00:21:30.089        "name": "spare",
00:21:30.089        "uuid": "1c44c1c9-2f5c-5dc7-9c81-d80a7c535271",
00:21:30.089        "is_configured": true,
00:21:30.089        "data_offset": 2048,
00:21:30.089        "data_size": 63488
00:21:30.089      },
00:21:30.089      {
00:21:30.089        "name": null,
00:21:30.089        "uuid": "00000000-0000-0000-0000-000000000000",
00:21:30.089        "is_configured": false,
00:21:30.089        "data_offset": 2048,
00:21:30.089        "data_size": 63488
00:21:30.089      },
00:21:30.089      {
00:21:30.089        "name": "BaseBdev3",
00:21:30.089        "uuid": "dce01726-b8a3-5dba-8a33-b9ac0dd51c8c",
00:21:30.089        "is_configured": true,
00:21:30.089        "data_offset": 2048,
00:21:30.089        "data_size": 63488
00:21:30.089      },
00:21:30.089      {
00:21:30.089        "name": "BaseBdev4",
00:21:30.089        "uuid": "9c034671-7747-5fa3-be8a-bac4ebc674da",
00:21:30.089        "is_configured": true,
00:21:30.089        "data_offset": 2048,
00:21:30.089        "data_size": 63488
00:21:30.089      }
00:21:30.089    ]
00:21:30.089  }'
00:21:30.089    17:05:22	-- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:21:30.349   17:05:22	-- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]]
00:21:30.349    17:05:22	-- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:21:30.349   17:05:22	-- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]]
00:21:30.349   17:05:22	-- bdev/bdev_raid.sh@660 -- # break
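Completion is detected exactly this way in the query above: once the rebuild finishes (the "Finished rebuild" notice a few lines earlier), the .process object disappears from the RPC output, both jq filters fall through to their "none" default, the rebuild/spare matches fail, and the loop exits via break.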
00:21:30.349   17:05:22	-- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none
00:21:30.349   17:05:22	-- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:21:30.349   17:05:22	-- bdev/bdev_raid.sh@184 -- # local process_type=none
00:21:30.349   17:05:22	-- bdev/bdev_raid.sh@185 -- # local target=none
00:21:30.349   17:05:22	-- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:21:30.349    17:05:22	-- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:21:30.349    17:05:22	-- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:21:30.608   17:05:23	-- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:21:30.608    "name": "raid_bdev1",
00:21:30.608    "uuid": "ddc180ff-3aa3-4936-8325-9ca064df6a82",
00:21:30.608    "strip_size_kb": 0,
00:21:30.608    "state": "online",
00:21:30.608    "raid_level": "raid1",
00:21:30.608    "superblock": true,
00:21:30.608    "num_base_bdevs": 4,
00:21:30.608    "num_base_bdevs_discovered": 3,
00:21:30.608    "num_base_bdevs_operational": 3,
00:21:30.608    "base_bdevs_list": [
00:21:30.608      {
00:21:30.608        "name": "spare",
00:21:30.608        "uuid": "1c44c1c9-2f5c-5dc7-9c81-d80a7c535271",
00:21:30.608        "is_configured": true,
00:21:30.608        "data_offset": 2048,
00:21:30.608        "data_size": 63488
00:21:30.608      },
00:21:30.608      {
00:21:30.608        "name": null,
00:21:30.608        "uuid": "00000000-0000-0000-0000-000000000000",
00:21:30.608        "is_configured": false,
00:21:30.608        "data_offset": 2048,
00:21:30.608        "data_size": 63488
00:21:30.608      },
00:21:30.608      {
00:21:30.608        "name": "BaseBdev3",
00:21:30.608        "uuid": "dce01726-b8a3-5dba-8a33-b9ac0dd51c8c",
00:21:30.608        "is_configured": true,
00:21:30.608        "data_offset": 2048,
00:21:30.608        "data_size": 63488
00:21:30.608      },
00:21:30.608      {
00:21:30.608        "name": "BaseBdev4",
00:21:30.608        "uuid": "9c034671-7747-5fa3-be8a-bac4ebc674da",
00:21:30.608        "is_configured": true,
00:21:30.608        "data_offset": 2048,
00:21:30.608        "data_size": 63488
00:21:30.608      }
00:21:30.608    ]
00:21:30.608  }'
00:21:30.608    17:05:23	-- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:21:30.608   17:05:23	-- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]]
00:21:30.608    17:05:23	-- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:21:30.608   17:05:23	-- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]]
00:21:30.608   17:05:23	-- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3
00:21:30.608   17:05:23	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:21:30.608   17:05:23	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:21:30.608   17:05:23	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:21:30.608   17:05:23	-- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:21:30.608   17:05:23	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:21:30.608   17:05:23	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:21:30.608   17:05:23	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:21:30.608   17:05:23	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:21:30.608   17:05:23	-- bdev/bdev_raid.sh@125 -- # local tmp
00:21:30.608    17:05:23	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:21:30.608    17:05:23	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:21:30.866   17:05:23	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:21:30.866    "name": "raid_bdev1",
00:21:30.866    "uuid": "ddc180ff-3aa3-4936-8325-9ca064df6a82",
00:21:30.866    "strip_size_kb": 0,
00:21:30.866    "state": "online",
00:21:30.866    "raid_level": "raid1",
00:21:30.866    "superblock": true,
00:21:30.866    "num_base_bdevs": 4,
00:21:30.866    "num_base_bdevs_discovered": 3,
00:21:30.866    "num_base_bdevs_operational": 3,
00:21:30.866    "base_bdevs_list": [
00:21:30.866      {
00:21:30.866        "name": "spare",
00:21:30.866        "uuid": "1c44c1c9-2f5c-5dc7-9c81-d80a7c535271",
00:21:30.866        "is_configured": true,
00:21:30.866        "data_offset": 2048,
00:21:30.866        "data_size": 63488
00:21:30.866      },
00:21:30.866      {
00:21:30.866        "name": null,
00:21:30.866        "uuid": "00000000-0000-0000-0000-000000000000",
00:21:30.866        "is_configured": false,
00:21:30.866        "data_offset": 2048,
00:21:30.866        "data_size": 63488
00:21:30.866      },
00:21:30.866      {
00:21:30.866        "name": "BaseBdev3",
00:21:30.866        "uuid": "dce01726-b8a3-5dba-8a33-b9ac0dd51c8c",
00:21:30.866        "is_configured": true,
00:21:30.867        "data_offset": 2048,
00:21:30.867        "data_size": 63488
00:21:30.867      },
00:21:30.867      {
00:21:30.867        "name": "BaseBdev4",
00:21:30.867        "uuid": "9c034671-7747-5fa3-be8a-bac4ebc674da",
00:21:30.867        "is_configured": true,
00:21:30.867        "data_offset": 2048,
00:21:30.867        "data_size": 63488
00:21:30.867      }
00:21:30.867    ]
00:21:30.867  }'
00:21:30.867   17:05:23	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:21:30.867   17:05:23	-- common/autotest_common.sh@10 -- # set +x
00:21:31.435   17:05:24	-- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1
00:21:31.695  [2024-11-19 17:05:24.379378] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:21:31.695  [2024-11-19 17:05:24.379594] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:21:31.695  
00:21:31.695                                                                                                  Latency(us)
00:21:31.695  
[2024-11-19T17:05:24.559Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:21:31.695  
[2024-11-19T17:05:24.559Z]  Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728)
00:21:31.695  	 raid_bdev1          :      11.32     104.52     313.56       0.00     0.00   13498.84     388.14  118838.61
00:21:31.695  
[2024-11-19T17:05:24.559Z]  ===================================================================================================================
00:21:31.695  
[2024-11-19T17:05:24.559Z]  Total                       :                104.52     313.56       0.00     0.00   13498.84     388.14  118838.61
00:21:31.695  [2024-11-19 17:05:24.411783] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:21:31.695  0
00:21:31.695  [2024-11-19 17:05:24.411988] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:21:31.695  [2024-11-19 17:05:24.412227] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:21:31.695  [2024-11-19 17:05:24.412329] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009680 name raid_bdev1, state offline
00:21:31.695    17:05:24	-- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:21:31.695    17:05:24	-- bdev/bdev_raid.sh@671 -- # jq length
00:21:31.955   17:05:24	-- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]]
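Deletion is confirmed by counting entries rather than by name: bdev_raid_get_bdevs all should return an empty JSON array once the only raid bdev is gone. The same check in isolation, with the paths used in this run:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    "$rpc" -s "$sock" bdev_raid_delete raid_bdev1
    [[ $("$rpc" -s "$sock" bdev_raid_get_bdevs all | jq length) == 0 ]]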
00:21:31.955   17:05:24	-- bdev/bdev_raid.sh@673 -- # '[' true = true ']'
00:21:31.955   17:05:24	-- bdev/bdev_raid.sh@675 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0
00:21:31.955   17:05:24	-- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock
00:21:31.955   17:05:24	-- bdev/nbd_common.sh@10 -- # bdev_list=('spare')
00:21:31.955   17:05:24	-- bdev/nbd_common.sh@10 -- # local bdev_list
00:21:31.955   17:05:24	-- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0')
00:21:31.955   17:05:24	-- bdev/nbd_common.sh@11 -- # local nbd_list
00:21:31.955   17:05:24	-- bdev/nbd_common.sh@12 -- # local i
00:21:31.955   17:05:24	-- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:21:31.955   17:05:24	-- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:21:31.955   17:05:24	-- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0
00:21:32.214  /dev/nbd0
00:21:32.214    17:05:24	-- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:21:32.214   17:05:25	-- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:21:32.214   17:05:25	-- common/autotest_common.sh@866 -- # local nbd_name=nbd0
00:21:32.214   17:05:25	-- common/autotest_common.sh@867 -- # local i
00:21:32.214   17:05:25	-- common/autotest_common.sh@869 -- # (( i = 1 ))
00:21:32.214   17:05:25	-- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:21:32.214   17:05:25	-- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions
00:21:32.214   17:05:25	-- common/autotest_common.sh@871 -- # break
00:21:32.214   17:05:25	-- common/autotest_common.sh@882 -- # (( i = 1 ))
00:21:32.214   17:05:25	-- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:21:32.214   17:05:25	-- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:21:32.214  1+0 records in
00:21:32.214  1+0 records out
00:21:32.214  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000623621 s, 6.6 MB/s
00:21:32.214    17:05:25	-- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:21:32.214   17:05:25	-- common/autotest_common.sh@884 -- # size=4096
00:21:32.214   17:05:25	-- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:21:32.214   17:05:25	-- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:21:32.214   17:05:25	-- common/autotest_common.sh@887 -- # return 0
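waitfornbd (from common/autotest_common.sh) treats an nbd device as ready only when the kernel lists it in /proc/partitions and a single direct-I/O read of it succeeds; the dd transcript above is that probe. A reduced sketch of the idea, writing the probe block to /dev/null rather than the suite's scratch file:

    nbd=nbd0
    for _ in $(seq 1 20); do             # retry loop mirroring the i <= 20 counters above
        grep -q -w "$nbd" /proc/partitions && break
        sleep 0.1
    done
    dd if=/dev/"$nbd" of=/dev/null bs=4096 count=1 iflag=direct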
00:21:32.214   17:05:25	-- bdev/nbd_common.sh@14 -- # (( i++ ))
00:21:32.214   17:05:25	-- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:21:32.214   17:05:25	-- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}"
00:21:32.214   17:05:25	-- bdev/bdev_raid.sh@677 -- # '[' -z '' ']'
00:21:32.214   17:05:25	-- bdev/bdev_raid.sh@678 -- # continue
00:21:32.214   17:05:25	-- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}"
00:21:32.214   17:05:25	-- bdev/bdev_raid.sh@677 -- # '[' -z BaseBdev3 ']'
00:21:32.214   17:05:25	-- bdev/bdev_raid.sh@680 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev3 /dev/nbd1
00:21:32.214   17:05:25	-- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock
00:21:32.214   17:05:25	-- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3')
00:21:32.214   17:05:25	-- bdev/nbd_common.sh@10 -- # local bdev_list
00:21:32.214   17:05:25	-- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1')
00:21:32.214   17:05:25	-- bdev/nbd_common.sh@11 -- # local nbd_list
00:21:32.214   17:05:25	-- bdev/nbd_common.sh@12 -- # local i
00:21:32.214   17:05:25	-- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:21:32.214   17:05:25	-- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:21:32.214   17:05:25	-- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev3 /dev/nbd1
00:21:32.473  /dev/nbd1
00:21:32.732    17:05:25	-- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:21:32.732   17:05:25	-- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:21:32.732   17:05:25	-- common/autotest_common.sh@866 -- # local nbd_name=nbd1
00:21:32.732   17:05:25	-- common/autotest_common.sh@867 -- # local i
00:21:32.732   17:05:25	-- common/autotest_common.sh@869 -- # (( i = 1 ))
00:21:32.732   17:05:25	-- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:21:32.732   17:05:25	-- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions
00:21:32.732   17:05:25	-- common/autotest_common.sh@871 -- # break
00:21:32.732   17:05:25	-- common/autotest_common.sh@882 -- # (( i = 1 ))
00:21:32.732   17:05:25	-- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:21:32.732   17:05:25	-- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:21:32.732  1+0 records in
00:21:32.732  1+0 records out
00:21:32.732  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000611835 s, 6.7 MB/s
00:21:32.732    17:05:25	-- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:21:32.732   17:05:25	-- common/autotest_common.sh@884 -- # size=4096
00:21:32.732   17:05:25	-- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:21:32.732   17:05:25	-- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:21:32.732   17:05:25	-- common/autotest_common.sh@887 -- # return 0
00:21:32.732   17:05:25	-- bdev/nbd_common.sh@14 -- # (( i++ ))
00:21:32.732   17:05:25	-- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:21:32.732   17:05:25	-- bdev/bdev_raid.sh@681 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1
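The cmp offset is the superblock skip: every member reports data_offset 2048 blocks and the bdevs use 512-byte blocks (see the "blocklen 512" debug line further down), so user data starts at 2048 * 512 = 1048576 bytes = 1 MiB. Comparing /dev/nbd0 (the rebuilt spare) with /dev/nbd1 (a surviving member) from that offset verifies the rebuild reproduced the data without tripping over per-member metadata.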
00:21:32.732   17:05:25	-- bdev/bdev_raid.sh@682 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1
00:21:32.732   17:05:25	-- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock
00:21:32.732   17:05:25	-- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1')
00:21:32.732   17:05:25	-- bdev/nbd_common.sh@50 -- # local nbd_list
00:21:32.732   17:05:25	-- bdev/nbd_common.sh@51 -- # local i
00:21:32.732   17:05:25	-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:21:32.732   17:05:25	-- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1
00:21:32.992    17:05:25	-- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:21:32.992   17:05:25	-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:21:32.992   17:05:25	-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:21:32.992   17:05:25	-- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:21:32.992   17:05:25	-- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:21:32.992   17:05:25	-- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:21:32.992   17:05:25	-- bdev/nbd_common.sh@41 -- # break
00:21:32.992   17:05:25	-- bdev/nbd_common.sh@45 -- # return 0
00:21:32.992   17:05:25	-- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}"
00:21:32.992   17:05:25	-- bdev/bdev_raid.sh@677 -- # '[' -z BaseBdev4 ']'
00:21:32.992   17:05:25	-- bdev/bdev_raid.sh@680 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev4 /dev/nbd1
00:21:32.992   17:05:25	-- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock
00:21:32.992   17:05:25	-- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4')
00:21:32.992   17:05:25	-- bdev/nbd_common.sh@10 -- # local bdev_list
00:21:32.992   17:05:25	-- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1')
00:21:32.992   17:05:25	-- bdev/nbd_common.sh@11 -- # local nbd_list
00:21:32.992   17:05:25	-- bdev/nbd_common.sh@12 -- # local i
00:21:32.992   17:05:25	-- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:21:32.992   17:05:25	-- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:21:32.992   17:05:25	-- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev4 /dev/nbd1
00:21:33.252  /dev/nbd1
00:21:33.252    17:05:26	-- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:21:33.252   17:05:26	-- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:21:33.252   17:05:26	-- common/autotest_common.sh@866 -- # local nbd_name=nbd1
00:21:33.252   17:05:26	-- common/autotest_common.sh@867 -- # local i
00:21:33.252   17:05:26	-- common/autotest_common.sh@869 -- # (( i = 1 ))
00:21:33.252   17:05:26	-- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:21:33.252   17:05:26	-- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions
00:21:33.252   17:05:26	-- common/autotest_common.sh@871 -- # break
00:21:33.252   17:05:26	-- common/autotest_common.sh@882 -- # (( i = 1 ))
00:21:33.252   17:05:26	-- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:21:33.252   17:05:26	-- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:21:33.252  1+0 records in
00:21:33.252  1+0 records out
00:21:33.252  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000313 s, 13.1 MB/s
00:21:33.252    17:05:26	-- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:21:33.252   17:05:26	-- common/autotest_common.sh@884 -- # size=4096
00:21:33.252   17:05:26	-- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:21:33.252   17:05:26	-- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:21:33.252   17:05:26	-- common/autotest_common.sh@887 -- # return 0
00:21:33.252   17:05:26	-- bdev/nbd_common.sh@14 -- # (( i++ ))
00:21:33.252   17:05:26	-- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:21:33.252   17:05:26	-- bdev/bdev_raid.sh@681 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1
00:21:33.252   17:05:26	-- bdev/bdev_raid.sh@682 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1
00:21:33.252   17:05:26	-- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock
00:21:33.252   17:05:26	-- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1')
00:21:33.252   17:05:26	-- bdev/nbd_common.sh@50 -- # local nbd_list
00:21:33.252   17:05:26	-- bdev/nbd_common.sh@51 -- # local i
00:21:33.252   17:05:26	-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:21:33.252   17:05:26	-- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1
00:21:33.820    17:05:26	-- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:21:33.820   17:05:26	-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:21:33.820   17:05:26	-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:21:33.820   17:05:26	-- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:21:33.820   17:05:26	-- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:21:33.820   17:05:26	-- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:21:33.820   17:05:26	-- bdev/nbd_common.sh@41 -- # break
00:21:33.820   17:05:26	-- bdev/nbd_common.sh@45 -- # return 0
00:21:33.820   17:05:26	-- bdev/bdev_raid.sh@684 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0
00:21:33.820   17:05:26	-- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock
00:21:33.820   17:05:26	-- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:21:33.820   17:05:26	-- bdev/nbd_common.sh@50 -- # local nbd_list
00:21:33.820   17:05:26	-- bdev/nbd_common.sh@51 -- # local i
00:21:33.820   17:05:26	-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:21:33.820   17:05:26	-- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0
00:21:33.820    17:05:26	-- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:21:33.820   17:05:26	-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:21:33.820   17:05:26	-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:21:33.820   17:05:26	-- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:21:33.820   17:05:26	-- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:21:33.820   17:05:26	-- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:21:33.820   17:05:26	-- bdev/nbd_common.sh@41 -- # break
00:21:33.820   17:05:26	-- bdev/nbd_common.sh@45 -- # return 0
00:21:33.820   17:05:26	-- bdev/bdev_raid.sh@692 -- # '[' true = true ']'
00:21:33.820   17:05:26	-- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}"
00:21:33.820   17:05:26	-- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev1 ']'
00:21:33.820   17:05:26	-- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1
00:21:34.079   17:05:26	-- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1
00:21:34.338  [2024-11-19 17:05:27.011983] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc
00:21:34.338  [2024-11-19 17:05:27.012091] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:21:34.338  [2024-11-19 17:05:27.012133] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880
00:21:34.338  [2024-11-19 17:05:27.012155] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:21:34.338  [2024-11-19 17:05:27.014844] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:21:34.338  [2024-11-19 17:05:27.014938] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:21:34.338  [2024-11-19 17:05:27.015039] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev1
00:21:34.338  [2024-11-19 17:05:27.015108] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:21:34.338  BaseBdev1
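The delete/create pair recycles the member's passthru vbdev to simulate a disk disappearing and returning. On re-creation, SPDK's examine path reads the raid superblock off the malloc backing device and re-claims the bdev automatically; no explicit bdev_raid_add_base_bdev is needed. The cycle for one member, as issued in this run:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    "$rpc" -s "$sock" bdev_passthru_delete BaseBdev1
    "$rpc" -s "$sock" bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1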
00:21:34.338   17:05:27	-- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}"
00:21:34.338   17:05:27	-- bdev/bdev_raid.sh@695 -- # '[' -z '' ']'
00:21:34.338   17:05:27	-- bdev/bdev_raid.sh@696 -- # continue
00:21:34.338   17:05:27	-- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}"
00:21:34.338   17:05:27	-- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev3 ']'
00:21:34.338   17:05:27	-- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev3
00:21:34.598   17:05:27	-- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3
00:21:34.857  [2024-11-19 17:05:27.500125] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc
00:21:34.857  [2024-11-19 17:05:27.500227] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:21:34.857  [2024-11-19 17:05:27.500269] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180
00:21:34.857  [2024-11-19 17:05:27.500294] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:21:34.857  [2024-11-19 17:05:27.500749] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:21:34.857  [2024-11-19 17:05:27.500825] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3
00:21:34.857  [2024-11-19 17:05:27.500911] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev3
00:21:34.857  [2024-11-19 17:05:27.500924] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev3 (4) greater than existing raid bdev raid_bdev1 (1)
00:21:34.857  [2024-11-19 17:05:27.500932] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:21:34.857  [2024-11-19 17:05:27.500966] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ae80 name raid_bdev1, state configuring
00:21:34.857  [2024-11-19 17:05:27.501027] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:21:34.857  BaseBdev3
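BaseBdev3's superblock carries sequence number 4 against the 1 recorded in the raid bdev assembled so far, so examine treats the in-progress raid_bdev1 as stale: the debug lines above show it deleted while still in the configuring state, after which BaseBdev3 is claimed into a fresh assembly built around the newer metadata.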
00:21:34.857   17:05:27	-- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}"
00:21:34.857   17:05:27	-- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev4 ']'
00:21:34.857   17:05:27	-- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev4
00:21:35.115   17:05:27	-- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4
00:21:35.409  [2024-11-19 17:05:28.072361] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc
00:21:35.409  [2024-11-19 17:05:28.072460] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:21:35.409  [2024-11-19 17:05:28.072503] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780
00:21:35.409  [2024-11-19 17:05:28.072531] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:21:35.409  [2024-11-19 17:05:28.072978] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:21:35.409  [2024-11-19 17:05:28.073036] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4
00:21:35.409  [2024-11-19 17:05:28.073121] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev4
00:21:35.409  [2024-11-19 17:05:28.073161] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed
00:21:35.409  BaseBdev4
00:21:35.409   17:05:28	-- bdev/bdev_raid.sh@701 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare
00:21:35.668   17:05:28	-- bdev/bdev_raid.sh@702 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare
00:21:35.927  [2024-11-19 17:05:28.656565] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay
00:21:35.927  [2024-11-19 17:05:28.656688] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:21:35.927  [2024-11-19 17:05:28.656724] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80
00:21:35.927  [2024-11-19 17:05:28.656753] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:21:35.927  [2024-11-19 17:05:28.657217] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:21:35.927  [2024-11-19 17:05:28.657287] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare
00:21:35.927  [2024-11-19 17:05:28.657385] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev spare
00:21:35.927  [2024-11-19 17:05:28.657424] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:21:35.927  spare
00:21:35.927   17:05:28	-- bdev/bdev_raid.sh@704 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3
00:21:35.927   17:05:28	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:21:35.927   17:05:28	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:21:35.927   17:05:28	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:21:35.927   17:05:28	-- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:21:35.927   17:05:28	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:21:35.927   17:05:28	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:21:35.927   17:05:28	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:21:35.927   17:05:28	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:21:35.927   17:05:28	-- bdev/bdev_raid.sh@125 -- # local tmp
00:21:35.927    17:05:28	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:21:35.927    17:05:28	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:21:35.928  [2024-11-19 17:05:28.757548] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000b480
00:21:35.928  [2024-11-19 17:05:28.757584] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:21:35.928  [2024-11-19 17:05:28.757763] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000033bc0
00:21:35.928  [2024-11-19 17:05:28.758256] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000b480
00:21:35.928  [2024-11-19 17:05:28.758280] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000b480
00:21:35.928  [2024-11-19 17:05:28.758420] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
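With spare claimed, three of the four recorded members are present and raid1 has enough to go online: the debug lines above show the io device registered (blockcnt 63488, blocklen 512) and raid_bdev1 re-created, and the state query that follows confirms num_base_bdevs_discovered 3 with the missing member left as the null entry.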
00:21:36.187   17:05:28	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:21:36.187    "name": "raid_bdev1",
00:21:36.187    "uuid": "ddc180ff-3aa3-4936-8325-9ca064df6a82",
00:21:36.187    "strip_size_kb": 0,
00:21:36.187    "state": "online",
00:21:36.187    "raid_level": "raid1",
00:21:36.187    "superblock": true,
00:21:36.187    "num_base_bdevs": 4,
00:21:36.187    "num_base_bdevs_discovered": 3,
00:21:36.187    "num_base_bdevs_operational": 3,
00:21:36.187    "base_bdevs_list": [
00:21:36.187      {
00:21:36.187        "name": "spare",
00:21:36.187        "uuid": "1c44c1c9-2f5c-5dc7-9c81-d80a7c535271",
00:21:36.187        "is_configured": true,
00:21:36.187        "data_offset": 2048,
00:21:36.187        "data_size": 63488
00:21:36.187      },
00:21:36.187      {
00:21:36.187        "name": null,
00:21:36.187        "uuid": "00000000-0000-0000-0000-000000000000",
00:21:36.187        "is_configured": false,
00:21:36.187        "data_offset": 2048,
00:21:36.187        "data_size": 63488
00:21:36.187      },
00:21:36.187      {
00:21:36.187        "name": "BaseBdev3",
00:21:36.187        "uuid": "dce01726-b8a3-5dba-8a33-b9ac0dd51c8c",
00:21:36.187        "is_configured": true,
00:21:36.187        "data_offset": 2048,
00:21:36.187        "data_size": 63488
00:21:36.187      },
00:21:36.187      {
00:21:36.187        "name": "BaseBdev4",
00:21:36.187        "uuid": "9c034671-7747-5fa3-be8a-bac4ebc674da",
00:21:36.187        "is_configured": true,
00:21:36.187        "data_offset": 2048,
00:21:36.187        "data_size": 63488
00:21:36.187      }
00:21:36.187    ]
00:21:36.187  }'
00:21:36.187   17:05:28	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:21:36.187   17:05:28	-- common/autotest_common.sh@10 -- # set +x
00:21:36.755   17:05:29	-- bdev/bdev_raid.sh@705 -- # verify_raid_bdev_process raid_bdev1 none none
00:21:36.755   17:05:29	-- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:21:36.755   17:05:29	-- bdev/bdev_raid.sh@184 -- # local process_type=none
00:21:36.755   17:05:29	-- bdev/bdev_raid.sh@185 -- # local target=none
00:21:36.755   17:05:29	-- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:21:36.755    17:05:29	-- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:21:36.755    17:05:29	-- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:21:37.015   17:05:29	-- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:21:37.015    "name": "raid_bdev1",
00:21:37.015    "uuid": "ddc180ff-3aa3-4936-8325-9ca064df6a82",
00:21:37.015    "strip_size_kb": 0,
00:21:37.015    "state": "online",
00:21:37.015    "raid_level": "raid1",
00:21:37.015    "superblock": true,
00:21:37.015    "num_base_bdevs": 4,
00:21:37.015    "num_base_bdevs_discovered": 3,
00:21:37.015    "num_base_bdevs_operational": 3,
00:21:37.015    "base_bdevs_list": [
00:21:37.015      {
00:21:37.015        "name": "spare",
00:21:37.015        "uuid": "1c44c1c9-2f5c-5dc7-9c81-d80a7c535271",
00:21:37.015        "is_configured": true,
00:21:37.015        "data_offset": 2048,
00:21:37.015        "data_size": 63488
00:21:37.015      },
00:21:37.015      {
00:21:37.015        "name": null,
00:21:37.015        "uuid": "00000000-0000-0000-0000-000000000000",
00:21:37.015        "is_configured": false,
00:21:37.015        "data_offset": 2048,
00:21:37.015        "data_size": 63488
00:21:37.015      },
00:21:37.015      {
00:21:37.015        "name": "BaseBdev3",
00:21:37.015        "uuid": "dce01726-b8a3-5dba-8a33-b9ac0dd51c8c",
00:21:37.015        "is_configured": true,
00:21:37.015        "data_offset": 2048,
00:21:37.015        "data_size": 63488
00:21:37.015      },
00:21:37.015      {
00:21:37.015        "name": "BaseBdev4",
00:21:37.015        "uuid": "9c034671-7747-5fa3-be8a-bac4ebc674da",
00:21:37.015        "is_configured": true,
00:21:37.015        "data_offset": 2048,
00:21:37.015        "data_size": 63488
00:21:37.015      }
00:21:37.015    ]
00:21:37.015  }'
00:21:37.015    17:05:29	-- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:21:37.015   17:05:29	-- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]]
00:21:37.015    17:05:29	-- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:21:37.274   17:05:29	-- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]]
00:21:37.274    17:05:29	-- bdev/bdev_raid.sh@706 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:21:37.274    17:05:29	-- bdev/bdev_raid.sh@706 -- # jq -r '.[].base_bdevs_list[0].name'
00:21:37.533   17:05:30	-- bdev/bdev_raid.sh@706 -- # [[ spare == \s\p\a\r\e ]]
00:21:37.534   17:05:30	-- bdev/bdev_raid.sh@709 -- # killprocess 136668
00:21:37.534   17:05:30	-- common/autotest_common.sh@936 -- # '[' -z 136668 ']'
00:21:37.534   17:05:30	-- common/autotest_common.sh@940 -- # kill -0 136668
00:21:37.534    17:05:30	-- common/autotest_common.sh@941 -- # uname
00:21:37.534   17:05:30	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:21:37.534    17:05:30	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 136668
00:21:37.534   17:05:30	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:21:37.534   17:05:30	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:21:37.534  killing process with pid 136668
00:21:37.534   17:05:30	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 136668'
00:21:37.534  Received shutdown signal, test time was about 17.133039 seconds
[2024-11-19T17:05:30.398Z]                                                                                   Latency(us)
[2024-11-19T17:05:30.398Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
[2024-11-19T17:05:30.398Z]  ===================================================================================================================
[2024-11-19T17:05:30.398Z]  Total                       :                  0.00       0.00       0.00     0.00       0.00       0.00       0.00
00:21:37.534   17:05:30	-- common/autotest_common.sh@955 -- # kill 136668
00:21:37.534  [2024-11-19 17:05:30.221779] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:21:37.534   17:05:30	-- common/autotest_common.sh@960 -- # wait 136668
00:21:37.534  [2024-11-19 17:05:30.221908] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:21:37.534  [2024-11-19 17:05:30.222002] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:21:37.534  [2024-11-19 17:05:30.222013] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000b480 name raid_bdev1, state offline
00:21:37.534  [2024-11-19 17:05:30.269646] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:21:37.793   17:05:30	-- bdev/bdev_raid.sh@711 -- # return 0
00:21:37.793  
00:21:37.793  real	0m22.979s
00:21:37.793  user	0m37.383s
00:21:37.793  sys	0m3.559s
00:21:37.793   17:05:30	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:21:37.793   17:05:30	-- common/autotest_common.sh@10 -- # set +x
00:21:37.793  ************************************
00:21:37.793  END TEST raid_rebuild_test_sb_io
00:21:37.793  ************************************
00:21:37.793   17:05:30	-- bdev/bdev_raid.sh@742 -- # '[' y == y ']'
00:21:37.793   17:05:30	-- bdev/bdev_raid.sh@743 -- # for n in {3..4}
00:21:37.793   17:05:30	-- bdev/bdev_raid.sh@744 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 3 false
00:21:37.793   17:05:30	-- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']'
00:21:37.793   17:05:30	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:21:37.793   17:05:30	-- common/autotest_common.sh@10 -- # set +x
00:21:37.793  ************************************
00:21:37.793  START TEST raid5f_state_function_test
00:21:37.793  ************************************
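run_test is the harness wrapper whose pieces are visible at @1087-@1115: an argument-count guard, the START/END banners, and a timed invocation of the test function (the real/user/sys lines further down are its output). A sketch consistent with the trace; the banner formatting and the use of the time keyword are assumptions:

    run_test() {
        if [[ $# -le 1 ]]; then           # @1087 guard; 5 args here, so it passes
            echo "usage: run_test <name> <command> [args...]" >&2
            return 1
        fi
        local suite=$1; shift
        echo "************************************"
        echo "START TEST $suite"
        echo "************************************"
        time "$@"                         # here: raid_state_function_test raid5f 3 false
        echo "************************************"
        echo "END TEST $suite"
        echo "************************************"
    }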
00:21:37.793   17:05:30	-- common/autotest_common.sh@1114 -- # raid_state_function_test raid5f 3 false
00:21:37.793   17:05:30	-- bdev/bdev_raid.sh@202 -- # local raid_level=raid5f
00:21:37.793   17:05:30	-- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3
00:21:37.793   17:05:30	-- bdev/bdev_raid.sh@204 -- # local superblock=false
00:21:37.793   17:05:30	-- bdev/bdev_raid.sh@205 -- # local raid_bdev
00:21:37.793    17:05:30	-- bdev/bdev_raid.sh@206 -- # (( i = 1 ))
00:21:37.793    17:05:30	-- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:21:37.793    17:05:30	-- bdev/bdev_raid.sh@206 -- # echo BaseBdev1
00:21:37.793    17:05:30	-- bdev/bdev_raid.sh@206 -- # (( i++ ))
00:21:37.793    17:05:30	-- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:21:37.793    17:05:30	-- bdev/bdev_raid.sh@206 -- # echo BaseBdev2
00:21:37.793    17:05:30	-- bdev/bdev_raid.sh@206 -- # (( i++ ))
00:21:37.793    17:05:30	-- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:21:37.793    17:05:30	-- bdev/bdev_raid.sh@206 -- # echo BaseBdev3
00:21:37.793    17:05:30	-- bdev/bdev_raid.sh@206 -- # (( i++ ))
00:21:37.793    17:05:30	-- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:21:37.793   17:05:30	-- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3')
00:21:37.793   17:05:30	-- bdev/bdev_raid.sh@206 -- # local base_bdevs
00:21:37.793   17:05:30	-- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid
00:21:37.793   17:05:30	-- bdev/bdev_raid.sh@208 -- # local strip_size
00:21:37.793   17:05:30	-- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg
00:21:37.793   17:05:30	-- bdev/bdev_raid.sh@210 -- # local superblock_create_arg
00:21:37.793   17:05:30	-- bdev/bdev_raid.sh@212 -- # '[' raid5f '!=' raid1 ']'
00:21:37.793   17:05:30	-- bdev/bdev_raid.sh@213 -- # strip_size=64
00:21:37.793   17:05:30	-- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64'
00:21:37.793   17:05:30	-- bdev/bdev_raid.sh@219 -- # '[' false = true ']'
00:21:37.793   17:05:30	-- bdev/bdev_raid.sh@222 -- # superblock_create_arg=
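Lines @212-@222 turn the test parameters into create arguments: every level except raid1 gets a 64 KiB strip, and this run leaves the superblock flag empty. Equivalent bash (a sketch of just the traced branches):

    if [[ $raid_level != raid1 ]]; then   # @212: raid1 is mirrored, no striping
        strip_size=64
        strip_size_create_arg='-z 64'
    fi
    if [[ $superblock == true ]]; then    # @219: false for this variant
        superblock_create_arg='-s'
    else
        superblock_create_arg=''          # @222
    fi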
00:21:37.793   17:05:30	-- bdev/bdev_raid.sh@226 -- # raid_pid=137281
00:21:37.793   17:05:30	-- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid
00:21:37.793   17:05:30	-- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 137281'
00:21:37.793  Process raid pid: 137281
00:21:37.793   17:05:30	-- bdev/bdev_raid.sh@228 -- # waitforlisten 137281 /var/tmp/spdk-raid.sock
00:21:37.794   17:05:30	-- common/autotest_common.sh@829 -- # '[' -z 137281 ']'
00:21:37.794   17:05:30	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock
00:21:37.794   17:05:30	-- common/autotest_common.sh@834 -- # local max_retries=100
00:21:37.794  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...
00:21:37.794   17:05:30	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...'
00:21:37.794   17:05:30	-- common/autotest_common.sh@838 -- # xtrace_disable
00:21:37.794   17:05:30	-- common/autotest_common.sh@10 -- # set +x
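waitforlisten blocks until the freshly launched bdev_svc answers RPCs on /var/tmp/spdk-raid.sock. The trace at @829-@838 only shows the setup; the polling loop below is an assumption about what runs after xtrace is disabled:

    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100 i
        [[ -n $pid ]] || return 1
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for ((i = 0; i < max_retries; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1       # app died during startup
            if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_addr" \
                    rpc_get_methods &>/dev/null; then
                return 0                                 # socket is up and answering
            fi
            sleep 0.5
        done
        return 1                                         # never started listening
    }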
00:21:38.052  [2024-11-19 17:05:30.662543] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:21:38.052  [2024-11-19 17:05:30.662739] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:21:38.053  [2024-11-19 17:05:30.809554] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:21:38.053  [2024-11-19 17:05:30.859888] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:21:38.053  [2024-11-19 17:05:30.902039] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:21:38.988   17:05:31	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:21:38.988   17:05:31	-- common/autotest_common.sh@862 -- # return 0
00:21:38.988   17:05:31	-- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
00:21:39.246  [2024-11-19 17:05:31.941161] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:21:39.246  [2024-11-19 17:05:31.941281] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:21:39.246  [2024-11-19 17:05:31.941292] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:21:39.246  [2024-11-19 17:05:31.941322] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:21:39.246  [2024-11-19 17:05:31.941329] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:21:39.246  [2024-11-19 17:05:31.941375] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:21:39.246   17:05:31	-- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3
00:21:39.246   17:05:31	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:21:39.246   17:05:31	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:21:39.246   17:05:31	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f
00:21:39.246   17:05:31	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:21:39.246   17:05:31	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:21:39.246   17:05:31	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:21:39.246   17:05:31	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:21:39.246   17:05:31	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:21:39.246   17:05:31	-- bdev/bdev_raid.sh@125 -- # local tmp
00:21:39.246    17:05:31	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:21:39.246    17:05:31	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:21:39.505   17:05:32	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:21:39.505    "name": "Existed_Raid",
00:21:39.506    "uuid": "00000000-0000-0000-0000-000000000000",
00:21:39.506    "strip_size_kb": 64,
00:21:39.506    "state": "configuring",
00:21:39.506    "raid_level": "raid5f",
00:21:39.506    "superblock": false,
00:21:39.506    "num_base_bdevs": 3,
00:21:39.506    "num_base_bdevs_discovered": 0,
00:21:39.506    "num_base_bdevs_operational": 3,
00:21:39.506    "base_bdevs_list": [
00:21:39.506      {
00:21:39.506        "name": "BaseBdev1",
00:21:39.506        "uuid": "00000000-0000-0000-0000-000000000000",
00:21:39.506        "is_configured": false,
00:21:39.506        "data_offset": 0,
00:21:39.506        "data_size": 0
00:21:39.506      },
00:21:39.506      {
00:21:39.506        "name": "BaseBdev2",
00:21:39.506        "uuid": "00000000-0000-0000-0000-000000000000",
00:21:39.506        "is_configured": false,
00:21:39.506        "data_offset": 0,
00:21:39.506        "data_size": 0
00:21:39.506      },
00:21:39.506      {
00:21:39.506        "name": "BaseBdev3",
00:21:39.506        "uuid": "00000000-0000-0000-0000-000000000000",
00:21:39.506        "is_configured": false,
00:21:39.506        "data_offset": 0,
00:21:39.506        "data_size": 0
00:21:39.506      }
00:21:39.506    ]
00:21:39.506  }'
00:21:39.506   17:05:32	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:21:39.506   17:05:32	-- common/autotest_common.sh@10 -- # set +x
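verify_raid_bdev_state (@117-@129) is the workhorse assertion of this test: it pulls the named array out of bdev_raid_get_bdevs and compares its fields against the expected values. The fetch is exactly what the trace shows; the comparisons run with xtrace off, so their form below is a reconstruction:

    verify_raid_bdev_state() {
        local raid_bdev_name=$1 expected_state=$2 raid_level=$3
        local strip_size=$4 num_base_bdevs_operational=$5 raid_bdev_info
        raid_bdev_info=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
            bdev_raid_get_bdevs all | jq -r ".[] | select(.name == \"$raid_bdev_name\")")
        [[ $(jq -r .state <<<"$raid_bdev_info") == "$expected_state" ]] || return 1
        [[ $(jq -r .raid_level <<<"$raid_bdev_info") == "$raid_level" ]] || return 1
        [[ $(jq -r .strip_size_kb <<<"$raid_bdev_info") == "$strip_size" ]] || return 1
        [[ $(jq -r .num_base_bdevs_operational <<<"$raid_bdev_info") == "$num_base_bdevs_operational" ]] || return 1
    }

Here it asserts Existed_Raid is still "configuring": all three slots in the dump above carry the zero UUID and is_configured false, so nothing has been assembled yet.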
00:21:40.071   17:05:32	-- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid
00:21:40.330  [2024-11-19 17:05:33.049232] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:21:40.330  [2024-11-19 17:05:33.049294] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005480 name Existed_Raid, state configuring
00:21:40.330   17:05:33	-- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
00:21:40.589  [2024-11-19 17:05:33.257293] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:21:40.589  [2024-11-19 17:05:33.257382] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:21:40.589  [2024-11-19 17:05:33.257391] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:21:40.589  [2024-11-19 17:05:33.257413] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:21:40.589  [2024-11-19 17:05:33.257419] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:21:40.589  [2024-11-19 17:05:33.257444] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:21:40.589   17:05:33	-- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1
00:21:40.847  [2024-11-19 17:05:33.546979] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:21:40.847  BaseBdev1
00:21:40.847   17:05:33	-- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1
00:21:40.847   17:05:33	-- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1
00:21:40.847   17:05:33	-- common/autotest_common.sh@898 -- # local bdev_timeout=
00:21:40.847   17:05:33	-- common/autotest_common.sh@899 -- # local i
00:21:40.847   17:05:33	-- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:21:40.847   17:05:33	-- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:21:40.847   17:05:33	-- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:21:41.106   17:05:33	-- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000
00:21:41.106  [
00:21:41.106    {
00:21:41.106      "name": "BaseBdev1",
00:21:41.106      "aliases": [
00:21:41.106        "65fd1dbe-88e5-41ea-8cef-50069b6001f8"
00:21:41.106      ],
00:21:41.106      "product_name": "Malloc disk",
00:21:41.106      "block_size": 512,
00:21:41.106      "num_blocks": 65536,
00:21:41.106      "uuid": "65fd1dbe-88e5-41ea-8cef-50069b6001f8",
00:21:41.106      "assigned_rate_limits": {
00:21:41.106        "rw_ios_per_sec": 0,
00:21:41.106        "rw_mbytes_per_sec": 0,
00:21:41.106        "r_mbytes_per_sec": 0,
00:21:41.106        "w_mbytes_per_sec": 0
00:21:41.106      },
00:21:41.106      "claimed": true,
00:21:41.106      "claim_type": "exclusive_write",
00:21:41.106      "zoned": false,
00:21:41.106      "supported_io_types": {
00:21:41.106        "read": true,
00:21:41.106        "write": true,
00:21:41.106        "unmap": true,
00:21:41.106        "write_zeroes": true,
00:21:41.106        "flush": true,
00:21:41.106        "reset": true,
00:21:41.106        "compare": false,
00:21:41.106        "compare_and_write": false,
00:21:41.106        "abort": true,
00:21:41.106        "nvme_admin": false,
00:21:41.106        "nvme_io": false
00:21:41.106      },
00:21:41.106      "memory_domains": [
00:21:41.106        {
00:21:41.106          "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:21:41.106          "dma_device_type": 2
00:21:41.106        }
00:21:41.106      ],
00:21:41.106      "driver_specific": {}
00:21:41.106    }
00:21:41.106  ]
00:21:41.364   17:05:33	-- common/autotest_common.sh@905 -- # return 0
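waitforbdev (@897-@905) gates every step that depends on a bdev existing: it defaults the timeout to 2000 ms, drains the examine queue, then does a blocking lookup. A sketch matching the trace (the retry loop implied by local i at @899 is omitted; this run succeeds on the first lookup):

    waitforbdev() {
        local bdev_name=$1 bdev_timeout=$2
        [[ -z $bdev_timeout ]] && bdev_timeout=2000      # @900 default, in ms
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
            bdev_wait_for_examine
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
            bdev_get_bdevs -b "$bdev_name" -t "$bdev_timeout"
    }

The dump it returns also confirms the malloc geometry: bdev_malloc_create 32 512 asks for a 32 MiB device with 512-byte blocks, and 32 * 1024 * 1024 / 512 = 65536 matches the num_blocks reported above.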
00:21:41.364   17:05:33	-- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3
00:21:41.364   17:05:33	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:21:41.364   17:05:33	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:21:41.364   17:05:33	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f
00:21:41.364   17:05:33	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:21:41.365   17:05:33	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:21:41.365   17:05:33	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:21:41.365   17:05:33	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:21:41.365   17:05:33	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:21:41.365   17:05:33	-- bdev/bdev_raid.sh@125 -- # local tmp
00:21:41.365    17:05:33	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:21:41.365    17:05:33	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:21:41.623   17:05:34	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:21:41.623    "name": "Existed_Raid",
00:21:41.623    "uuid": "00000000-0000-0000-0000-000000000000",
00:21:41.623    "strip_size_kb": 64,
00:21:41.623    "state": "configuring",
00:21:41.623    "raid_level": "raid5f",
00:21:41.623    "superblock": false,
00:21:41.623    "num_base_bdevs": 3,
00:21:41.623    "num_base_bdevs_discovered": 1,
00:21:41.623    "num_base_bdevs_operational": 3,
00:21:41.623    "base_bdevs_list": [
00:21:41.623      {
00:21:41.623        "name": "BaseBdev1",
00:21:41.623        "uuid": "65fd1dbe-88e5-41ea-8cef-50069b6001f8",
00:21:41.623        "is_configured": true,
00:21:41.623        "data_offset": 0,
00:21:41.623        "data_size": 65536
00:21:41.623      },
00:21:41.623      {
00:21:41.623        "name": "BaseBdev2",
00:21:41.623        "uuid": "00000000-0000-0000-0000-000000000000",
00:21:41.623        "is_configured": false,
00:21:41.623        "data_offset": 0,
00:21:41.623        "data_size": 0
00:21:41.623      },
00:21:41.623      {
00:21:41.623        "name": "BaseBdev3",
00:21:41.623        "uuid": "00000000-0000-0000-0000-000000000000",
00:21:41.623        "is_configured": false,
00:21:41.623        "data_offset": 0,
00:21:41.623        "data_size": 0
00:21:41.623      }
00:21:41.623    ]
00:21:41.623  }'
00:21:41.623   17:05:34	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:21:41.623   17:05:34	-- common/autotest_common.sh@10 -- # set +x
00:21:42.191   17:05:34	-- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid
00:21:42.449  [2024-11-19 17:05:35.143362] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:21:42.449  [2024-11-19 17:05:35.143460] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005780 name Existed_Raid, state configuring
00:21:42.449   17:05:35	-- bdev/bdev_raid.sh@244 -- # '[' false = true ']'
00:21:42.449   17:05:35	-- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
00:21:42.708  [2024-11-19 17:05:35.379513] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:21:42.708  [2024-11-19 17:05:35.381730] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:21:42.708  [2024-11-19 17:05:35.381800] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:21:42.708  [2024-11-19 17:05:35.381810] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:21:42.708  [2024-11-19 17:05:35.381851] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:21:42.708   17:05:35	-- bdev/bdev_raid.sh@254 -- # (( i = 1 ))
00:21:42.708   17:05:35	-- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs ))
00:21:42.708   17:05:35	-- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3
00:21:42.708   17:05:35	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:21:42.708   17:05:35	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:21:42.708   17:05:35	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f
00:21:42.708   17:05:35	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:21:42.708   17:05:35	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:21:42.708   17:05:35	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:21:42.708   17:05:35	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:21:42.708   17:05:35	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:21:42.708   17:05:35	-- bdev/bdev_raid.sh@125 -- # local tmp
00:21:42.708    17:05:35	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:21:42.708    17:05:35	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:21:42.972   17:05:35	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:21:42.972    "name": "Existed_Raid",
00:21:42.972    "uuid": "00000000-0000-0000-0000-000000000000",
00:21:42.972    "strip_size_kb": 64,
00:21:42.972    "state": "configuring",
00:21:42.972    "raid_level": "raid5f",
00:21:42.972    "superblock": false,
00:21:42.972    "num_base_bdevs": 3,
00:21:42.972    "num_base_bdevs_discovered": 1,
00:21:42.972    "num_base_bdevs_operational": 3,
00:21:42.972    "base_bdevs_list": [
00:21:42.972      {
00:21:42.972        "name": "BaseBdev1",
00:21:42.972        "uuid": "65fd1dbe-88e5-41ea-8cef-50069b6001f8",
00:21:42.972        "is_configured": true,
00:21:42.972        "data_offset": 0,
00:21:42.972        "data_size": 65536
00:21:42.972      },
00:21:42.972      {
00:21:42.972        "name": "BaseBdev2",
00:21:42.972        "uuid": "00000000-0000-0000-0000-000000000000",
00:21:42.972        "is_configured": false,
00:21:42.972        "data_offset": 0,
00:21:42.972        "data_size": 0
00:21:42.972      },
00:21:42.972      {
00:21:42.972        "name": "BaseBdev3",
00:21:42.972        "uuid": "00000000-0000-0000-0000-000000000000",
00:21:42.972        "is_configured": false,
00:21:42.972        "data_offset": 0,
00:21:42.972        "data_size": 0
00:21:42.972      }
00:21:42.972    ]
00:21:42.972  }'
00:21:42.972   17:05:35	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:21:42.972   17:05:35	-- common/autotest_common.sh@10 -- # set +x
00:21:43.550   17:05:36	-- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2
00:21:43.809  [2024-11-19 17:05:36.574132] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:21:43.809  BaseBdev2
00:21:43.809   17:05:36	-- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2
00:21:43.809   17:05:36	-- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2
00:21:43.809   17:05:36	-- common/autotest_common.sh@898 -- # local bdev_timeout=
00:21:43.809   17:05:36	-- common/autotest_common.sh@899 -- # local i
00:21:43.809   17:05:36	-- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:21:43.809   17:05:36	-- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:21:43.809   17:05:36	-- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:21:44.068   17:05:36	-- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000
00:21:44.327  [
00:21:44.327    {
00:21:44.327      "name": "BaseBdev2",
00:21:44.327      "aliases": [
00:21:44.327        "e4a31326-604c-4e3e-831f-4a13097ca6dc"
00:21:44.327      ],
00:21:44.327      "product_name": "Malloc disk",
00:21:44.327      "block_size": 512,
00:21:44.327      "num_blocks": 65536,
00:21:44.327      "uuid": "e4a31326-604c-4e3e-831f-4a13097ca6dc",
00:21:44.327      "assigned_rate_limits": {
00:21:44.327        "rw_ios_per_sec": 0,
00:21:44.327        "rw_mbytes_per_sec": 0,
00:21:44.327        "r_mbytes_per_sec": 0,
00:21:44.327        "w_mbytes_per_sec": 0
00:21:44.327      },
00:21:44.327      "claimed": true,
00:21:44.327      "claim_type": "exclusive_write",
00:21:44.327      "zoned": false,
00:21:44.327      "supported_io_types": {
00:21:44.327        "read": true,
00:21:44.327        "write": true,
00:21:44.327        "unmap": true,
00:21:44.327        "write_zeroes": true,
00:21:44.327        "flush": true,
00:21:44.327        "reset": true,
00:21:44.327        "compare": false,
00:21:44.327        "compare_and_write": false,
00:21:44.327        "abort": true,
00:21:44.327        "nvme_admin": false,
00:21:44.327        "nvme_io": false
00:21:44.327      },
00:21:44.327      "memory_domains": [
00:21:44.327        {
00:21:44.327          "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:21:44.327          "dma_device_type": 2
00:21:44.327        }
00:21:44.327      ],
00:21:44.327      "driver_specific": {}
00:21:44.327    }
00:21:44.327  ]
00:21:44.327   17:05:37	-- common/autotest_common.sh@905 -- # return 0
00:21:44.327   17:05:37	-- bdev/bdev_raid.sh@254 -- # (( i++ ))
00:21:44.327   17:05:37	-- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs ))
00:21:44.327   17:05:37	-- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3
00:21:44.327   17:05:37	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:21:44.327   17:05:37	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:21:44.327   17:05:37	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f
00:21:44.327   17:05:37	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:21:44.327   17:05:37	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:21:44.327   17:05:37	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:21:44.327   17:05:37	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:21:44.327   17:05:37	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:21:44.327   17:05:37	-- bdev/bdev_raid.sh@125 -- # local tmp
00:21:44.327    17:05:37	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:21:44.327    17:05:37	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:21:44.586   17:05:37	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:21:44.586    "name": "Existed_Raid",
00:21:44.586    "uuid": "00000000-0000-0000-0000-000000000000",
00:21:44.586    "strip_size_kb": 64,
00:21:44.586    "state": "configuring",
00:21:44.586    "raid_level": "raid5f",
00:21:44.586    "superblock": false,
00:21:44.586    "num_base_bdevs": 3,
00:21:44.586    "num_base_bdevs_discovered": 2,
00:21:44.586    "num_base_bdevs_operational": 3,
00:21:44.586    "base_bdevs_list": [
00:21:44.586      {
00:21:44.586        "name": "BaseBdev1",
00:21:44.586        "uuid": "65fd1dbe-88e5-41ea-8cef-50069b6001f8",
00:21:44.586        "is_configured": true,
00:21:44.586        "data_offset": 0,
00:21:44.586        "data_size": 65536
00:21:44.586      },
00:21:44.586      {
00:21:44.586        "name": "BaseBdev2",
00:21:44.586        "uuid": "e4a31326-604c-4e3e-831f-4a13097ca6dc",
00:21:44.586        "is_configured": true,
00:21:44.586        "data_offset": 0,
00:21:44.586        "data_size": 65536
00:21:44.586      },
00:21:44.586      {
00:21:44.586        "name": "BaseBdev3",
00:21:44.586        "uuid": "00000000-0000-0000-0000-000000000000",
00:21:44.586        "is_configured": false,
00:21:44.586        "data_offset": 0,
00:21:44.586        "data_size": 0
00:21:44.586      }
00:21:44.586    ]
00:21:44.586  }'
00:21:44.586   17:05:37	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:21:44.586   17:05:37	-- common/autotest_common.sh@10 -- # set +x
00:21:45.154   17:05:37	-- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3
00:21:45.413  [2024-11-19 17:05:38.145735] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:21:45.413  [2024-11-19 17:05:38.145847] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006080
00:21:45.413  [2024-11-19 17:05:38.145858] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512
00:21:45.413  [2024-11-19 17:05:38.146006] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002050
00:21:45.413  [2024-11-19 17:05:38.146796] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006080
00:21:45.413  [2024-11-19 17:05:38.146819] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006080
00:21:45.413  [2024-11-19 17:05:38.147084] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:21:45.413  BaseBdev3
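The DEBUG lines above mark the assembly point: with the third base bdev claimed, discovered equals operational equals 3 and the array registers its I/O device. The blockcnt also checks out: raid5f stores one member's worth of parity, so capacity is (3 - 1) * 65536 = 131072 blocks of 512 bytes, i.e. 64 MiB usable out of 96 MiB raw.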
00:21:45.413   17:05:38	-- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3
00:21:45.413   17:05:38	-- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3
00:21:45.413   17:05:38	-- common/autotest_common.sh@898 -- # local bdev_timeout=
00:21:45.413   17:05:38	-- common/autotest_common.sh@899 -- # local i
00:21:45.413   17:05:38	-- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:21:45.413   17:05:38	-- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:21:45.413   17:05:38	-- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:21:45.672   17:05:38	-- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000
00:21:45.931  [
00:21:45.931    {
00:21:45.931      "name": "BaseBdev3",
00:21:45.931      "aliases": [
00:21:45.931        "56ff64da-a82e-4bcc-ac7b-446df704215d"
00:21:45.931      ],
00:21:45.931      "product_name": "Malloc disk",
00:21:45.931      "block_size": 512,
00:21:45.931      "num_blocks": 65536,
00:21:45.931      "uuid": "56ff64da-a82e-4bcc-ac7b-446df704215d",
00:21:45.931      "assigned_rate_limits": {
00:21:45.931        "rw_ios_per_sec": 0,
00:21:45.931        "rw_mbytes_per_sec": 0,
00:21:45.931        "r_mbytes_per_sec": 0,
00:21:45.931        "w_mbytes_per_sec": 0
00:21:45.931      },
00:21:45.931      "claimed": true,
00:21:45.931      "claim_type": "exclusive_write",
00:21:45.931      "zoned": false,
00:21:45.931      "supported_io_types": {
00:21:45.931        "read": true,
00:21:45.931        "write": true,
00:21:45.931        "unmap": true,
00:21:45.931        "write_zeroes": true,
00:21:45.931        "flush": true,
00:21:45.931        "reset": true,
00:21:45.931        "compare": false,
00:21:45.931        "compare_and_write": false,
00:21:45.931        "abort": true,
00:21:45.931        "nvme_admin": false,
00:21:45.931        "nvme_io": false
00:21:45.931      },
00:21:45.931      "memory_domains": [
00:21:45.931        {
00:21:45.931          "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:21:45.931          "dma_device_type": 2
00:21:45.931        }
00:21:45.931      ],
00:21:45.931      "driver_specific": {}
00:21:45.931    }
00:21:45.931  ]
00:21:45.931   17:05:38	-- common/autotest_common.sh@905 -- # return 0
00:21:45.931   17:05:38	-- bdev/bdev_raid.sh@254 -- # (( i++ ))
00:21:45.931   17:05:38	-- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs ))
00:21:45.931   17:05:38	-- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3
00:21:45.931   17:05:38	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:21:45.931   17:05:38	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:21:45.931   17:05:38	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f
00:21:45.931   17:05:38	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:21:45.931   17:05:38	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:21:45.931   17:05:38	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:21:45.931   17:05:38	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:21:45.931   17:05:38	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:21:45.931   17:05:38	-- bdev/bdev_raid.sh@125 -- # local tmp
00:21:45.932    17:05:38	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:21:45.932    17:05:38	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:21:46.190   17:05:38	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:21:46.190    "name": "Existed_Raid",
00:21:46.191    "uuid": "1c57c4ef-6548-4edf-a810-3c0e1b619fca",
00:21:46.191    "strip_size_kb": 64,
00:21:46.191    "state": "online",
00:21:46.191    "raid_level": "raid5f",
00:21:46.191    "superblock": false,
00:21:46.191    "num_base_bdevs": 3,
00:21:46.191    "num_base_bdevs_discovered": 3,
00:21:46.191    "num_base_bdevs_operational": 3,
00:21:46.191    "base_bdevs_list": [
00:21:46.191      {
00:21:46.191        "name": "BaseBdev1",
00:21:46.191        "uuid": "65fd1dbe-88e5-41ea-8cef-50069b6001f8",
00:21:46.191        "is_configured": true,
00:21:46.191        "data_offset": 0,
00:21:46.191        "data_size": 65536
00:21:46.191      },
00:21:46.191      {
00:21:46.191        "name": "BaseBdev2",
00:21:46.191        "uuid": "e4a31326-604c-4e3e-831f-4a13097ca6dc",
00:21:46.191        "is_configured": true,
00:21:46.191        "data_offset": 0,
00:21:46.191        "data_size": 65536
00:21:46.191      },
00:21:46.191      {
00:21:46.191        "name": "BaseBdev3",
00:21:46.191        "uuid": "56ff64da-a82e-4bcc-ac7b-446df704215d",
00:21:46.191        "is_configured": true,
00:21:46.191        "data_offset": 0,
00:21:46.191        "data_size": 65536
00:21:46.191      }
00:21:46.191    ]
00:21:46.191  }'
00:21:46.191   17:05:38	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:21:46.191   17:05:38	-- common/autotest_common.sh@10 -- # set +x
00:21:46.757   17:05:39	-- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1
00:21:47.017  [2024-11-19 17:05:39.630206] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:21:47.017   17:05:39	-- bdev/bdev_raid.sh@263 -- # local expected_state
00:21:47.017   17:05:39	-- bdev/bdev_raid.sh@264 -- # has_redundancy raid5f
00:21:47.017   17:05:39	-- bdev/bdev_raid.sh@195 -- # case $1 in
00:21:47.017   17:05:39	-- bdev/bdev_raid.sh@196 -- # return 0
00:21:47.017   17:05:39	-- bdev/bdev_raid.sh@267 -- # expected_state=online
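Deleting BaseBdev1 out from under an online array is the interesting case: @264 consults has_redundancy, and because raid5f tolerates a single member loss the expected state stays online. From the @195-@196 case statement (the full list of levels is an assumption):

    has_redundancy() {
        case $1 in
            raid1 | raid5f) return 0 ;;   # survives losing one member
            *) return 1 ;;                # e.g. raid0/concat would go offline
        esac
    }

A return of 0 makes the caller set expected_state=online (@267) and re-verify below with only 2 operational members (@269).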
00:21:47.017   17:05:39	-- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2
00:21:47.017   17:05:39	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:21:47.017   17:05:39	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:21:47.017   17:05:39	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f
00:21:47.017   17:05:39	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:21:47.017   17:05:39	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2
00:21:47.017   17:05:39	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:21:47.017   17:05:39	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:21:47.017   17:05:39	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:21:47.017   17:05:39	-- bdev/bdev_raid.sh@125 -- # local tmp
00:21:47.017    17:05:39	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:21:47.017    17:05:39	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:21:47.017   17:05:39	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:21:47.017    "name": "Existed_Raid",
00:21:47.017    "uuid": "1c57c4ef-6548-4edf-a810-3c0e1b619fca",
00:21:47.017    "strip_size_kb": 64,
00:21:47.017    "state": "online",
00:21:47.017    "raid_level": "raid5f",
00:21:47.017    "superblock": false,
00:21:47.017    "num_base_bdevs": 3,
00:21:47.017    "num_base_bdevs_discovered": 2,
00:21:47.017    "num_base_bdevs_operational": 2,
00:21:47.017    "base_bdevs_list": [
00:21:47.017      {
00:21:47.017        "name": null,
00:21:47.017        "uuid": "00000000-0000-0000-0000-000000000000",
00:21:47.017        "is_configured": false,
00:21:47.017        "data_offset": 0,
00:21:47.017        "data_size": 65536
00:21:47.017      },
00:21:47.017      {
00:21:47.017        "name": "BaseBdev2",
00:21:47.017        "uuid": "e4a31326-604c-4e3e-831f-4a13097ca6dc",
00:21:47.017        "is_configured": true,
00:21:47.017        "data_offset": 0,
00:21:47.017        "data_size": 65536
00:21:47.017      },
00:21:47.017      {
00:21:47.017        "name": "BaseBdev3",
00:21:47.017        "uuid": "56ff64da-a82e-4bcc-ac7b-446df704215d",
00:21:47.017        "is_configured": true,
00:21:47.017        "data_offset": 0,
00:21:47.017        "data_size": 65536
00:21:47.017      }
00:21:47.017    ]
00:21:47.017  }'
00:21:47.017   17:05:39	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:21:47.017   17:05:39	-- common/autotest_common.sh@10 -- # set +x
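Note how the RPC represents the missing member in the dump above: the slot keeps its position in base_bdevs_list but reports name null, the all-zero UUID, and is_configured false, which is why num_base_bdevs_discovered can read 2 while the list still holds three entries.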
00:21:47.953   17:05:40	-- bdev/bdev_raid.sh@273 -- # (( i = 1 ))
00:21:47.953   17:05:40	-- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs ))
00:21:47.953    17:05:40	-- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:21:47.953    17:05:40	-- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]'
00:21:47.953   17:05:40	-- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid
00:21:47.953   17:05:40	-- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:21:47.953   17:05:40	-- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2
00:21:48.211  [2024-11-19 17:05:40.931395] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:21:48.211  [2024-11-19 17:05:40.931437] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:21:48.211  [2024-11-19 17:05:40.931503] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
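Removing BaseBdev2 as well is one failure more than raid5f can absorb, so instead of degrading further the DEBUG lines above show the state machine deconfiguring the array (online to offline at bdev_raid.c:1734) and then destructing it.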
00:21:48.211   17:05:40	-- bdev/bdev_raid.sh@273 -- # (( i++ ))
00:21:48.211   17:05:40	-- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs ))
00:21:48.211    17:05:40	-- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:21:48.211    17:05:40	-- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]'
00:21:48.469   17:05:41	-- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid
00:21:48.469   17:05:41	-- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:21:48.469   17:05:41	-- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3
00:21:48.728  [2024-11-19 17:05:41.431802] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:21:48.728  [2024-11-19 17:05:41.431900] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006080 name Existed_Raid, state offline
00:21:48.728   17:05:41	-- bdev/bdev_raid.sh@273 -- # (( i++ ))
00:21:48.728   17:05:41	-- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs ))
00:21:48.728    17:05:41	-- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:21:48.728    17:05:41	-- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)'
00:21:48.987   17:05:41	-- bdev/bdev_raid.sh@281 -- # raid_bdev=
00:21:48.987   17:05:41	-- bdev/bdev_raid.sh@282 -- # '[' -n '' ']'
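With every base bdev and the array itself gone, the @281 probe jq -r '.[0]["name"] | select(.)' prints nothing (select(.) drops the null), raid_bdev ends up empty, and the [ -n '' ] test at @282 falls through to teardown.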
00:21:48.987   17:05:41	-- bdev/bdev_raid.sh@287 -- # killprocess 137281
00:21:48.987   17:05:41	-- common/autotest_common.sh@936 -- # '[' -z 137281 ']'
00:21:48.987   17:05:41	-- common/autotest_common.sh@940 -- # kill -0 137281
00:21:48.987    17:05:41	-- common/autotest_common.sh@941 -- # uname
00:21:48.987   17:05:41	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:21:48.987    17:05:41	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 137281
00:21:48.987   17:05:41	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:21:48.987   17:05:41	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:21:48.987   17:05:41	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 137281'
00:21:48.987  killing process with pid 137281
00:21:48.987   17:05:41	-- common/autotest_common.sh@955 -- # kill 137281
00:21:48.987   17:05:41	-- common/autotest_common.sh@960 -- # wait 137281
00:21:48.987  [2024-11-19 17:05:41.804933] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:21:48.987  [2024-11-19 17:05:41.805009] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:21:49.246   17:05:42	-- bdev/bdev_raid.sh@289 -- # return 0
00:21:49.246  
00:21:49.246  real	0m11.458s
00:21:49.246  user	0m20.543s
00:21:49.246  sys	0m1.945s
00:21:49.246   17:05:42	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:21:49.246   17:05:42	-- common/autotest_common.sh@10 -- # set +x
00:21:49.246  ************************************
00:21:49.246  END TEST raid5f_state_function_test
00:21:49.246  ************************************
00:21:49.505   17:05:42	-- bdev/bdev_raid.sh@745 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 3 true
00:21:49.505   17:05:42	-- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']'
00:21:49.505   17:05:42	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:21:49.505   17:05:42	-- common/autotest_common.sh@10 -- # set +x
00:21:49.505  ************************************
00:21:49.505  START TEST raid5f_state_function_test_sb
00:21:49.505  ************************************
00:21:49.505   17:05:42	-- common/autotest_common.sh@1114 -- # raid_state_function_test raid5f 3 true
00:21:49.505   17:05:42	-- bdev/bdev_raid.sh@202 -- # local raid_level=raid5f
00:21:49.505   17:05:42	-- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3
00:21:49.505   17:05:42	-- bdev/bdev_raid.sh@204 -- # local superblock=true
00:21:49.505   17:05:42	-- bdev/bdev_raid.sh@205 -- # local raid_bdev
00:21:49.505    17:05:42	-- bdev/bdev_raid.sh@206 -- # (( i = 1 ))
00:21:49.505    17:05:42	-- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:21:49.505    17:05:42	-- bdev/bdev_raid.sh@206 -- # echo BaseBdev1
00:21:49.505    17:05:42	-- bdev/bdev_raid.sh@206 -- # (( i++ ))
00:21:49.505    17:05:42	-- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:21:49.505    17:05:42	-- bdev/bdev_raid.sh@206 -- # echo BaseBdev2
00:21:49.505    17:05:42	-- bdev/bdev_raid.sh@206 -- # (( i++ ))
00:21:49.505    17:05:42	-- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:21:49.505    17:05:42	-- bdev/bdev_raid.sh@206 -- # echo BaseBdev3
00:21:49.505    17:05:42	-- bdev/bdev_raid.sh@206 -- # (( i++ ))
00:21:49.505    17:05:42	-- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:21:49.505   17:05:42	-- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3')
00:21:49.505   17:05:42	-- bdev/bdev_raid.sh@206 -- # local base_bdevs
00:21:49.505   17:05:42	-- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid
00:21:49.505   17:05:42	-- bdev/bdev_raid.sh@208 -- # local strip_size
00:21:49.505   17:05:42	-- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg
00:21:49.505   17:05:42	-- bdev/bdev_raid.sh@210 -- # local superblock_create_arg
00:21:49.505   17:05:42	-- bdev/bdev_raid.sh@212 -- # '[' raid5f '!=' raid1 ']'
00:21:49.505   17:05:42	-- bdev/bdev_raid.sh@213 -- # strip_size=64
00:21:49.505   17:05:42	-- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64'
00:21:49.505   17:05:42	-- bdev/bdev_raid.sh@219 -- # '[' true = true ']'
00:21:49.505   17:05:42	-- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s
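This is the only setup difference from the run above: with superblock=true, @220 sets superblock_create_arg=-s, so every bdev_raid_create below writes on-disk RAID metadata. The cost shows up later in the dumps as data_offset 2048 (metadata blocks reserved at the front of each member) instead of 0.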
00:21:49.505   17:05:42	-- bdev/bdev_raid.sh@226 -- # raid_pid=137646
00:21:49.505   17:05:42	-- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid
00:21:49.505   17:05:42	-- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 137646'
00:21:49.505  Process raid pid: 137646
00:21:49.505   17:05:42	-- bdev/bdev_raid.sh@228 -- # waitforlisten 137646 /var/tmp/spdk-raid.sock
00:21:49.505   17:05:42	-- common/autotest_common.sh@829 -- # '[' -z 137646 ']'
00:21:49.505   17:05:42	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock
00:21:49.505   17:05:42	-- common/autotest_common.sh@834 -- # local max_retries=100
00:21:49.505  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...
00:21:49.505   17:05:42	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...'
00:21:49.505   17:05:42	-- common/autotest_common.sh@838 -- # xtrace_disable
00:21:49.505   17:05:42	-- common/autotest_common.sh@10 -- # set +x
00:21:49.505  [2024-11-19 17:05:42.194820] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:21:49.505  [2024-11-19 17:05:42.195061] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:21:49.505  [2024-11-19 17:05:42.353210] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:21:49.764  [2024-11-19 17:05:42.403028] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:21:49.764  [2024-11-19 17:05:42.447088] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:21:50.331   17:05:43	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:21:50.331   17:05:43	-- common/autotest_common.sh@862 -- # return 0
00:21:50.331   17:05:43	-- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
00:21:50.603  [2024-11-19 17:05:43.394370] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:21:50.603  [2024-11-19 17:05:43.394466] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:21:50.603  [2024-11-19 17:05:43.394478] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:21:50.603  [2024-11-19 17:05:43.394497] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:21:50.603  [2024-11-19 17:05:43.394504] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:21:50.603  [2024-11-19 17:05:43.394561] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:21:50.603   17:05:43	-- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3
00:21:50.603   17:05:43	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:21:50.603   17:05:43	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:21:50.603   17:05:43	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f
00:21:50.603   17:05:43	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:21:50.603   17:05:43	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:21:50.603   17:05:43	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:21:50.603   17:05:43	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:21:50.603   17:05:43	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:21:50.603   17:05:43	-- bdev/bdev_raid.sh@125 -- # local tmp
00:21:50.603    17:05:43	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:21:50.603    17:05:43	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:21:50.926   17:05:43	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:21:50.926    "name": "Existed_Raid",
00:21:50.926    "uuid": "bb480e53-0319-4a9f-a9d6-c08c9d579099",
00:21:50.926    "strip_size_kb": 64,
00:21:50.926    "state": "configuring",
00:21:50.926    "raid_level": "raid5f",
00:21:50.926    "superblock": true,
00:21:50.926    "num_base_bdevs": 3,
00:21:50.926    "num_base_bdevs_discovered": 0,
00:21:50.926    "num_base_bdevs_operational": 3,
00:21:50.926    "base_bdevs_list": [
00:21:50.926      {
00:21:50.926        "name": "BaseBdev1",
00:21:50.926        "uuid": "00000000-0000-0000-0000-000000000000",
00:21:50.926        "is_configured": false,
00:21:50.926        "data_offset": 0,
00:21:50.926        "data_size": 0
00:21:50.926      },
00:21:50.926      {
00:21:50.926        "name": "BaseBdev2",
00:21:50.926        "uuid": "00000000-0000-0000-0000-000000000000",
00:21:50.926        "is_configured": false,
00:21:50.926        "data_offset": 0,
00:21:50.926        "data_size": 0
00:21:50.926      },
00:21:50.926      {
00:21:50.926        "name": "BaseBdev3",
00:21:50.926        "uuid": "00000000-0000-0000-0000-000000000000",
00:21:50.926        "is_configured": false,
00:21:50.926        "data_offset": 0,
00:21:50.926        "data_size": 0
00:21:50.926      }
00:21:50.926    ]
00:21:50.926  }'
00:21:50.926   17:05:43	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:21:50.926   17:05:43	-- common/autotest_common.sh@10 -- # set +x
00:21:51.494   17:05:44	-- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid
00:21:51.792  [2024-11-19 17:05:44.398377] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:21:51.792  [2024-11-19 17:05:44.398434] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005480 name Existed_Raid, state configuring
00:21:51.792   17:05:44	-- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
00:21:52.051  [2024-11-19 17:05:44.654458] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:21:52.051  [2024-11-19 17:05:44.654532] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:21:52.051  [2024-11-19 17:05:44.654542] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:21:52.051  [2024-11-19 17:05:44.654565] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:21:52.051  [2024-11-19 17:05:44.654572] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:21:52.051  [2024-11-19 17:05:44.654601] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:21:52.051   17:05:44	-- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1
00:21:52.051  [2024-11-19 17:05:44.888211] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:21:52.051  BaseBdev1
00:21:52.051   17:05:44	-- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1
00:21:52.051   17:05:44	-- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1
00:21:52.051   17:05:44	-- common/autotest_common.sh@898 -- # local bdev_timeout=
00:21:52.051   17:05:44	-- common/autotest_common.sh@899 -- # local i
00:21:52.309   17:05:44	-- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:21:52.309   17:05:44	-- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:21:52.309   17:05:44	-- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:21:52.309   17:05:45	-- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000
00:21:52.568  [
00:21:52.568    {
00:21:52.568      "name": "BaseBdev1",
00:21:52.568      "aliases": [
00:21:52.568        "c4ada1a9-be2a-4823-add2-0f10d23ae709"
00:21:52.568      ],
00:21:52.568      "product_name": "Malloc disk",
00:21:52.568      "block_size": 512,
00:21:52.568      "num_blocks": 65536,
00:21:52.568      "uuid": "c4ada1a9-be2a-4823-add2-0f10d23ae709",
00:21:52.568      "assigned_rate_limits": {
00:21:52.568        "rw_ios_per_sec": 0,
00:21:52.568        "rw_mbytes_per_sec": 0,
00:21:52.568        "r_mbytes_per_sec": 0,
00:21:52.568        "w_mbytes_per_sec": 0
00:21:52.568      },
00:21:52.568      "claimed": true,
00:21:52.568      "claim_type": "exclusive_write",
00:21:52.568      "zoned": false,
00:21:52.568      "supported_io_types": {
00:21:52.568        "read": true,
00:21:52.568        "write": true,
00:21:52.568        "unmap": true,
00:21:52.568        "write_zeroes": true,
00:21:52.568        "flush": true,
00:21:52.568        "reset": true,
00:21:52.568        "compare": false,
00:21:52.568        "compare_and_write": false,
00:21:52.568        "abort": true,
00:21:52.568        "nvme_admin": false,
00:21:52.568        "nvme_io": false
00:21:52.568      },
00:21:52.568      "memory_domains": [
00:21:52.568        {
00:21:52.568          "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:21:52.568          "dma_device_type": 2
00:21:52.568        }
00:21:52.568      ],
00:21:52.568      "driver_specific": {}
00:21:52.568    }
00:21:52.568  ]
00:21:52.568   17:05:45	-- common/autotest_common.sh@905 -- # return 0
00:21:52.568   17:05:45	-- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3
00:21:52.568   17:05:45	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:21:52.568   17:05:45	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:21:52.568   17:05:45	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f
00:21:52.568   17:05:45	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:21:52.568   17:05:45	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:21:52.568   17:05:45	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:21:52.568   17:05:45	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:21:52.568   17:05:45	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:21:52.568   17:05:45	-- bdev/bdev_raid.sh@125 -- # local tmp
00:21:52.568    17:05:45	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:21:52.568    17:05:45	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:21:52.826   17:05:45	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:21:52.827    "name": "Existed_Raid",
00:21:52.827    "uuid": "c5041fc4-27a9-47b4-b544-085fc8b72cd8",
00:21:52.827    "strip_size_kb": 64,
00:21:52.827    "state": "configuring",
00:21:52.827    "raid_level": "raid5f",
00:21:52.827    "superblock": true,
00:21:52.827    "num_base_bdevs": 3,
00:21:52.827    "num_base_bdevs_discovered": 1,
00:21:52.827    "num_base_bdevs_operational": 3,
00:21:52.827    "base_bdevs_list": [
00:21:52.827      {
00:21:52.827        "name": "BaseBdev1",
00:21:52.827        "uuid": "c4ada1a9-be2a-4823-add2-0f10d23ae709",
00:21:52.827        "is_configured": true,
00:21:52.827        "data_offset": 2048,
00:21:52.827        "data_size": 63488
00:21:52.827      },
00:21:52.827      {
00:21:52.827        "name": "BaseBdev2",
00:21:52.827        "uuid": "00000000-0000-0000-0000-000000000000",
00:21:52.827        "is_configured": false,
00:21:52.827        "data_offset": 0,
00:21:52.827        "data_size": 0
00:21:52.827      },
00:21:52.827      {
00:21:52.827        "name": "BaseBdev3",
00:21:52.827        "uuid": "00000000-0000-0000-0000-000000000000",
00:21:52.827        "is_configured": false,
00:21:52.827        "data_offset": 0,
00:21:52.827        "data_size": 0
00:21:52.827      }
00:21:52.827    ]
00:21:52.827  }'
00:21:52.827   17:05:45	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:21:52.827   17:05:45	-- common/autotest_common.sh@10 -- # set +x
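The superblock reservation is now visible: BaseBdev1 contributes data_offset 2048 and data_size 63488, i.e. 65536 - 2048 blocks, where the non-superblock run exposed the full 65536. At 512-byte blocks that is 1 MiB of metadata per member.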
00:21:53.397   17:05:46	-- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid
00:21:53.656  [2024-11-19 17:05:46.444568] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:21:53.656  [2024-11-19 17:05:46.444657] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005780 name Existed_Raid, state configuring
00:21:53.656   17:05:46	-- bdev/bdev_raid.sh@244 -- # '[' true = true ']'
00:21:53.656   17:05:46	-- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1
00:21:53.914   17:05:46	-- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1
00:21:54.173  BaseBdev1
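The delete/create pair above (@246-@247) is specific to the superblock variant: the first create attempt claimed BaseBdev1 and, with -s, presumably stamped a superblock onto it; deleting the array releases the claim but leaves that stale metadata behind, so the test recreates the malloc bdev to start from a clean, unclaimed device. The dump that follows accordingly shows a fresh UUID and claimed false.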
00:21:54.173   17:05:46	-- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1
00:21:54.173   17:05:46	-- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1
00:21:54.173   17:05:46	-- common/autotest_common.sh@898 -- # local bdev_timeout=
00:21:54.173   17:05:46	-- common/autotest_common.sh@899 -- # local i
00:21:54.173   17:05:46	-- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:21:54.173   17:05:46	-- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:21:54.173   17:05:46	-- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:21:54.432   17:05:47	-- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000
00:21:54.432  [
00:21:54.432    {
00:21:54.432      "name": "BaseBdev1",
00:21:54.432      "aliases": [
00:21:54.432        "f39acb7a-ce97-4de9-97bb-5c297ac20ee9"
00:21:54.432      ],
00:21:54.432      "product_name": "Malloc disk",
00:21:54.432      "block_size": 512,
00:21:54.432      "num_blocks": 65536,
00:21:54.432      "uuid": "f39acb7a-ce97-4de9-97bb-5c297ac20ee9",
00:21:54.432      "assigned_rate_limits": {
00:21:54.432        "rw_ios_per_sec": 0,
00:21:54.432        "rw_mbytes_per_sec": 0,
00:21:54.432        "r_mbytes_per_sec": 0,
00:21:54.432        "w_mbytes_per_sec": 0
00:21:54.432      },
00:21:54.432      "claimed": false,
00:21:54.432      "zoned": false,
00:21:54.432      "supported_io_types": {
00:21:54.432        "read": true,
00:21:54.432        "write": true,
00:21:54.432        "unmap": true,
00:21:54.432        "write_zeroes": true,
00:21:54.432        "flush": true,
00:21:54.432        "reset": true,
00:21:54.432        "compare": false,
00:21:54.432        "compare_and_write": false,
00:21:54.432        "abort": true,
00:21:54.432        "nvme_admin": false,
00:21:54.432        "nvme_io": false
00:21:54.432      },
00:21:54.432      "memory_domains": [
00:21:54.432        {
00:21:54.432          "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:21:54.432          "dma_device_type": 2
00:21:54.432        }
00:21:54.432      ],
00:21:54.432      "driver_specific": {}
00:21:54.432    }
00:21:54.432  ]
00:21:54.432   17:05:47	-- common/autotest_common.sh@905 -- # return 0
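(The waitforbdev helper seen above follows a simple wait-then-query idiom; a minimal sketch of the same two steps, with the 2000 ms timeout mirroring the trace:)

    # block until all registered bdevs have finished examine
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        bdev_wait_for_examine
    # then query the named bdev, failing if it does not appear within 2000 ms
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        bdev_get_bdevs -b BaseBdev1 -t 2000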
00:21:54.432   17:05:47	-- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
00:21:54.691  [2024-11-19 17:05:47.525843] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:21:54.691  [2024-11-19 17:05:47.528099] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:21:54.691  [2024-11-19 17:05:47.528185] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:21:54.691  [2024-11-19 17:05:47.528196] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:21:54.691  [2024-11-19 17:05:47.528223] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
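(As the two NOTICE lines show, bdev_raid_create accepts base bdevs that do not exist yet: the array is registered in the "configuring" state and each missing member is claimed later as it appears. A sketch of that deferred flow, using the same commands as the trace:)

    # register the array first; BaseBdev2 and BaseBdev3 are still missing
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        bdev_raid_create -z 64 -s -r raid5f \
        -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
    # creating a missing member later gets it claimed by the raid immediately
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        bdev_malloc_create 32 512 -b BaseBdev2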
00:21:54.951   17:05:47	-- bdev/bdev_raid.sh@254 -- # (( i = 1 ))
00:21:54.951   17:05:47	-- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs ))
00:21:54.951   17:05:47	-- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3
00:21:54.951   17:05:47	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:21:54.951   17:05:47	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:21:54.951   17:05:47	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f
00:21:54.951   17:05:47	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:21:54.951   17:05:47	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:21:54.951   17:05:47	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:21:54.951   17:05:47	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:21:54.951   17:05:47	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:21:54.951   17:05:47	-- bdev/bdev_raid.sh@125 -- # local tmp
00:21:54.951    17:05:47	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:21:54.951    17:05:47	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:21:54.951   17:05:47	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:21:54.951    "name": "Existed_Raid",
00:21:54.951    "uuid": "0e44472a-0df5-4f04-9223-6fd9bbb473ad",
00:21:54.951    "strip_size_kb": 64,
00:21:54.951    "state": "configuring",
00:21:54.951    "raid_level": "raid5f",
00:21:54.951    "superblock": true,
00:21:54.951    "num_base_bdevs": 3,
00:21:54.951    "num_base_bdevs_discovered": 1,
00:21:54.951    "num_base_bdevs_operational": 3,
00:21:54.951    "base_bdevs_list": [
00:21:54.951      {
00:21:54.951        "name": "BaseBdev1",
00:21:54.951        "uuid": "f39acb7a-ce97-4de9-97bb-5c297ac20ee9",
00:21:54.951        "is_configured": true,
00:21:54.951        "data_offset": 2048,
00:21:54.951        "data_size": 63488
00:21:54.951      },
00:21:54.951      {
00:21:54.951        "name": "BaseBdev2",
00:21:54.951        "uuid": "00000000-0000-0000-0000-000000000000",
00:21:54.951        "is_configured": false,
00:21:54.951        "data_offset": 0,
00:21:54.951        "data_size": 0
00:21:54.951      },
00:21:54.951      {
00:21:54.951        "name": "BaseBdev3",
00:21:54.951        "uuid": "00000000-0000-0000-0000-000000000000",
00:21:54.951        "is_configured": false,
00:21:54.951        "data_offset": 0,
00:21:54.951        "data_size": 0
00:21:54.951      }
00:21:54.951    ]
00:21:54.951  }'
00:21:54.951   17:05:47	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:21:54.951   17:05:47	-- common/autotest_common.sh@10 -- # set +x
00:21:55.519   17:05:48	-- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2
00:21:55.778  [2024-11-19 17:05:48.626455] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:21:55.778  BaseBdev2
00:21:56.037   17:05:48	-- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2
00:21:56.037   17:05:48	-- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2
00:21:56.037   17:05:48	-- common/autotest_common.sh@898 -- # local bdev_timeout=
00:21:56.037   17:05:48	-- common/autotest_common.sh@899 -- # local i
00:21:56.037   17:05:48	-- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:21:56.037   17:05:48	-- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:21:56.037   17:05:48	-- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:21:56.037   17:05:48	-- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000
00:21:56.295  [
00:21:56.295    {
00:21:56.295      "name": "BaseBdev2",
00:21:56.295      "aliases": [
00:21:56.295        "5b766649-2280-4ab4-b149-a71de8b9d6f1"
00:21:56.295      ],
00:21:56.295      "product_name": "Malloc disk",
00:21:56.295      "block_size": 512,
00:21:56.295      "num_blocks": 65536,
00:21:56.295      "uuid": "5b766649-2280-4ab4-b149-a71de8b9d6f1",
00:21:56.295      "assigned_rate_limits": {
00:21:56.295        "rw_ios_per_sec": 0,
00:21:56.295        "rw_mbytes_per_sec": 0,
00:21:56.295        "r_mbytes_per_sec": 0,
00:21:56.295        "w_mbytes_per_sec": 0
00:21:56.295      },
00:21:56.295      "claimed": true,
00:21:56.295      "claim_type": "exclusive_write",
00:21:56.295      "zoned": false,
00:21:56.295      "supported_io_types": {
00:21:56.295        "read": true,
00:21:56.295        "write": true,
00:21:56.295        "unmap": true,
00:21:56.296        "write_zeroes": true,
00:21:56.296        "flush": true,
00:21:56.296        "reset": true,
00:21:56.296        "compare": false,
00:21:56.296        "compare_and_write": false,
00:21:56.296        "abort": true,
00:21:56.296        "nvme_admin": false,
00:21:56.296        "nvme_io": false
00:21:56.296      },
00:21:56.296      "memory_domains": [
00:21:56.296        {
00:21:56.296          "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:21:56.296          "dma_device_type": 2
00:21:56.296        }
00:21:56.296      ],
00:21:56.296      "driver_specific": {}
00:21:56.296    }
00:21:56.296  ]
00:21:56.296   17:05:49	-- common/autotest_common.sh@905 -- # return 0
00:21:56.296   17:05:49	-- bdev/bdev_raid.sh@254 -- # (( i++ ))
00:21:56.296   17:05:49	-- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs ))
00:21:56.296   17:05:49	-- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3
00:21:56.296   17:05:49	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:21:56.296   17:05:49	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:21:56.296   17:05:49	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f
00:21:56.296   17:05:49	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:21:56.296   17:05:49	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:21:56.296   17:05:49	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:21:56.296   17:05:49	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:21:56.296   17:05:49	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:21:56.296   17:05:49	-- bdev/bdev_raid.sh@125 -- # local tmp
00:21:56.296    17:05:49	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:21:56.296    17:05:49	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:21:56.554   17:05:49	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:21:56.554    "name": "Existed_Raid",
00:21:56.554    "uuid": "0e44472a-0df5-4f04-9223-6fd9bbb473ad",
00:21:56.554    "strip_size_kb": 64,
00:21:56.554    "state": "configuring",
00:21:56.554    "raid_level": "raid5f",
00:21:56.554    "superblock": true,
00:21:56.554    "num_base_bdevs": 3,
00:21:56.554    "num_base_bdevs_discovered": 2,
00:21:56.554    "num_base_bdevs_operational": 3,
00:21:56.554    "base_bdevs_list": [
00:21:56.554      {
00:21:56.554        "name": "BaseBdev1",
00:21:56.554        "uuid": "f39acb7a-ce97-4de9-97bb-5c297ac20ee9",
00:21:56.554        "is_configured": true,
00:21:56.554        "data_offset": 2048,
00:21:56.554        "data_size": 63488
00:21:56.554      },
00:21:56.554      {
00:21:56.554        "name": "BaseBdev2",
00:21:56.554        "uuid": "5b766649-2280-4ab4-b149-a71de8b9d6f1",
00:21:56.554        "is_configured": true,
00:21:56.554        "data_offset": 2048,
00:21:56.554        "data_size": 63488
00:21:56.554      },
00:21:56.554      {
00:21:56.554        "name": "BaseBdev3",
00:21:56.554        "uuid": "00000000-0000-0000-0000-000000000000",
00:21:56.554        "is_configured": false,
00:21:56.554        "data_offset": 0,
00:21:56.554        "data_size": 0
00:21:56.554      }
00:21:56.554    ]
00:21:56.554  }'
00:21:56.554   17:05:49	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:21:56.554   17:05:49	-- common/autotest_common.sh@10 -- # set +x
00:21:57.122   17:05:49	-- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3
00:21:57.381  [2024-11-19 17:05:50.189879] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:21:57.381  [2024-11-19 17:05:50.190110] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006680
00:21:57.381  [2024-11-19 17:05:50.190124] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512
00:21:57.381  [2024-11-19 17:05:50.190249] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002120
00:21:57.381  [2024-11-19 17:05:50.191058] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006680
00:21:57.381  [2024-11-19 17:05:50.191080] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006680
00:21:57.381  [2024-11-19 17:05:50.191228] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:21:57.381  BaseBdev3
00:21:57.381   17:05:50	-- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3
00:21:57.381   17:05:50	-- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3
00:21:57.381   17:05:50	-- common/autotest_common.sh@898 -- # local bdev_timeout=
00:21:57.381   17:05:50	-- common/autotest_common.sh@899 -- # local i
00:21:57.381   17:05:50	-- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:21:57.381   17:05:50	-- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:21:57.381   17:05:50	-- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:21:57.640   17:05:50	-- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000
00:21:57.899  [
00:21:57.899    {
00:21:57.899      "name": "BaseBdev3",
00:21:57.899      "aliases": [
00:21:57.899        "eecaed95-0628-46e9-91bf-eeecf629e8f1"
00:21:57.899      ],
00:21:57.899      "product_name": "Malloc disk",
00:21:57.899      "block_size": 512,
00:21:57.899      "num_blocks": 65536,
00:21:57.899      "uuid": "eecaed95-0628-46e9-91bf-eeecf629e8f1",
00:21:57.899      "assigned_rate_limits": {
00:21:57.899        "rw_ios_per_sec": 0,
00:21:57.899        "rw_mbytes_per_sec": 0,
00:21:57.899        "r_mbytes_per_sec": 0,
00:21:57.899        "w_mbytes_per_sec": 0
00:21:57.899      },
00:21:57.899      "claimed": true,
00:21:57.899      "claim_type": "exclusive_write",
00:21:57.899      "zoned": false,
00:21:57.899      "supported_io_types": {
00:21:57.899        "read": true,
00:21:57.899        "write": true,
00:21:57.899        "unmap": true,
00:21:57.899        "write_zeroes": true,
00:21:57.899        "flush": true,
00:21:57.899        "reset": true,
00:21:57.899        "compare": false,
00:21:57.899        "compare_and_write": false,
00:21:57.899        "abort": true,
00:21:57.899        "nvme_admin": false,
00:21:57.899        "nvme_io": false
00:21:57.899      },
00:21:57.899      "memory_domains": [
00:21:57.899        {
00:21:57.899          "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:21:57.899          "dma_device_type": 2
00:21:57.899        }
00:21:57.899      ],
00:21:57.899      "driver_specific": {}
00:21:57.899    }
00:21:57.899  ]
00:21:57.899   17:05:50	-- common/autotest_common.sh@905 -- # return 0
00:21:57.899   17:05:50	-- bdev/bdev_raid.sh@254 -- # (( i++ ))
00:21:57.899   17:05:50	-- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs ))
00:21:57.899   17:05:50	-- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3
00:21:57.899   17:05:50	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:21:57.899   17:05:50	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:21:57.899   17:05:50	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f
00:21:57.899   17:05:50	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:21:57.899   17:05:50	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:21:57.899   17:05:50	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:21:57.899   17:05:50	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:21:57.899   17:05:50	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:21:57.899   17:05:50	-- bdev/bdev_raid.sh@125 -- # local tmp
00:21:57.899    17:05:50	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:21:57.899    17:05:50	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:21:58.158   17:05:50	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:21:58.158    "name": "Existed_Raid",
00:21:58.158    "uuid": "0e44472a-0df5-4f04-9223-6fd9bbb473ad",
00:21:58.158    "strip_size_kb": 64,
00:21:58.158    "state": "online",
00:21:58.158    "raid_level": "raid5f",
00:21:58.158    "superblock": true,
00:21:58.158    "num_base_bdevs": 3,
00:21:58.158    "num_base_bdevs_discovered": 3,
00:21:58.158    "num_base_bdevs_operational": 3,
00:21:58.158    "base_bdevs_list": [
00:21:58.158      {
00:21:58.158        "name": "BaseBdev1",
00:21:58.158        "uuid": "f39acb7a-ce97-4de9-97bb-5c297ac20ee9",
00:21:58.158        "is_configured": true,
00:21:58.158        "data_offset": 2048,
00:21:58.158        "data_size": 63488
00:21:58.158      },
00:21:58.158      {
00:21:58.158        "name": "BaseBdev2",
00:21:58.158        "uuid": "5b766649-2280-4ab4-b149-a71de8b9d6f1",
00:21:58.158        "is_configured": true,
00:21:58.158        "data_offset": 2048,
00:21:58.158        "data_size": 63488
00:21:58.158      },
00:21:58.158      {
00:21:58.158        "name": "BaseBdev3",
00:21:58.158        "uuid": "eecaed95-0628-46e9-91bf-eeecf629e8f1",
00:21:58.158        "is_configured": true,
00:21:58.158        "data_offset": 2048,
00:21:58.158        "data_size": 63488
00:21:58.158      }
00:21:58.158    ]
00:21:58.158  }'
00:21:58.158   17:05:50	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:21:58.158   17:05:50	-- common/autotest_common.sh@10 -- # set +x
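(The next step deletes BaseBdev1; because raid5f carries redundancy — has_redundancy returns 0 for it — the array is expected to stay online with one member gone. One way to watch the member count shrink, reusing the query from the trace; the jq formatting is this sketch's choice:)

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        bdev_raid_get_bdevs all \
      | jq -r '.[] | select(.name == "Existed_Raid")
               | "\(.state) \(.num_base_bdevs_discovered)/\(.num_base_bdevs)"'
    # "online 2/3" after the first deletion, per the dump below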
00:21:58.725   17:05:51	-- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1
00:21:58.984  [2024-11-19 17:05:51.694334] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:21:58.984   17:05:51	-- bdev/bdev_raid.sh@263 -- # local expected_state
00:21:58.984   17:05:51	-- bdev/bdev_raid.sh@264 -- # has_redundancy raid5f
00:21:58.984   17:05:51	-- bdev/bdev_raid.sh@195 -- # case $1 in
00:21:58.984   17:05:51	-- bdev/bdev_raid.sh@196 -- # return 0
00:21:58.984   17:05:51	-- bdev/bdev_raid.sh@267 -- # expected_state=online
00:21:58.984   17:05:51	-- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2
00:21:58.984   17:05:51	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:21:58.984   17:05:51	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:21:58.984   17:05:51	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f
00:21:58.984   17:05:51	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:21:58.984   17:05:51	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2
00:21:58.984   17:05:51	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:21:58.984   17:05:51	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:21:58.984   17:05:51	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:21:58.984   17:05:51	-- bdev/bdev_raid.sh@125 -- # local tmp
00:21:58.984    17:05:51	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:21:58.984    17:05:51	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:21:59.242   17:05:52	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:21:59.242    "name": "Existed_Raid",
00:21:59.242    "uuid": "0e44472a-0df5-4f04-9223-6fd9bbb473ad",
00:21:59.242    "strip_size_kb": 64,
00:21:59.242    "state": "online",
00:21:59.242    "raid_level": "raid5f",
00:21:59.242    "superblock": true,
00:21:59.242    "num_base_bdevs": 3,
00:21:59.242    "num_base_bdevs_discovered": 2,
00:21:59.242    "num_base_bdevs_operational": 2,
00:21:59.242    "base_bdevs_list": [
00:21:59.242      {
00:21:59.242        "name": null,
00:21:59.242        "uuid": "00000000-0000-0000-0000-000000000000",
00:21:59.242        "is_configured": false,
00:21:59.242        "data_offset": 2048,
00:21:59.242        "data_size": 63488
00:21:59.242      },
00:21:59.242      {
00:21:59.242        "name": "BaseBdev2",
00:21:59.242        "uuid": "5b766649-2280-4ab4-b149-a71de8b9d6f1",
00:21:59.242        "is_configured": true,
00:21:59.242        "data_offset": 2048,
00:21:59.242        "data_size": 63488
00:21:59.242      },
00:21:59.242      {
00:21:59.242        "name": "BaseBdev3",
00:21:59.242        "uuid": "eecaed95-0628-46e9-91bf-eeecf629e8f1",
00:21:59.242        "is_configured": true,
00:21:59.242        "data_offset": 2048,
00:21:59.242        "data_size": 63488
00:21:59.242      }
00:21:59.242    ]
00:21:59.242  }'
00:21:59.242   17:05:52	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:21:59.242   17:05:52	-- common/autotest_common.sh@10 -- # set +x
00:21:59.808   17:05:52	-- bdev/bdev_raid.sh@273 -- # (( i = 1 ))
00:21:59.808   17:05:52	-- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs ))
00:21:59.808    17:05:52	-- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:21:59.808    17:05:52	-- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]'
00:22:00.066   17:05:52	-- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid
00:22:00.066   17:05:52	-- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:22:00.066   17:05:52	-- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2
00:22:00.329  [2024-11-19 17:05:52.949762] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:22:00.329  [2024-11-19 17:05:52.949803] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:22:00.329  [2024-11-19 17:05:52.949886] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:22:00.329   17:05:52	-- bdev/bdev_raid.sh@273 -- # (( i++ ))
00:22:00.329   17:05:52	-- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs ))
00:22:00.329    17:05:52	-- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:22:00.329    17:05:52	-- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]'
00:22:00.586   17:05:53	-- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid
00:22:00.586   17:05:53	-- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:22:00.586   17:05:53	-- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3
00:22:00.844  [2024-11-19 17:05:53.502541] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:22:00.844  [2024-11-19 17:05:53.502641] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state offline
00:22:00.844   17:05:53	-- bdev/bdev_raid.sh@273 -- # (( i++ ))
00:22:00.844   17:05:53	-- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs ))
00:22:00.844    17:05:53	-- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:22:00.844    17:05:53	-- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)'
00:22:01.102   17:05:53	-- bdev/bdev_raid.sh@281 -- # raid_bdev=
00:22:01.102   17:05:53	-- bdev/bdev_raid.sh@282 -- # '[' -n '' ']'
00:22:01.102   17:05:53	-- bdev/bdev_raid.sh@287 -- # killprocess 137646
00:22:01.102   17:05:53	-- common/autotest_common.sh@936 -- # '[' -z 137646 ']'
00:22:01.102   17:05:53	-- common/autotest_common.sh@940 -- # kill -0 137646
00:22:01.102    17:05:53	-- common/autotest_common.sh@941 -- # uname
00:22:01.102   17:05:53	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:22:01.102    17:05:53	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 137646
00:22:01.102  killing process with pid 137646
00:22:01.102   17:05:53	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:22:01.102   17:05:53	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:22:01.102   17:05:53	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 137646'
00:22:01.102   17:05:53	-- common/autotest_common.sh@955 -- # kill 137646
00:22:01.102   17:05:53	-- common/autotest_common.sh@960 -- # wait 137646
00:22:01.102  [2024-11-19 17:05:53.841423] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:22:01.102  [2024-11-19 17:05:53.841508] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:22:01.360  ************************************
00:22:01.360  END TEST raid5f_state_function_test_sb
00:22:01.360  ************************************
00:22:01.360   17:05:54	-- bdev/bdev_raid.sh@289 -- # return 0
00:22:01.360  
00:22:01.360  real	0m11.973s
00:22:01.360  user	0m21.536s
00:22:01.360  sys	0m1.942s
00:22:01.360   17:05:54	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:22:01.360   17:05:54	-- common/autotest_common.sh@10 -- # set +x
00:22:01.360   17:05:54	-- bdev/bdev_raid.sh@746 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 3
00:22:01.360   17:05:54	-- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']'
00:22:01.360   17:05:54	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:22:01.360   17:05:54	-- common/autotest_common.sh@10 -- # set +x
00:22:01.360  ************************************
00:22:01.360  START TEST raid5f_superblock_test
00:22:01.360  ************************************
00:22:01.360   17:05:54	-- common/autotest_common.sh@1114 -- # raid_superblock_test raid5f 3
00:22:01.360   17:05:54	-- bdev/bdev_raid.sh@338 -- # local raid_level=raid5f
00:22:01.360   17:05:54	-- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=3
00:22:01.360   17:05:54	-- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=()
00:22:01.360   17:05:54	-- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc
00:22:01.360   17:05:54	-- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=()
00:22:01.360   17:05:54	-- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt
00:22:01.360   17:05:54	-- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=()
00:22:01.360   17:05:54	-- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid
00:22:01.360   17:05:54	-- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1
00:22:01.360   17:05:54	-- bdev/bdev_raid.sh@344 -- # local strip_size
00:22:01.360   17:05:54	-- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg
00:22:01.360   17:05:54	-- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid
00:22:01.360   17:05:54	-- bdev/bdev_raid.sh@347 -- # local raid_bdev
00:22:01.360   17:05:54	-- bdev/bdev_raid.sh@349 -- # '[' raid5f '!=' raid1 ']'
00:22:01.360   17:05:54	-- bdev/bdev_raid.sh@350 -- # strip_size=64
00:22:01.360   17:05:54	-- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64'
00:22:01.360   17:05:54	-- bdev/bdev_raid.sh@357 -- # raid_pid=138028
00:22:01.360   17:05:54	-- bdev/bdev_raid.sh@358 -- # waitforlisten 138028 /var/tmp/spdk-raid.sock
00:22:01.360   17:05:54	-- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid
00:22:01.360   17:05:54	-- common/autotest_common.sh@829 -- # '[' -z 138028 ']'
00:22:01.360   17:05:54	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock
00:22:01.360   17:05:54	-- common/autotest_common.sh@834 -- # local max_retries=100
00:22:01.360   17:05:54	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...'
00:22:01.360  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...
00:22:01.360   17:05:54	-- common/autotest_common.sh@838 -- # xtrace_disable
00:22:01.360   17:05:54	-- common/autotest_common.sh@10 -- # set +x
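(waitforlisten polls until the bdev_svc app accepts RPCs on the socket passed via -r; a rough standalone equivalent — the rpc_get_methods probe is this sketch's choice, not necessarily the helper's exact mechanism:)

    /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc \
        -r /var/tmp/spdk-raid.sock -L bdev_raid &
    # poll until the UNIX socket answers a trivial RPC
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
          rpc_get_methods >/dev/null 2>&1; do sleep 0.1; done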
00:22:01.618  [2024-11-19 17:05:54.245686] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:22:01.618  [2024-11-19 17:05:54.245944] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid138028 ]
00:22:01.618  [2024-11-19 17:05:54.401225] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:22:01.618  [2024-11-19 17:05:54.449371] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:22:01.876  [2024-11-19 17:05:54.492488] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:22:02.444   17:05:55	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:22:02.444   17:05:55	-- common/autotest_common.sh@862 -- # return 0
00:22:02.444   17:05:55	-- bdev/bdev_raid.sh@361 -- # (( i = 1 ))
00:22:02.444   17:05:55	-- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs ))
00:22:02.444   17:05:55	-- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1
00:22:02.444   17:05:55	-- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1
00:22:02.444   17:05:55	-- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001
00:22:02.444   17:05:55	-- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc)
00:22:02.444   17:05:55	-- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt)
00:22:02.444   17:05:55	-- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:22:02.444   17:05:55	-- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1
00:22:02.444  malloc1
00:22:02.444   17:05:55	-- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:22:03.012  [2024-11-19 17:05:55.572731] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:22:03.012  [2024-11-19 17:05:55.572859] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:22:03.012  [2024-11-19 17:05:55.572909] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000005a80
00:22:03.012  [2024-11-19 17:05:55.572971] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:22:03.012  [2024-11-19 17:05:55.575812] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:22:03.012  [2024-11-19 17:05:55.575894] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:22:03.012  pt1
00:22:03.012   17:05:55	-- bdev/bdev_raid.sh@361 -- # (( i++ ))
00:22:03.012   17:05:55	-- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs ))
00:22:03.012   17:05:55	-- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2
00:22:03.012   17:05:55	-- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2
00:22:03.012   17:05:55	-- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002
00:22:03.012   17:05:55	-- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc)
00:22:03.012   17:05:55	-- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt)
00:22:03.012   17:05:55	-- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:22:03.012   17:05:55	-- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2
00:22:03.012  malloc2
00:22:03.272   17:05:55	-- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:22:03.531  [2024-11-19 17:05:56.138216] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:22:03.531  [2024-11-19 17:05:56.138302] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:22:03.531  [2024-11-19 17:05:56.138340] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680
00:22:03.531  [2024-11-19 17:05:56.138389] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:22:03.531  [2024-11-19 17:05:56.141020] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:22:03.531  [2024-11-19 17:05:56.141083] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:22:03.531  pt2
00:22:03.531   17:05:56	-- bdev/bdev_raid.sh@361 -- # (( i++ ))
00:22:03.531   17:05:56	-- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs ))
00:22:03.531   17:05:56	-- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3
00:22:03.531   17:05:56	-- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3
00:22:03.531   17:05:56	-- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003
00:22:03.531   17:05:56	-- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc)
00:22:03.531   17:05:56	-- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt)
00:22:03.531   17:05:56	-- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:22:03.531   17:05:56	-- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3
00:22:03.790  malloc3
00:22:03.790   17:05:56	-- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:22:04.049  [2024-11-19 17:05:56.717090] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:22:04.049  [2024-11-19 17:05:56.717203] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:22:04.049  [2024-11-19 17:05:56.717247] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
00:22:04.049  [2024-11-19 17:05:56.717298] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:22:04.049  [2024-11-19 17:05:56.720019] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:22:04.049  [2024-11-19 17:05:56.720107] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:22:04.049  pt3
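(Each raid member above is built the same way: a 32 MiB malloc bdev with 512-byte blocks, wrapped in a passthru bdev with a fixed UUID. The three traced create sequences collapse to one loop — a sketch using the same RPCs:)

    for i in 1 2 3; do
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
            bdev_malloc_create 32 512 -b malloc$i
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
            bdev_passthru_create -b malloc$i -p pt$i \
            -u 00000000-0000-0000-0000-00000000000$i
    done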
00:22:04.049   17:05:56	-- bdev/bdev_raid.sh@361 -- # (( i++ ))
00:22:04.049   17:05:56	-- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs ))
00:22:04.049   17:05:56	-- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'pt1 pt2 pt3' -n raid_bdev1 -s
00:22:04.308  [2024-11-19 17:05:56.925249] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:22:04.308  [2024-11-19 17:05:56.927586] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:22:04.308  [2024-11-19 17:05:56.927651] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:22:04.308  [2024-11-19 17:05:56.927854] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007880
00:22:04.308  [2024-11-19 17:05:56.927864] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512
00:22:04.308  [2024-11-19 17:05:56.928061] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000022c0
00:22:04.308  [2024-11-19 17:05:56.928854] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007880
00:22:04.308  [2024-11-19 17:05:56.928876] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000007880
00:22:04.308  [2024-11-19 17:05:56.929061] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
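(With all three passthru bdevs in place, the array is assembled with -s so a superblock is written to each member; per the later trace, this is what lets the array reassemble after members are deleted and recreated, and what makes re-creation from the raw malloc bdevs fail. The traced command:)

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        bdev_raid_create -z 64 -r raid5f -b 'pt1 pt2 pt3' -n raid_bdev1 -s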
00:22:04.308   17:05:56	-- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3
00:22:04.308   17:05:56	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:22:04.308   17:05:56	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:22:04.308   17:05:56	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f
00:22:04.308   17:05:56	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:22:04.308   17:05:56	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:22:04.308   17:05:56	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:22:04.308   17:05:56	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:22:04.308   17:05:56	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:22:04.308   17:05:56	-- bdev/bdev_raid.sh@125 -- # local tmp
00:22:04.308    17:05:56	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:22:04.308    17:05:56	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:22:04.567   17:05:57	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:22:04.567    "name": "raid_bdev1",
00:22:04.567    "uuid": "ef123c05-41d4-4957-9319-80b79a1910a2",
00:22:04.568    "strip_size_kb": 64,
00:22:04.568    "state": "online",
00:22:04.568    "raid_level": "raid5f",
00:22:04.568    "superblock": true,
00:22:04.568    "num_base_bdevs": 3,
00:22:04.568    "num_base_bdevs_discovered": 3,
00:22:04.568    "num_base_bdevs_operational": 3,
00:22:04.568    "base_bdevs_list": [
00:22:04.568      {
00:22:04.568        "name": "pt1",
00:22:04.568        "uuid": "9486484d-a7c8-533c-bc80-838be38939db",
00:22:04.568        "is_configured": true,
00:22:04.568        "data_offset": 2048,
00:22:04.568        "data_size": 63488
00:22:04.568      },
00:22:04.568      {
00:22:04.568        "name": "pt2",
00:22:04.568        "uuid": "9eac1f2c-a04c-5db2-980e-de3d578136b7",
00:22:04.568        "is_configured": true,
00:22:04.568        "data_offset": 2048,
00:22:04.568        "data_size": 63488
00:22:04.568      },
00:22:04.568      {
00:22:04.568        "name": "pt3",
00:22:04.568        "uuid": "29a6a5ea-9e5c-5ccd-bf40-4b14c476ac01",
00:22:04.568        "is_configured": true,
00:22:04.568        "data_offset": 2048,
00:22:04.568        "data_size": 63488
00:22:04.568      }
00:22:04.568    ]
00:22:04.568  }'
00:22:04.568   17:05:57	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:22:04.568   17:05:57	-- common/autotest_common.sh@10 -- # set +x
00:22:05.135    17:05:57	-- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1
00:22:05.135    17:05:57	-- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid'
00:22:05.135  [2024-11-19 17:05:57.961568] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:22:05.135   17:05:57	-- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=ef123c05-41d4-4957-9319-80b79a1910a2
00:22:05.135   17:05:57	-- bdev/bdev_raid.sh@380 -- # '[' -z ef123c05-41d4-4957-9319-80b79a1910a2 ']'
00:22:05.135   17:05:57	-- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1
00:22:05.394  [2024-11-19 17:05:58.241379] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:22:05.394  [2024-11-19 17:05:58.241417] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:22:05.394  [2024-11-19 17:05:58.241512] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:22:05.394  [2024-11-19 17:05:58.241602] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:22:05.394  [2024-11-19 17:05:58.241614] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007880 name raid_bdev1, state offline
00:22:05.653    17:05:58	-- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:22:05.653    17:05:58	-- bdev/bdev_raid.sh@386 -- # jq -r '.[]'
00:22:05.912   17:05:58	-- bdev/bdev_raid.sh@386 -- # raid_bdev=
00:22:05.912   17:05:58	-- bdev/bdev_raid.sh@387 -- # '[' -n '' ']'
00:22:05.912   17:05:58	-- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}"
00:22:05.912   17:05:58	-- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1
00:22:06.171   17:05:58	-- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}"
00:22:06.171   17:05:58	-- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2
00:22:06.171   17:05:59	-- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}"
00:22:06.171   17:05:59	-- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3
00:22:06.779    17:05:59	-- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs
00:22:06.779    17:05:59	-- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any'
00:22:06.779   17:05:59	-- bdev/bdev_raid.sh@395 -- # '[' false == true ']'
00:22:06.779   17:05:59	-- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3' -n raid_bdev1
00:22:06.779   17:05:59	-- common/autotest_common.sh@650 -- # local es=0
00:22:06.779   17:05:59	-- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3' -n raid_bdev1
00:22:06.779   17:05:59	-- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:22:06.779   17:05:59	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:22:06.779    17:05:59	-- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:22:06.779   17:05:59	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:22:06.779    17:05:59	-- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:22:06.779   17:05:59	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:22:06.779   17:05:59	-- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:22:06.779   17:05:59	-- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]]
00:22:06.779   17:05:59	-- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3' -n raid_bdev1
00:22:07.051  [2024-11-19 17:05:59.837749] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed
00:22:07.051  [2024-11-19 17:05:59.840057] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
00:22:07.051  [2024-11-19 17:05:59.840110] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed
00:22:07.051  [2024-11-19 17:05:59.840153] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1
00:22:07.051  [2024-11-19 17:05:59.840237] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2
00:22:07.051  [2024-11-19 17:05:59.840265] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3
00:22:07.051  [2024-11-19 17:05:59.840309] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:22:07.051  [2024-11-19 17:05:59.840336] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007e80 name raid_bdev1, state configuring
00:22:07.051  request:
00:22:07.051  {
00:22:07.051    "name": "raid_bdev1",
00:22:07.051    "raid_level": "raid5f",
00:22:07.051    "base_bdevs": [
00:22:07.051      "malloc1",
00:22:07.051      "malloc2",
00:22:07.051      "malloc3"
00:22:07.051    ],
00:22:07.051    "superblock": false,
00:22:07.051    "strip_size_kb": 64,
00:22:07.051    "method": "bdev_raid_create",
00:22:07.051    "req_id": 1
00:22:07.051  }
00:22:07.051  Got JSON-RPC error response
00:22:07.051  response:
00:22:07.051  {
00:22:07.051    "code": -17,
00:22:07.051    "message": "Failed to create RAID bdev raid_bdev1: File exists"
00:22:07.051  }
00:22:07.051   17:05:59	-- common/autotest_common.sh@653 -- # es=1
00:22:07.051   17:05:59	-- common/autotest_common.sh@661 -- # (( es > 128 ))
00:22:07.051   17:05:59	-- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:22:07.051   17:05:59	-- common/autotest_common.sh@677 -- # (( !es == 0 ))
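(The NOT wrapper above asserts this failure: each malloc bdev already carries a raid superblock naming raid_bdev1, so re-creating the array from them is rejected with -17, "File exists". A plain-shell sketch of the same assertion, without the NOT/es machinery:)

    if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
           bdev_raid_create -z 64 -r raid5f \
           -b 'malloc1 malloc2 malloc3' -n raid_bdev1; then
        echo 'bdev_raid_create unexpectedly succeeded' >&2
        exit 1
    fi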
00:22:07.051    17:05:59	-- bdev/bdev_raid.sh@403 -- # jq -r '.[]'
00:22:07.051    17:05:59	-- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:22:07.310   17:06:00	-- bdev/bdev_raid.sh@403 -- # raid_bdev=
00:22:07.310   17:06:00	-- bdev/bdev_raid.sh@404 -- # '[' -n '' ']'
00:22:07.310   17:06:00	-- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:22:07.569  [2024-11-19 17:06:00.329750] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:22:07.569  [2024-11-19 17:06:00.329849] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:22:07.569  [2024-11-19 17:06:00.329889] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480
00:22:07.569  [2024-11-19 17:06:00.329914] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:22:07.569  [2024-11-19 17:06:00.332537] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:22:07.569  [2024-11-19 17:06:00.332599] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:22:07.569  [2024-11-19 17:06:00.332705] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1
00:22:07.569  [2024-11-19 17:06:00.332777] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:22:07.569  pt1
00:22:07.569   17:06:00	-- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3
00:22:07.569   17:06:00	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:22:07.569   17:06:00	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:22:07.569   17:06:00	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f
00:22:07.569   17:06:00	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:22:07.569   17:06:00	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:22:07.569   17:06:00	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:22:07.569   17:06:00	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:22:07.569   17:06:00	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:22:07.569   17:06:00	-- bdev/bdev_raid.sh@125 -- # local tmp
00:22:07.569    17:06:00	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:22:07.569    17:06:00	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:22:07.828   17:06:00	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:22:07.829    "name": "raid_bdev1",
00:22:07.829    "uuid": "ef123c05-41d4-4957-9319-80b79a1910a2",
00:22:07.829    "strip_size_kb": 64,
00:22:07.829    "state": "configuring",
00:22:07.829    "raid_level": "raid5f",
00:22:07.829    "superblock": true,
00:22:07.829    "num_base_bdevs": 3,
00:22:07.829    "num_base_bdevs_discovered": 1,
00:22:07.829    "num_base_bdevs_operational": 3,
00:22:07.829    "base_bdevs_list": [
00:22:07.829      {
00:22:07.829        "name": "pt1",
00:22:07.829        "uuid": "9486484d-a7c8-533c-bc80-838be38939db",
00:22:07.829        "is_configured": true,
00:22:07.829        "data_offset": 2048,
00:22:07.829        "data_size": 63488
00:22:07.829      },
00:22:07.829      {
00:22:07.829        "name": null,
00:22:07.829        "uuid": "9eac1f2c-a04c-5db2-980e-de3d578136b7",
00:22:07.829        "is_configured": false,
00:22:07.829        "data_offset": 2048,
00:22:07.829        "data_size": 63488
00:22:07.829      },
00:22:07.829      {
00:22:07.829        "name": null,
00:22:07.829        "uuid": "29a6a5ea-9e5c-5ccd-bf40-4b14c476ac01",
00:22:07.829        "is_configured": false,
00:22:07.829        "data_offset": 2048,
00:22:07.829        "data_size": 63488
00:22:07.829      }
00:22:07.829    ]
00:22:07.829  }'
00:22:07.829   17:06:00	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:22:07.829   17:06:00	-- common/autotest_common.sh@10 -- # set +x
00:22:08.396   17:06:01	-- bdev/bdev_raid.sh@414 -- # '[' 3 -gt 2 ']'
00:22:08.396   17:06:01	-- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:22:08.654  [2024-11-19 17:06:01.434024] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:22:08.654  [2024-11-19 17:06:01.434126] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:22:08.654  [2024-11-19 17:06:01.434170] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80
00:22:08.654  [2024-11-19 17:06:01.434212] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:22:08.654  [2024-11-19 17:06:01.434642] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:22:08.654  [2024-11-19 17:06:01.434700] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:22:08.654  [2024-11-19 17:06:01.434796] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2
00:22:08.654  [2024-11-19 17:06:01.434821] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:22:08.654  pt2
00:22:08.654   17:06:01	-- bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2
00:22:08.913  [2024-11-19 17:06:01.710089] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2
00:22:08.913   17:06:01	-- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3
00:22:08.914   17:06:01	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:22:08.914   17:06:01	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:22:08.914   17:06:01	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f
00:22:08.914   17:06:01	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:22:08.914   17:06:01	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:22:08.914   17:06:01	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:22:08.914   17:06:01	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:22:08.914   17:06:01	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:22:08.914   17:06:01	-- bdev/bdev_raid.sh@125 -- # local tmp
00:22:08.914    17:06:01	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:22:08.914    17:06:01	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:22:09.172   17:06:01	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:22:09.172    "name": "raid_bdev1",
00:22:09.172    "uuid": "ef123c05-41d4-4957-9319-80b79a1910a2",
00:22:09.172    "strip_size_kb": 64,
00:22:09.172    "state": "configuring",
00:22:09.172    "raid_level": "raid5f",
00:22:09.172    "superblock": true,
00:22:09.172    "num_base_bdevs": 3,
00:22:09.172    "num_base_bdevs_discovered": 1,
00:22:09.172    "num_base_bdevs_operational": 3,
00:22:09.172    "base_bdevs_list": [
00:22:09.172      {
00:22:09.172        "name": "pt1",
00:22:09.172        "uuid": "9486484d-a7c8-533c-bc80-838be38939db",
00:22:09.172        "is_configured": true,
00:22:09.172        "data_offset": 2048,
00:22:09.172        "data_size": 63488
00:22:09.172      },
00:22:09.172      {
00:22:09.172        "name": null,
00:22:09.172        "uuid": "9eac1f2c-a04c-5db2-980e-de3d578136b7",
00:22:09.172        "is_configured": false,
00:22:09.172        "data_offset": 2048,
00:22:09.172        "data_size": 63488
00:22:09.172      },
00:22:09.172      {
00:22:09.172        "name": null,
00:22:09.172        "uuid": "29a6a5ea-9e5c-5ccd-bf40-4b14c476ac01",
00:22:09.172        "is_configured": false,
00:22:09.172        "data_offset": 2048,
00:22:09.172        "data_size": 63488
00:22:09.172      }
00:22:09.172    ]
00:22:09.172  }'
00:22:09.172   17:06:01	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:22:09.172   17:06:01	-- common/autotest_common.sh@10 -- # set +x
00:22:09.740   17:06:02	-- bdev/bdev_raid.sh@422 -- # (( i = 1 ))
00:22:09.740   17:06:02	-- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs ))
00:22:09.740   17:06:02	-- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:22:09.999  [2024-11-19 17:06:02.803691] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:22:09.999  [2024-11-19 17:06:02.803804] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:22:09.999  [2024-11-19 17:06:02.803843] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080
00:22:09.999  [2024-11-19 17:06:02.803872] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:22:09.999  [2024-11-19 17:06:02.804526] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:22:09.999  [2024-11-19 17:06:02.804580] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:22:09.999  [2024-11-19 17:06:02.804686] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2
00:22:09.999  [2024-11-19 17:06:02.804711] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:22:09.999  pt2
00:22:09.999   17:06:02	-- bdev/bdev_raid.sh@422 -- # (( i++ ))
00:22:09.999   17:06:02	-- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs ))
00:22:09.999   17:06:02	-- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:22:10.258  [2024-11-19 17:06:03.071364] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:22:10.258  [2024-11-19 17:06:03.071476] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:22:10.258  [2024-11-19 17:06:03.071518] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380
00:22:10.258  [2024-11-19 17:06:03.071548] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:22:10.258  [2024-11-19 17:06:03.072026] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:22:10.258  [2024-11-19 17:06:03.072075] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:22:10.258  [2024-11-19 17:06:03.072182] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3
00:22:10.258  [2024-11-19 17:06:03.072220] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:22:10.258  [2024-11-19 17:06:03.072368] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008a80
00:22:10.258  [2024-11-19 17:06:03.072396] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512
00:22:10.258  [2024-11-19 17:06:03.072462] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000026d0
00:22:10.258  [2024-11-19 17:06:03.073077] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008a80
00:22:10.258  [2024-11-19 17:06:03.073101] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008a80
00:22:10.258  [2024-11-19 17:06:03.073213] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:22:10.258  pt3
00:22:10.258   17:06:03	-- bdev/bdev_raid.sh@422 -- # (( i++ ))
00:22:10.258   17:06:03	-- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs ))
00:22:10.258   17:06:03	-- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3
00:22:10.258   17:06:03	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:22:10.258   17:06:03	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:22:10.258   17:06:03	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f
00:22:10.258   17:06:03	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:22:10.258   17:06:03	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:22:10.258   17:06:03	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:22:10.258   17:06:03	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:22:10.258   17:06:03	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:22:10.258   17:06:03	-- bdev/bdev_raid.sh@125 -- # local tmp
00:22:10.258    17:06:03	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:22:10.258    17:06:03	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:22:10.826   17:06:03	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:22:10.826    "name": "raid_bdev1",
00:22:10.826    "uuid": "ef123c05-41d4-4957-9319-80b79a1910a2",
00:22:10.826    "strip_size_kb": 64,
00:22:10.826    "state": "online",
00:22:10.826    "raid_level": "raid5f",
00:22:10.826    "superblock": true,
00:22:10.826    "num_base_bdevs": 3,
00:22:10.826    "num_base_bdevs_discovered": 3,
00:22:10.826    "num_base_bdevs_operational": 3,
00:22:10.826    "base_bdevs_list": [
00:22:10.826      {
00:22:10.826        "name": "pt1",
00:22:10.826        "uuid": "9486484d-a7c8-533c-bc80-838be38939db",
00:22:10.826        "is_configured": true,
00:22:10.826        "data_offset": 2048,
00:22:10.826        "data_size": 63488
00:22:10.826      },
00:22:10.826      {
00:22:10.826        "name": "pt2",
00:22:10.826        "uuid": "9eac1f2c-a04c-5db2-980e-de3d578136b7",
00:22:10.827        "is_configured": true,
00:22:10.827        "data_offset": 2048,
00:22:10.827        "data_size": 63488
00:22:10.827      },
00:22:10.827      {
00:22:10.827        "name": "pt3",
00:22:10.827        "uuid": "29a6a5ea-9e5c-5ccd-bf40-4b14c476ac01",
00:22:10.827        "is_configured": true,
00:22:10.827        "data_offset": 2048,
00:22:10.827        "data_size": 63488
00:22:10.827      }
00:22:10.827    ]
00:22:10.827  }'
00:22:10.827   17:06:03	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:22:10.827   17:06:03	-- common/autotest_common.sh@10 -- # set +x
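The verify_raid_bdev_state helper traced above (bdev_raid.sh @117-@129) fetches the full bdev list over the RPC socket, isolates the target with jq, and then asserts state, RAID level, strip size and operational-member count against the expected values. Its comparison body is not visible in the trace because xtrace is disabled at @129; a minimal sketch of the checks it is expected to perform, assuming the field names from the JSON dump above:

    # Hedged reconstruction, not the verbatim script body:
    # compare each dumped field against the expected local.
    [ "$(jq -r '.state'         <<< "$raid_bdev_info")" = "$expected_state" ]
    [ "$(jq -r '.raid_level'    <<< "$raid_bdev_info")" = "$raid_level" ]
    [ "$(jq -r '.strip_size_kb' <<< "$raid_bdev_info")" = "$strip_size" ]
    [ "$(jq -r '.num_base_bdevs_operational' <<< "$raid_bdev_info")" = "$num_base_bdevs_operational" ]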
00:22:11.395    17:06:04	-- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1
00:22:11.395    17:06:04	-- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid'
00:22:11.654  [2024-11-19 17:06:04.303758] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:22:11.654   17:06:04	-- bdev/bdev_raid.sh@430 -- # '[' ef123c05-41d4-4957-9319-80b79a1910a2 '!=' ef123c05-41d4-4957-9319-80b79a1910a2 ']'
00:22:11.654   17:06:04	-- bdev/bdev_raid.sh@434 -- # has_redundancy raid5f
00:22:11.654   17:06:04	-- bdev/bdev_raid.sh@195 -- # case $1 in
00:22:11.654   17:06:04	-- bdev/bdev_raid.sh@196 -- # return 0
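has_redundancy classifies the RAID level with a case statement (the "case $1 in" traced at @195); raid5f lands in the redundant branch and returns 0, which is what licenses the next step of deleting base bdev pt1 while still expecting the array to stay online. A hedged sketch of the helper, since only two of its lines are traced here:

    # Assumed reconstruction of has_redundancy:
    has_redundancy() {
        case $1 in
            raid1 | raid5f) return 0 ;;  # levels assumed to tolerate a lost member
            *)              return 1 ;;
        esac
    }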
00:22:11.654   17:06:04	-- bdev/bdev_raid.sh@436 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1
00:22:11.913  [2024-11-19 17:06:04.567703] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt1
00:22:11.913   17:06:04	-- bdev/bdev_raid.sh@439 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2
00:22:11.913   17:06:04	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:22:11.913   17:06:04	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:22:11.913   17:06:04	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f
00:22:11.913   17:06:04	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:22:11.913   17:06:04	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2
00:22:11.913   17:06:04	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:22:11.913   17:06:04	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:22:11.913   17:06:04	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:22:11.913   17:06:04	-- bdev/bdev_raid.sh@125 -- # local tmp
00:22:11.913    17:06:04	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:22:11.913    17:06:04	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:22:12.172   17:06:04	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:22:12.172    "name": "raid_bdev1",
00:22:12.172    "uuid": "ef123c05-41d4-4957-9319-80b79a1910a2",
00:22:12.172    "strip_size_kb": 64,
00:22:12.172    "state": "online",
00:22:12.172    "raid_level": "raid5f",
00:22:12.172    "superblock": true,
00:22:12.172    "num_base_bdevs": 3,
00:22:12.172    "num_base_bdevs_discovered": 2,
00:22:12.172    "num_base_bdevs_operational": 2,
00:22:12.172    "base_bdevs_list": [
00:22:12.172      {
00:22:12.172        "name": null,
00:22:12.172        "uuid": "00000000-0000-0000-0000-000000000000",
00:22:12.172        "is_configured": false,
00:22:12.172        "data_offset": 2048,
00:22:12.172        "data_size": 63488
00:22:12.172      },
00:22:12.172      {
00:22:12.172        "name": "pt2",
00:22:12.172        "uuid": "9eac1f2c-a04c-5db2-980e-de3d578136b7",
00:22:12.172        "is_configured": true,
00:22:12.172        "data_offset": 2048,
00:22:12.172        "data_size": 63488
00:22:12.172      },
00:22:12.172      {
00:22:12.172        "name": "pt3",
00:22:12.172        "uuid": "29a6a5ea-9e5c-5ccd-bf40-4b14c476ac01",
00:22:12.172        "is_configured": true,
00:22:12.172        "data_offset": 2048,
00:22:12.172        "data_size": 63488
00:22:12.172      }
00:22:12.172    ]
00:22:12.172  }'
00:22:12.172   17:06:04	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:22:12.172   17:06:04	-- common/autotest_common.sh@10 -- # set +x
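With pt1 removed, the superblock keeps its slot reserved: num_base_bdevs stays 3 while discovered and operational drop to 2, the array remains online (one missing member is within raid5f's redundancy), and the vacated slot is reported with name null and the all-zero UUID. The configured-member count can be read straight out of the captured JSON, for example:

    jq -r '[.base_bdevs_list[] | select(.is_configured)] | length' <<< "$raid_bdev_info"   # prints 2 here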
00:22:12.754   17:06:05	-- bdev/bdev_raid.sh@442 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1
00:22:12.754  [2024-11-19 17:06:05.531841] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:22:12.754  [2024-11-19 17:06:05.531891] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:22:12.754  [2024-11-19 17:06:05.531971] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:22:12.754  [2024-11-19 17:06:05.532036] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:22:12.754  [2024-11-19 17:06:05.532045] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008a80 name raid_bdev1, state offline
00:22:12.754    17:06:05	-- bdev/bdev_raid.sh@443 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:22:12.754    17:06:05	-- bdev/bdev_raid.sh@443 -- # jq -r '.[]'
00:22:13.012   17:06:05	-- bdev/bdev_raid.sh@443 -- # raid_bdev=
00:22:13.012   17:06:05	-- bdev/bdev_raid.sh@444 -- # '[' -n '' ']'
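bdev_raid_delete tears the array down through the online -> offline -> destruct sequence in the DEBUG lines above, so the follow-up bdev_raid_get_bdevs all returns an empty list; raid_bdev is therefore empty and the '[' -n '' ']' guard falls through, confirming that no raid bdev survived the delete.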
00:22:13.012   17:06:05	-- bdev/bdev_raid.sh@449 -- # (( i = 1 ))
00:22:13.012   17:06:05	-- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs ))
00:22:13.012   17:06:05	-- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2
00:22:13.270   17:06:05	-- bdev/bdev_raid.sh@449 -- # (( i++ ))
00:22:13.270   17:06:06	-- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs ))
00:22:13.270   17:06:06	-- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3
00:22:13.528   17:06:06	-- bdev/bdev_raid.sh@449 -- # (( i++ ))
00:22:13.528   17:06:06	-- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs ))
00:22:13.528   17:06:06	-- bdev/bdev_raid.sh@454 -- # (( i = 1 ))
00:22:13.528   17:06:06	-- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 ))
00:22:13.528   17:06:06	-- bdev/bdev_raid.sh@455 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:22:13.788  [2024-11-19 17:06:06.388056] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:22:13.788  [2024-11-19 17:06:06.388189] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:22:13.788  [2024-11-19 17:06:06.388239] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680
00:22:13.788  [2024-11-19 17:06:06.388265] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:22:13.788  [2024-11-19 17:06:06.391393] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:22:13.788  [2024-11-19 17:06:06.391465] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:22:13.788  [2024-11-19 17:06:06.391601] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2
00:22:13.788  [2024-11-19 17:06:06.391643] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:22:13.788  pt2
00:22:13.788   17:06:06	-- bdev/bdev_raid.sh@458 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2
00:22:13.788   17:06:06	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:22:13.788   17:06:06	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:22:13.788   17:06:06	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f
00:22:13.788   17:06:06	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:22:13.788   17:06:06	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2
00:22:13.788   17:06:06	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:22:13.788   17:06:06	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:22:13.788   17:06:06	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:22:13.788   17:06:06	-- bdev/bdev_raid.sh@125 -- # local tmp
00:22:13.788    17:06:06	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:22:13.788    17:06:06	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:22:14.046   17:06:06	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:22:14.046    "name": "raid_bdev1",
00:22:14.046    "uuid": "ef123c05-41d4-4957-9319-80b79a1910a2",
00:22:14.046    "strip_size_kb": 64,
00:22:14.046    "state": "configuring",
00:22:14.046    "raid_level": "raid5f",
00:22:14.046    "superblock": true,
00:22:14.046    "num_base_bdevs": 3,
00:22:14.046    "num_base_bdevs_discovered": 1,
00:22:14.046    "num_base_bdevs_operational": 2,
00:22:14.046    "base_bdevs_list": [
00:22:14.046      {
00:22:14.046        "name": null,
00:22:14.046        "uuid": "00000000-0000-0000-0000-000000000000",
00:22:14.046        "is_configured": false,
00:22:14.046        "data_offset": 2048,
00:22:14.046        "data_size": 63488
00:22:14.046      },
00:22:14.046      {
00:22:14.046        "name": "pt2",
00:22:14.046        "uuid": "9eac1f2c-a04c-5db2-980e-de3d578136b7",
00:22:14.046        "is_configured": true,
00:22:14.046        "data_offset": 2048,
00:22:14.047        "data_size": 63488
00:22:14.047      },
00:22:14.047      {
00:22:14.047        "name": null,
00:22:14.047        "uuid": "29a6a5ea-9e5c-5ccd-bf40-4b14c476ac01",
00:22:14.047        "is_configured": false,
00:22:14.047        "data_offset": 2048,
00:22:14.047        "data_size": 63488
00:22:14.047      }
00:22:14.047    ]
00:22:14.047  }'
00:22:14.047   17:06:06	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:22:14.047   17:06:06	-- common/autotest_common.sh@10 -- # set +x
00:22:14.615   17:06:07	-- bdev/bdev_raid.sh@454 -- # (( i++ ))
00:22:14.615   17:06:07	-- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 ))
00:22:14.615   17:06:07	-- bdev/bdev_raid.sh@462 -- # i=2
00:22:14.615   17:06:07	-- bdev/bdev_raid.sh@463 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:22:14.874  [2024-11-19 17:06:07.480305] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:22:14.874  [2024-11-19 17:06:07.480441] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:22:14.874  [2024-11-19 17:06:07.480492] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80
00:22:14.874  [2024-11-19 17:06:07.480519] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:22:14.874  [2024-11-19 17:06:07.481079] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:22:14.874  [2024-11-19 17:06:07.481119] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:22:14.874  [2024-11-19 17:06:07.481242] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3
00:22:14.874  [2024-11-19 17:06:07.481271] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:22:14.874  [2024-11-19 17:06:07.481393] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009c80
00:22:14.874  [2024-11-19 17:06:07.481403] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512
00:22:14.874  [2024-11-19 17:06:07.481478] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002940
00:22:14.874  [2024-11-19 17:06:07.482300] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009c80
00:22:14.874  [2024-11-19 17:06:07.482326] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009c80
00:22:14.874  [2024-11-19 17:06:07.482595] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:22:14.874  pt3
00:22:14.874   17:06:07	-- bdev/bdev_raid.sh@466 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2
00:22:14.874   17:06:07	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:22:14.874   17:06:07	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:22:14.874   17:06:07	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f
00:22:14.874   17:06:07	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:22:14.874   17:06:07	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2
00:22:14.874   17:06:07	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:22:14.874   17:06:07	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:22:14.874   17:06:07	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:22:14.874   17:06:07	-- bdev/bdev_raid.sh@125 -- # local tmp
00:22:14.874    17:06:07	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:22:14.874    17:06:07	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:22:15.133   17:06:07	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:22:15.133    "name": "raid_bdev1",
00:22:15.133    "uuid": "ef123c05-41d4-4957-9319-80b79a1910a2",
00:22:15.133    "strip_size_kb": 64,
00:22:15.133    "state": "online",
00:22:15.133    "raid_level": "raid5f",
00:22:15.133    "superblock": true,
00:22:15.133    "num_base_bdevs": 3,
00:22:15.133    "num_base_bdevs_discovered": 2,
00:22:15.133    "num_base_bdevs_operational": 2,
00:22:15.133    "base_bdevs_list": [
00:22:15.133      {
00:22:15.133        "name": null,
00:22:15.133        "uuid": "00000000-0000-0000-0000-000000000000",
00:22:15.133        "is_configured": false,
00:22:15.133        "data_offset": 2048,
00:22:15.133        "data_size": 63488
00:22:15.133      },
00:22:15.133      {
00:22:15.133        "name": "pt2",
00:22:15.133        "uuid": "9eac1f2c-a04c-5db2-980e-de3d578136b7",
00:22:15.133        "is_configured": true,
00:22:15.133        "data_offset": 2048,
00:22:15.133        "data_size": 63488
00:22:15.133      },
00:22:15.133      {
00:22:15.133        "name": "pt3",
00:22:15.133        "uuid": "29a6a5ea-9e5c-5ccd-bf40-4b14c476ac01",
00:22:15.133        "is_configured": true,
00:22:15.133        "data_offset": 2048,
00:22:15.133        "data_size": 63488
00:22:15.133      }
00:22:15.133    ]
00:22:15.133  }'
00:22:15.133   17:06:07	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:22:15.133   17:06:07	-- common/autotest_common.sh@10 -- # set +x
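Recreating only pt2 and pt3 is enough to bring raid_bdev1 back: each bdev_passthru_create triggers the examine path, raid_bdev_examine_load_sb_cb finds the superblock written earlier, and raid_bdev_configure_base_bdev claims the device back into its original slot. With 2 of the 3 recorded members present, the raid5f array assembles and reports online again, pt1's slot still null, exactly as the verification above expects.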
00:22:15.699   17:06:08	-- bdev/bdev_raid.sh@468 -- # '[' 3 -gt 2 ']'
00:22:15.699   17:06:08	-- bdev/bdev_raid.sh@470 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1
00:22:15.958  [2024-11-19 17:06:08.713114] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:22:15.958  [2024-11-19 17:06:08.713185] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:22:15.958  [2024-11-19 17:06:08.713294] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:22:15.958  [2024-11-19 17:06:08.713387] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:22:15.958  [2024-11-19 17:06:08.713401] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009c80 name raid_bdev1, state offline
00:22:15.958    17:06:08	-- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:22:15.958    17:06:08	-- bdev/bdev_raid.sh@471 -- # jq -r '.[]'
00:22:16.217   17:06:09	-- bdev/bdev_raid.sh@471 -- # raid_bdev=
00:22:16.217   17:06:09	-- bdev/bdev_raid.sh@472 -- # '[' -n '' ']'
00:22:16.217   17:06:09	-- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:22:16.475  [2024-11-19 17:06:09.187276] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:22:16.476  [2024-11-19 17:06:09.187423] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:22:16.476  [2024-11-19 17:06:09.187477] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280
00:22:16.476  [2024-11-19 17:06:09.187505] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:22:16.476  [2024-11-19 17:06:09.190736] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:22:16.476  [2024-11-19 17:06:09.190810] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:22:16.476  [2024-11-19 17:06:09.190958] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1
00:22:16.476  [2024-11-19 17:06:09.191009] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:22:16.476  pt1
00:22:16.476   17:06:09	-- bdev/bdev_raid.sh@481 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3
00:22:16.476   17:06:09	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:22:16.476   17:06:09	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:22:16.476   17:06:09	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f
00:22:16.476   17:06:09	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:22:16.476   17:06:09	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:22:16.476   17:06:09	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:22:16.476   17:06:09	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:22:16.476   17:06:09	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:22:16.476   17:06:09	-- bdev/bdev_raid.sh@125 -- # local tmp
00:22:16.476    17:06:09	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:22:16.476    17:06:09	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:22:16.734   17:06:09	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:22:16.734    "name": "raid_bdev1",
00:22:16.734    "uuid": "ef123c05-41d4-4957-9319-80b79a1910a2",
00:22:16.734    "strip_size_kb": 64,
00:22:16.734    "state": "configuring",
00:22:16.734    "raid_level": "raid5f",
00:22:16.734    "superblock": true,
00:22:16.734    "num_base_bdevs": 3,
00:22:16.734    "num_base_bdevs_discovered": 1,
00:22:16.734    "num_base_bdevs_operational": 3,
00:22:16.734    "base_bdevs_list": [
00:22:16.734      {
00:22:16.734        "name": "pt1",
00:22:16.734        "uuid": "9486484d-a7c8-533c-bc80-838be38939db",
00:22:16.734        "is_configured": true,
00:22:16.734        "data_offset": 2048,
00:22:16.734        "data_size": 63488
00:22:16.734      },
00:22:16.734      {
00:22:16.734        "name": null,
00:22:16.734        "uuid": "9eac1f2c-a04c-5db2-980e-de3d578136b7",
00:22:16.734        "is_configured": false,
00:22:16.734        "data_offset": 2048,
00:22:16.734        "data_size": 63488
00:22:16.734      },
00:22:16.734      {
00:22:16.734        "name": null,
00:22:16.734        "uuid": "29a6a5ea-9e5c-5ccd-bf40-4b14c476ac01",
00:22:16.734        "is_configured": false,
00:22:16.734        "data_offset": 2048,
00:22:16.734        "data_size": 63488
00:22:16.734      }
00:22:16.734    ]
00:22:16.734  }'
00:22:16.734   17:06:09	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:22:16.734   17:06:09	-- common/autotest_common.sh@10 -- # set +x
00:22:17.302   17:06:10	-- bdev/bdev_raid.sh@484 -- # (( i = 1 ))
00:22:17.303   17:06:10	-- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs ))
00:22:17.303   17:06:10	-- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2
00:22:17.561   17:06:10	-- bdev/bdev_raid.sh@484 -- # (( i++ ))
00:22:17.561   17:06:10	-- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs ))
00:22:17.561   17:06:10	-- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3
00:22:17.820   17:06:10	-- bdev/bdev_raid.sh@484 -- # (( i++ ))
00:22:17.820   17:06:10	-- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs ))
00:22:17.820   17:06:10	-- bdev/bdev_raid.sh@489 -- # i=2
00:22:17.820   17:06:10	-- bdev/bdev_raid.sh@490 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:22:17.820  [2024-11-19 17:06:10.659379] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:22:17.820  [2024-11-19 17:06:10.659525] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:22:17.820  [2024-11-19 17:06:10.659586] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80
00:22:17.820  [2024-11-19 17:06:10.659634] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:22:17.820  [2024-11-19 17:06:10.660336] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:22:17.820  [2024-11-19 17:06:10.660421] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:22:17.820  [2024-11-19 17:06:10.660600] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3
00:22:17.820  [2024-11-19 17:06:10.660624] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt3 (4) greater than existing raid bdev raid_bdev1 (2)
00:22:17.820  [2024-11-19 17:06:10.660641] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:22:17.820  [2024-11-19 17:06:10.660691] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a880 name raid_bdev1, state configuring
00:22:17.820  [2024-11-19 17:06:10.660778] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:22:17.820  pt3
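This step exercises the superblock sequence-number rule: the freshly recreated pt3 carries seq_number 4, newer than the 2 held by the half-assembled raid_bdev1, so the stale configuring array is deleted and a new raid_bdev1 is started around pt3's view of the metadata. pt1 and pt2 are consequently absent from that view (both slots null in the JSON below), leaving the array configuring with 1 of 2 operational members discovered.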
00:22:18.079   17:06:10	-- bdev/bdev_raid.sh@494 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2
00:22:18.079   17:06:10	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:22:18.079   17:06:10	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:22:18.079   17:06:10	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f
00:22:18.079   17:06:10	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:22:18.079   17:06:10	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2
00:22:18.079   17:06:10	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:22:18.079   17:06:10	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:22:18.079   17:06:10	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:22:18.079   17:06:10	-- bdev/bdev_raid.sh@125 -- # local tmp
00:22:18.079    17:06:10	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:22:18.079    17:06:10	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:22:18.337   17:06:10	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:22:18.337    "name": "raid_bdev1",
00:22:18.337    "uuid": "ef123c05-41d4-4957-9319-80b79a1910a2",
00:22:18.337    "strip_size_kb": 64,
00:22:18.337    "state": "configuring",
00:22:18.337    "raid_level": "raid5f",
00:22:18.337    "superblock": true,
00:22:18.337    "num_base_bdevs": 3,
00:22:18.337    "num_base_bdevs_discovered": 1,
00:22:18.337    "num_base_bdevs_operational": 2,
00:22:18.337    "base_bdevs_list": [
00:22:18.337      {
00:22:18.337        "name": null,
00:22:18.337        "uuid": "00000000-0000-0000-0000-000000000000",
00:22:18.337        "is_configured": false,
00:22:18.337        "data_offset": 2048,
00:22:18.337        "data_size": 63488
00:22:18.337      },
00:22:18.337      {
00:22:18.337        "name": null,
00:22:18.337        "uuid": "9eac1f2c-a04c-5db2-980e-de3d578136b7",
00:22:18.337        "is_configured": false,
00:22:18.337        "data_offset": 2048,
00:22:18.337        "data_size": 63488
00:22:18.338      },
00:22:18.338      {
00:22:18.338        "name": "pt3",
00:22:18.338        "uuid": "29a6a5ea-9e5c-5ccd-bf40-4b14c476ac01",
00:22:18.338        "is_configured": true,
00:22:18.338        "data_offset": 2048,
00:22:18.338        "data_size": 63488
00:22:18.338      }
00:22:18.338    ]
00:22:18.338  }'
00:22:18.338   17:06:10	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:22:18.338   17:06:10	-- common/autotest_common.sh@10 -- # set +x
00:22:18.924   17:06:11	-- bdev/bdev_raid.sh@497 -- # (( i = 1 ))
00:22:18.924   17:06:11	-- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 ))
00:22:18.924   17:06:11	-- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:22:19.183  [2024-11-19 17:06:11.795617] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:22:19.183  [2024-11-19 17:06:11.795738] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:22:19.183  [2024-11-19 17:06:11.795793] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180
00:22:19.183  [2024-11-19 17:06:11.795836] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:22:19.183  [2024-11-19 17:06:11.796429] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:22:19.183  [2024-11-19 17:06:11.796502] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:22:19.183  [2024-11-19 17:06:11.796631] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2
00:22:19.183  [2024-11-19 17:06:11.796668] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:22:19.183  [2024-11-19 17:06:11.796831] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000ae80
00:22:19.183  [2024-11-19 17:06:11.796852] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512
00:22:19.183  [2024-11-19 17:06:11.796960] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002c80
00:22:19.183  [2024-11-19 17:06:11.797922] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000ae80
00:22:19.183  [2024-11-19 17:06:11.797954] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000ae80
00:22:19.183  [2024-11-19 17:06:11.798195] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:22:19.183  pt2
00:22:19.183   17:06:11	-- bdev/bdev_raid.sh@497 -- # (( i++ ))
00:22:19.183   17:06:11	-- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 ))
00:22:19.183   17:06:11	-- bdev/bdev_raid.sh@502 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2
00:22:19.183   17:06:11	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:22:19.183   17:06:11	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:22:19.183   17:06:11	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f
00:22:19.183   17:06:11	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:22:19.183   17:06:11	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2
00:22:19.183   17:06:11	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:22:19.183   17:06:11	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:22:19.183   17:06:11	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:22:19.183   17:06:11	-- bdev/bdev_raid.sh@125 -- # local tmp
00:22:19.183    17:06:11	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:22:19.183    17:06:11	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:22:19.183   17:06:12	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:22:19.183    "name": "raid_bdev1",
00:22:19.183    "uuid": "ef123c05-41d4-4957-9319-80b79a1910a2",
00:22:19.183    "strip_size_kb": 64,
00:22:19.183    "state": "online",
00:22:19.183    "raid_level": "raid5f",
00:22:19.183    "superblock": true,
00:22:19.183    "num_base_bdevs": 3,
00:22:19.183    "num_base_bdevs_discovered": 2,
00:22:19.183    "num_base_bdevs_operational": 2,
00:22:19.183    "base_bdevs_list": [
00:22:19.183      {
00:22:19.183        "name": null,
00:22:19.183        "uuid": "00000000-0000-0000-0000-000000000000",
00:22:19.183        "is_configured": false,
00:22:19.183        "data_offset": 2048,
00:22:19.183        "data_size": 63488
00:22:19.183      },
00:22:19.183      {
00:22:19.183        "name": "pt2",
00:22:19.183        "uuid": "9eac1f2c-a04c-5db2-980e-de3d578136b7",
00:22:19.183        "is_configured": true,
00:22:19.183        "data_offset": 2048,
00:22:19.183        "data_size": 63488
00:22:19.183      },
00:22:19.183      {
00:22:19.183        "name": "pt3",
00:22:19.183        "uuid": "29a6a5ea-9e5c-5ccd-bf40-4b14c476ac01",
00:22:19.183        "is_configured": true,
00:22:19.183        "data_offset": 2048,
00:22:19.183        "data_size": 63488
00:22:19.183      }
00:22:19.183    ]
00:22:19.183  }'
00:22:19.183   17:06:12	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:22:19.183   17:06:12	-- common/autotest_common.sh@10 -- # set +x
00:22:20.118    17:06:12	-- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1
00:22:20.118    17:06:12	-- bdev/bdev_raid.sh@506 -- # jq -r '.[] | .uuid'
00:22:20.118  [2024-11-19 17:06:12.880030] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:22:20.118   17:06:12	-- bdev/bdev_raid.sh@506 -- # '[' ef123c05-41d4-4957-9319-80b79a1910a2 '!=' ef123c05-41d4-4957-9319-80b79a1910a2 ']'
00:22:20.118   17:06:12	-- bdev/bdev_raid.sh@511 -- # killprocess 138028
00:22:20.118   17:06:12	-- common/autotest_common.sh@936 -- # '[' -z 138028 ']'
00:22:20.118   17:06:12	-- common/autotest_common.sh@940 -- # kill -0 138028
00:22:20.118    17:06:12	-- common/autotest_common.sh@941 -- # uname
00:22:20.118   17:06:12	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:22:20.118    17:06:12	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 138028
00:22:20.118   17:06:12	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:22:20.118   17:06:12	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:22:20.118   17:06:12	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 138028'
00:22:20.118  killing process with pid 138028
00:22:20.118   17:06:12	-- common/autotest_common.sh@955 -- # kill 138028
00:22:20.118  [2024-11-19 17:06:12.939579] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:22:20.118  [2024-11-19 17:06:12.939675] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:22:20.118  [2024-11-19 17:06:12.939751] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:22:20.118  [2024-11-19 17:06:12.939763] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ae80 name raid_bdev1, state offline
00:22:20.118   17:06:12	-- common/autotest_common.sh@960 -- # wait 138028
00:22:20.377  [2024-11-19 17:06:12.978864] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:22:20.377   17:06:13	-- bdev/bdev_raid.sh@513 -- # return 0
00:22:20.377  
00:22:20.377  real	0m19.048s
00:22:20.377  user	0m35.028s
00:22:20.377  sys	0m2.996s
00:22:20.377   17:06:13	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:22:20.377   17:06:13	-- common/autotest_common.sh@10 -- # set +x
00:22:20.377  ************************************
00:22:20.377  END TEST raid5f_superblock_test
00:22:20.377  ************************************
00:22:20.636   17:06:13	-- bdev/bdev_raid.sh@747 -- # '[' true = true ']'
00:22:20.636   17:06:13	-- bdev/bdev_raid.sh@748 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 3 false false
00:22:20.636   17:06:13	-- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']'
00:22:20.636   17:06:13	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:22:20.636   17:06:13	-- common/autotest_common.sh@10 -- # set +x
00:22:20.636  ************************************
00:22:20.636  START TEST raid5f_rebuild_test
00:22:20.636  ************************************
00:22:20.636   17:06:13	-- common/autotest_common.sh@1114 -- # raid_rebuild_test raid5f 3 false false
00:22:20.636   17:06:13	-- bdev/bdev_raid.sh@517 -- # local raid_level=raid5f
00:22:20.636   17:06:13	-- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=3
00:22:20.636   17:06:13	-- bdev/bdev_raid.sh@519 -- # local superblock=false
00:22:20.636   17:06:13	-- bdev/bdev_raid.sh@520 -- # local background_io=false
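The four positional arguments of raid_rebuild_test, captured into locals at @517-@520, select this variant: raid5f level, 3 base bdevs, no on-disk superblock, and no background I/O during the rebuild. Because superblock=false, the members cannot be auto-assembled by the examine path as in the superblock test above; the spare is instead attached explicitly with bdev_raid_add_base_bdev later in the run.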
00:22:20.636    17:06:13	-- bdev/bdev_raid.sh@521 -- # (( i = 1 ))
00:22:20.636    17:06:13	-- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs ))
00:22:20.636    17:06:13	-- bdev/bdev_raid.sh@521 -- # echo BaseBdev1
00:22:20.636    17:06:13	-- bdev/bdev_raid.sh@521 -- # (( i++ ))
00:22:20.636    17:06:13	-- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs ))
00:22:20.636    17:06:13	-- bdev/bdev_raid.sh@521 -- # echo BaseBdev2
00:22:20.636    17:06:13	-- bdev/bdev_raid.sh@521 -- # (( i++ ))
00:22:20.636    17:06:13	-- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs ))
00:22:20.636    17:06:13	-- bdev/bdev_raid.sh@521 -- # echo BaseBdev3
00:22:20.636    17:06:13	-- bdev/bdev_raid.sh@521 -- # (( i++ ))
00:22:20.636    17:06:13	-- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs ))
00:22:20.636   17:06:13	-- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3')
00:22:20.636   17:06:13	-- bdev/bdev_raid.sh@521 -- # local base_bdevs
00:22:20.636   17:06:13	-- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1
00:22:20.636   17:06:13	-- bdev/bdev_raid.sh@523 -- # local strip_size
00:22:20.636   17:06:13	-- bdev/bdev_raid.sh@524 -- # local create_arg
00:22:20.636   17:06:13	-- bdev/bdev_raid.sh@525 -- # local raid_bdev_size
00:22:20.636   17:06:13	-- bdev/bdev_raid.sh@526 -- # local data_offset
00:22:20.636   17:06:13	-- bdev/bdev_raid.sh@528 -- # '[' raid5f '!=' raid1 ']'
00:22:20.636   17:06:13	-- bdev/bdev_raid.sh@529 -- # '[' false = true ']'
00:22:20.636   17:06:13	-- bdev/bdev_raid.sh@533 -- # strip_size=64
00:22:20.636   17:06:13	-- bdev/bdev_raid.sh@534 -- # create_arg+=' -z 64'
00:22:20.636   17:06:13	-- bdev/bdev_raid.sh@539 -- # '[' false = true ']'
00:22:20.636   17:06:13	-- bdev/bdev_raid.sh@544 -- # raid_pid=138626
00:22:20.636   17:06:13	-- bdev/bdev_raid.sh@545 -- # waitforlisten 138626 /var/tmp/spdk-raid.sock
00:22:20.636   17:06:13	-- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid
00:22:20.636   17:06:13	-- common/autotest_common.sh@829 -- # '[' -z 138626 ']'
00:22:20.636   17:06:13	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock
00:22:20.636   17:06:13	-- common/autotest_common.sh@834 -- # local max_retries=100
00:22:20.636  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...
00:22:20.636   17:06:13	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...'
00:22:20.636   17:06:13	-- common/autotest_common.sh@838 -- # xtrace_disable
00:22:20.636   17:06:13	-- common/autotest_common.sh@10 -- # set +x
00:22:20.636  [2024-11-19 17:06:13.353795] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:22:20.636  [2024-11-19 17:06:13.354495] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid138626 ]
00:22:20.636  I/O size of 3145728 is greater than zero copy threshold (65536).
00:22:20.636  Zero copy mechanism will not be used.
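The zero-copy notice is a direct consequence of the bdevperf arguments: -o 3M configures 3145728-byte I/Os (3 x 1024 x 1024), far above the 65536-byte (64 KiB) zero-copy threshold, so bdevperf copies data through its own buffers instead; -q 2 keeps at most two such 3 MiB I/Os in flight.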
00:22:20.894  [2024-11-19 17:06:13.500715] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:22:20.894  [2024-11-19 17:06:13.554015] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:22:20.894  [2024-11-19 17:06:13.598398] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:22:21.460   17:06:14	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:22:21.460   17:06:14	-- common/autotest_common.sh@862 -- # return 0
00:22:21.460   17:06:14	-- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}"
00:22:21.460   17:06:14	-- bdev/bdev_raid.sh@549 -- # '[' false = true ']'
00:22:21.460   17:06:14	-- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1
00:22:21.719  BaseBdev1
00:22:21.719   17:06:14	-- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}"
00:22:21.719   17:06:14	-- bdev/bdev_raid.sh@549 -- # '[' false = true ']'
00:22:21.719   17:06:14	-- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2
00:22:21.976  BaseBdev2
00:22:21.976   17:06:14	-- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}"
00:22:21.976   17:06:14	-- bdev/bdev_raid.sh@549 -- # '[' false = true ']'
00:22:21.976   17:06:14	-- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3
00:22:22.234  BaseBdev3
00:22:22.234   17:06:14	-- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc
00:22:22.493  spare_malloc
00:22:22.493   17:06:15	-- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000
00:22:22.751  spare_delay
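The spare is built as passthru on delay on malloc (the bdev_passthru_create just below). The bdev_delay_create arguments are average read (-r 0), p99 read (-t 0), average write (-w 100000) and p99 write (-n 100000) latency, in microseconds, so reads pass through undelayed while every write to the spare is held for roughly 100 ms. That artificial write latency is presumably what keeps the rebuild further down slow enough to be observed mid-progress.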
00:22:22.751   17:06:15	-- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare
00:22:23.009  [2024-11-19 17:06:15.638838] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay
00:22:23.009  [2024-11-19 17:06:15.639199] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:22:23.009  [2024-11-19 17:06:15.639335] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
00:22:23.009  [2024-11-19 17:06:15.639469] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:22:23.009  [2024-11-19 17:06:15.642335] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:22:23.009  [2024-11-19 17:06:15.642547] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare
00:22:23.009  spare
00:22:23.009   17:06:15	-- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1
00:22:23.267  [2024-11-19 17:06:15.923069] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:22:23.267  [2024-11-19 17:06:15.925569] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:22:23.267  [2024-11-19 17:06:15.925776] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:22:23.267  [2024-11-19 17:06:15.925914] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007880
00:22:23.267  [2024-11-19 17:06:15.926021] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512
00:22:23.267  [2024-11-19 17:06:15.926242] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000021f0
00:22:23.267  [2024-11-19 17:06:15.927168] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007880
00:22:23.267  [2024-11-19 17:06:15.927297] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000007880
00:22:23.267  [2024-11-19 17:06:15.927612] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:22:23.267   17:06:15	-- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3
00:22:23.267   17:06:15	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:22:23.267   17:06:15	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:22:23.267   17:06:15	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f
00:22:23.267   17:06:15	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:22:23.267   17:06:15	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:22:23.267   17:06:15	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:22:23.267   17:06:15	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:22:23.267   17:06:15	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:22:23.267   17:06:15	-- bdev/bdev_raid.sh@125 -- # local tmp
00:22:23.267    17:06:15	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:22:23.267    17:06:15	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:22:23.526   17:06:16	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:22:23.526    "name": "raid_bdev1",
00:22:23.526    "uuid": "42f429a5-2fd3-4b61-b895-564d1d86cf42",
00:22:23.526    "strip_size_kb": 64,
00:22:23.526    "state": "online",
00:22:23.526    "raid_level": "raid5f",
00:22:23.526    "superblock": false,
00:22:23.526    "num_base_bdevs": 3,
00:22:23.526    "num_base_bdevs_discovered": 3,
00:22:23.526    "num_base_bdevs_operational": 3,
00:22:23.526    "base_bdevs_list": [
00:22:23.526      {
00:22:23.526        "name": "BaseBdev1",
00:22:23.526        "uuid": "07b0d886-f2bd-47ae-b87b-a0474cedecae",
00:22:23.526        "is_configured": true,
00:22:23.526        "data_offset": 0,
00:22:23.526        "data_size": 65536
00:22:23.526      },
00:22:23.526      {
00:22:23.526        "name": "BaseBdev2",
00:22:23.526        "uuid": "ca25a12f-48c6-4b28-ac0e-8f9a79791a0a",
00:22:23.526        "is_configured": true,
00:22:23.526        "data_offset": 0,
00:22:23.526        "data_size": 65536
00:22:23.526      },
00:22:23.526      {
00:22:23.526        "name": "BaseBdev3",
00:22:23.526        "uuid": "32886d87-22f7-4b5d-b984-fc67ee414182",
00:22:23.526        "is_configured": true,
00:22:23.526        "data_offset": 0,
00:22:23.526        "data_size": 65536
00:22:23.526      }
00:22:23.526    ]
00:22:23.526  }'
00:22:23.526   17:06:16	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:22:23.526   17:06:16	-- common/autotest_common.sh@10 -- # set +x
00:22:24.093    17:06:16	-- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks'
00:22:24.093    17:06:16	-- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1
00:22:24.093  [2024-11-19 17:06:16.939981] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:22:24.351   17:06:16	-- bdev/bdev_raid.sh@567 -- # raid_bdev_size=131072
00:22:24.351    17:06:16	-- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:22:24.351    17:06:16	-- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset'
00:22:24.351   17:06:17	-- bdev/bdev_raid.sh@570 -- # data_offset=0
00:22:24.351   17:06:17	-- bdev/bdev_raid.sh@572 -- # '[' false = true ']'
00:22:24.351   17:06:17	-- bdev/bdev_raid.sh@576 -- # local write_unit_size
00:22:24.351   17:06:17	-- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0
00:22:24.351   17:06:17	-- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock
00:22:24.351   17:06:17	-- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1')
00:22:24.351   17:06:17	-- bdev/nbd_common.sh@10 -- # local bdev_list
00:22:24.351   17:06:17	-- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0')
00:22:24.351   17:06:17	-- bdev/nbd_common.sh@11 -- # local nbd_list
00:22:24.351   17:06:17	-- bdev/nbd_common.sh@12 -- # local i
00:22:24.351   17:06:17	-- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:22:24.351   17:06:17	-- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:22:24.351   17:06:17	-- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0
00:22:24.610  [2024-11-19 17:06:17.323967] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390
00:22:24.610  /dev/nbd0
00:22:24.610    17:06:17	-- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:22:24.610   17:06:17	-- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:22:24.610   17:06:17	-- common/autotest_common.sh@866 -- # local nbd_name=nbd0
00:22:24.610   17:06:17	-- common/autotest_common.sh@867 -- # local i
00:22:24.610   17:06:17	-- common/autotest_common.sh@869 -- # (( i = 1 ))
00:22:24.610   17:06:17	-- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:22:24.610   17:06:17	-- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions
00:22:24.610   17:06:17	-- common/autotest_common.sh@871 -- # break
00:22:24.610   17:06:17	-- common/autotest_common.sh@882 -- # (( i = 1 ))
00:22:24.610   17:06:17	-- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:22:24.610   17:06:17	-- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:22:24.610  1+0 records in
00:22:24.610  1+0 records out
00:22:24.610  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000323909 s, 12.6 MB/s
00:22:24.610    17:06:17	-- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:22:24.610   17:06:17	-- common/autotest_common.sh@884 -- # size=4096
00:22:24.610   17:06:17	-- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:22:24.610   17:06:17	-- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:22:24.610   17:06:17	-- common/autotest_common.sh@887 -- # return 0
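The traced waitfornbd helper first polls /proc/partitions for the nbd0 entry, then proves the device is actually readable with a single O_DIRECT read (dd bs=4096 count=1 iflag=direct), and finally checks that the read produced a non-empty file (stat reports 4096 bytes) before returning 0.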
00:22:24.610   17:06:17	-- bdev/nbd_common.sh@14 -- # (( i++ ))
00:22:24.610   17:06:17	-- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:22:24.610   17:06:17	-- bdev/bdev_raid.sh@580 -- # '[' raid5f = raid5f ']'
00:22:24.610   17:06:17	-- bdev/bdev_raid.sh@581 -- # write_unit_size=256
00:22:24.610   17:06:17	-- bdev/bdev_raid.sh@582 -- # echo 128
00:22:24.610   17:06:17	-- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=512 oflag=direct
00:22:25.234  512+0 records in
00:22:25.234  512+0 records out
00:22:25.234  67108864 bytes (67 MB, 64 MiB) copied, 0.404948 s, 166 MB/s
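The dd numbers line up with the array geometry reported earlier: raid_bdev_size is 131072 blocks of 512 B, i.e. 64 MiB, and the write covers it exactly (512 records x 131072 B = 67108864 B = 64 MiB). The 131072-byte block size is one full raid5f stripe: strip_size 64 KiB x 2 data members (3 disks minus 1 parity) = 128 KiB, which is also the write_unit_size of 256 blocks set at @581 (256 x 512 B = 131072 B).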
00:22:25.234   17:06:17	-- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0
00:22:25.234   17:06:17	-- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock
00:22:25.234   17:06:17	-- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:22:25.234   17:06:17	-- bdev/nbd_common.sh@50 -- # local nbd_list
00:22:25.234   17:06:17	-- bdev/nbd_common.sh@51 -- # local i
00:22:25.234   17:06:17	-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:22:25.234   17:06:17	-- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0
00:22:25.493  [2024-11-19 17:06:18.136790] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:22:25.493    17:06:18	-- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:22:25.493   17:06:18	-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:22:25.493   17:06:18	-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:22:25.493   17:06:18	-- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:22:25.493   17:06:18	-- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:22:25.493   17:06:18	-- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:22:25.493   17:06:18	-- bdev/nbd_common.sh@41 -- # break
00:22:25.493   17:06:18	-- bdev/nbd_common.sh@45 -- # return 0
00:22:25.493   17:06:18	-- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1
00:22:25.751  [2024-11-19 17:06:18.404100] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:22:25.751   17:06:18	-- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2
00:22:25.751   17:06:18	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:22:25.751   17:06:18	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:22:25.751   17:06:18	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f
00:22:25.751   17:06:18	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:22:25.751   17:06:18	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2
00:22:25.751   17:06:18	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:22:25.751   17:06:18	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:22:25.751   17:06:18	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:22:25.751   17:06:18	-- bdev/bdev_raid.sh@125 -- # local tmp
00:22:25.751    17:06:18	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:22:25.751    17:06:18	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:22:26.009   17:06:18	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:22:26.009    "name": "raid_bdev1",
00:22:26.009    "uuid": "42f429a5-2fd3-4b61-b895-564d1d86cf42",
00:22:26.009    "strip_size_kb": 64,
00:22:26.009    "state": "online",
00:22:26.009    "raid_level": "raid5f",
00:22:26.009    "superblock": false,
00:22:26.009    "num_base_bdevs": 3,
00:22:26.009    "num_base_bdevs_discovered": 2,
00:22:26.009    "num_base_bdevs_operational": 2,
00:22:26.009    "base_bdevs_list": [
00:22:26.009      {
00:22:26.009        "name": null,
00:22:26.009        "uuid": "00000000-0000-0000-0000-000000000000",
00:22:26.009        "is_configured": false,
00:22:26.009        "data_offset": 0,
00:22:26.009        "data_size": 65536
00:22:26.009      },
00:22:26.009      {
00:22:26.009        "name": "BaseBdev2",
00:22:26.009        "uuid": "ca25a12f-48c6-4b28-ac0e-8f9a79791a0a",
00:22:26.009        "is_configured": true,
00:22:26.009        "data_offset": 0,
00:22:26.009        "data_size": 65536
00:22:26.009      },
00:22:26.009      {
00:22:26.009        "name": "BaseBdev3",
00:22:26.009        "uuid": "32886d87-22f7-4b5d-b984-fc67ee414182",
00:22:26.009        "is_configured": true,
00:22:26.009        "data_offset": 0,
00:22:26.009        "data_size": 65536
00:22:26.009      }
00:22:26.009    ]
00:22:26.009  }'
00:22:26.009   17:06:18	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:22:26.009   17:06:18	-- common/autotest_common.sh@10 -- # set +x
00:22:26.943   17:06:19	-- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare
00:22:26.943  [2024-11-19 17:06:19.617433] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare
00:22:26.943  [2024-11-19 17:06:19.617523] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:22:26.943  [2024-11-19 17:06:19.621798] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000027990
00:22:26.943  [2024-11-19 17:06:19.624789] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:22:26.943   17:06:19	-- bdev/bdev_raid.sh@598 -- # sleep 1
00:22:27.877   17:06:20	-- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:22:27.877   17:06:20	-- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:22:27.877   17:06:20	-- bdev/bdev_raid.sh@184 -- # local process_type=rebuild
00:22:27.877   17:06:20	-- bdev/bdev_raid.sh@185 -- # local target=spare
00:22:27.877   17:06:20	-- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:22:27.877    17:06:20	-- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:22:27.877    17:06:20	-- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:22:28.136   17:06:20	-- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:22:28.136    "name": "raid_bdev1",
00:22:28.136    "uuid": "42f429a5-2fd3-4b61-b895-564d1d86cf42",
00:22:28.136    "strip_size_kb": 64,
00:22:28.136    "state": "online",
00:22:28.136    "raid_level": "raid5f",
00:22:28.136    "superblock": false,
00:22:28.136    "num_base_bdevs": 3,
00:22:28.136    "num_base_bdevs_discovered": 3,
00:22:28.136    "num_base_bdevs_operational": 3,
00:22:28.136    "process": {
00:22:28.136      "type": "rebuild",
00:22:28.136      "target": "spare",
00:22:28.136      "progress": {
00:22:28.136        "blocks": 24576,
00:22:28.136        "percent": 18
00:22:28.136      }
00:22:28.136    },
00:22:28.136    "base_bdevs_list": [
00:22:28.136      {
00:22:28.136        "name": "spare",
00:22:28.136        "uuid": "08c92c1a-f6cc-5bd8-8fb1-405044bfea60",
00:22:28.136        "is_configured": true,
00:22:28.136        "data_offset": 0,
00:22:28.136        "data_size": 65536
00:22:28.136      },
00:22:28.136      {
00:22:28.136        "name": "BaseBdev2",
00:22:28.136        "uuid": "ca25a12f-48c6-4b28-ac0e-8f9a79791a0a",
00:22:28.136        "is_configured": true,
00:22:28.136        "data_offset": 0,
00:22:28.136        "data_size": 65536
00:22:28.136      },
00:22:28.136      {
00:22:28.136        "name": "BaseBdev3",
00:22:28.136        "uuid": "32886d87-22f7-4b5d-b984-fc67ee414182",
00:22:28.136        "is_configured": true,
00:22:28.136        "data_offset": 0,
00:22:28.136        "data_size": 65536
00:22:28.136      }
00:22:28.136    ]
00:22:28.136  }'
00:22:28.136    17:06:20	-- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:22:28.136   17:06:20	-- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:22:28.136    17:06:20	-- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:22:28.136   17:06:20	-- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]]
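The progress snapshot is consistent with the device size: 24576 rebuilt blocks out of 131072 total gives 24576 / 131072 = 0.1875, truncated to the reported 18 percent; at 512 B per block that is 12 MiB, or 96 full 128 KiB stripes. The two jq guards then assert that an active process of type rebuild is targeting spare before the test proceeds to interrupt it.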
00:22:28.136   17:06:20	-- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare
00:22:28.395  [2024-11-19 17:06:21.220017] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:22:28.395  [2024-11-19 17:06:21.240199] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device
00:22:28.395  [2024-11-19 17:06:21.240332] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
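Removing spare while it is the active rebuild target cancels the process: the WARNING from raid_bdev_process_finish_done reports "No such device" (the target vanished mid-rebuild), and the rebuild context is torn down. The array itself survives the cancellation, dropping back to 2 of 3 members, which is what the online/operational=2 verification below confirms.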
00:22:28.654   17:06:21	-- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2
00:22:28.654   17:06:21	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:22:28.654   17:06:21	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:22:28.654   17:06:21	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f
00:22:28.654   17:06:21	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:22:28.654   17:06:21	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2
00:22:28.654   17:06:21	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:22:28.654   17:06:21	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:22:28.654   17:06:21	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:22:28.654   17:06:21	-- bdev/bdev_raid.sh@125 -- # local tmp
00:22:28.654    17:06:21	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:22:28.654    17:06:21	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:22:28.912   17:06:21	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:22:28.912    "name": "raid_bdev1",
00:22:28.912    "uuid": "42f429a5-2fd3-4b61-b895-564d1d86cf42",
00:22:28.912    "strip_size_kb": 64,
00:22:28.912    "state": "online",
00:22:28.912    "raid_level": "raid5f",
00:22:28.912    "superblock": false,
00:22:28.912    "num_base_bdevs": 3,
00:22:28.912    "num_base_bdevs_discovered": 2,
00:22:28.912    "num_base_bdevs_operational": 2,
00:22:28.912    "base_bdevs_list": [
00:22:28.912      {
00:22:28.912        "name": null,
00:22:28.912        "uuid": "00000000-0000-0000-0000-000000000000",
00:22:28.912        "is_configured": false,
00:22:28.912        "data_offset": 0,
00:22:28.912        "data_size": 65536
00:22:28.912      },
00:22:28.912      {
00:22:28.912        "name": "BaseBdev2",
00:22:28.912        "uuid": "ca25a12f-48c6-4b28-ac0e-8f9a79791a0a",
00:22:28.912        "is_configured": true,
00:22:28.912        "data_offset": 0,
00:22:28.912        "data_size": 65536
00:22:28.912      },
00:22:28.912      {
00:22:28.912        "name": "BaseBdev3",
00:22:28.912        "uuid": "32886d87-22f7-4b5d-b984-fc67ee414182",
00:22:28.912        "is_configured": true,
00:22:28.912        "data_offset": 0,
00:22:28.912        "data_size": 65536
00:22:28.912      }
00:22:28.912    ]
00:22:28.912  }'
00:22:28.912   17:06:21	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:22:28.912   17:06:21	-- common/autotest_common.sh@10 -- # set +x
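verify_raid_bdev_state (bdev_raid.sh@117-129) drives the check just traced: it fetches the array's JSON once and compares state, level, strip size, and member counts against the expected values. A condensed sketch of that pattern, assuming the same rpc.py path and socket (the helper name and body here are illustrative, not the script's actual implementation):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-raid.sock

# args: name expected_state raid_level strip_size_kb num_operational
check_raid_state() {
  local info
  info=$("$rpc" -s "$sock" bdev_raid_get_bdevs all |
         jq -r ".[] | select(.name == \"$1\")")
  [[ $(jq -r .state         <<<"$info") == "$2" ]] &&
  [[ $(jq -r .raid_level    <<<"$info") == "$3" ]] &&
  [[ $(jq -r .strip_size_kb <<<"$info") == "$4" ]] &&
  [[ $(jq -r .num_base_bdevs_operational <<<"$info") == "$5" ]]
}

check_raid_state raid_bdev1 online raid5f 64 2   # degraded: 2 of 3 members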
00:22:29.478   17:06:22	-- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none
00:22:29.478   17:06:22	-- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:22:29.478   17:06:22	-- bdev/bdev_raid.sh@184 -- # local process_type=none
00:22:29.478   17:06:22	-- bdev/bdev_raid.sh@185 -- # local target=none
00:22:29.478   17:06:22	-- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:22:29.478    17:06:22	-- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:22:29.478    17:06:22	-- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:22:29.736   17:06:22	-- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:22:29.736    "name": "raid_bdev1",
00:22:29.736    "uuid": "42f429a5-2fd3-4b61-b895-564d1d86cf42",
00:22:29.736    "strip_size_kb": 64,
00:22:29.736    "state": "online",
00:22:29.736    "raid_level": "raid5f",
00:22:29.736    "superblock": false,
00:22:29.736    "num_base_bdevs": 3,
00:22:29.736    "num_base_bdevs_discovered": 2,
00:22:29.736    "num_base_bdevs_operational": 2,
00:22:29.736    "base_bdevs_list": [
00:22:29.736      {
00:22:29.736        "name": null,
00:22:29.736        "uuid": "00000000-0000-0000-0000-000000000000",
00:22:29.736        "is_configured": false,
00:22:29.736        "data_offset": 0,
00:22:29.736        "data_size": 65536
00:22:29.736      },
00:22:29.736      {
00:22:29.736        "name": "BaseBdev2",
00:22:29.736        "uuid": "ca25a12f-48c6-4b28-ac0e-8f9a79791a0a",
00:22:29.736        "is_configured": true,
00:22:29.736        "data_offset": 0,
00:22:29.736        "data_size": 65536
00:22:29.736      },
00:22:29.736      {
00:22:29.736        "name": "BaseBdev3",
00:22:29.736        "uuid": "32886d87-22f7-4b5d-b984-fc67ee414182",
00:22:29.736        "is_configured": true,
00:22:29.736        "data_offset": 0,
00:22:29.736        "data_size": 65536
00:22:29.736      }
00:22:29.736    ]
00:22:29.736  }'
00:22:29.736    17:06:22	-- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:22:29.736   17:06:22	-- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]]
00:22:29.736    17:06:22	-- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:22:29.736   17:06:22	-- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]]
00:22:29.736   17:06:22	-- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare
00:22:29.994  [2024-11-19 17:06:22.727288] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare
00:22:29.994  [2024-11-19 17:06:22.727344] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:22:29.994  [2024-11-19 17:06:22.731150] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000027b30
00:22:29.994  [2024-11-19 17:06:22.733536] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:22:29.994   17:06:22	-- bdev/bdev_raid.sh@614 -- # sleep 1
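Re-attaching the spare makes the raid module start a fresh rebuild against it, as the NOTICE above shows; the script then sleeps one second before verifying that the process is reported. The same sequence as a sketch (same RPCs and names as this run):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-raid.sock

"$rpc" -s "$sock" bdev_raid_add_base_bdev raid_bdev1 spare
sleep 1   # give the rebuild a moment to show up in get_bdevs output

# Expect "rebuild" here while the process runs, "none" once it finishes.
"$rpc" -s "$sock" bdev_raid_get_bdevs all |
  jq -r '.[] | select(.name == "raid_bdev1") | .process.type // "none"'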
00:22:30.928   17:06:23	-- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:22:30.928   17:06:23	-- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:22:30.928   17:06:23	-- bdev/bdev_raid.sh@184 -- # local process_type=rebuild
00:22:30.928   17:06:23	-- bdev/bdev_raid.sh@185 -- # local target=spare
00:22:30.928   17:06:23	-- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:22:30.928    17:06:23	-- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:22:30.928    17:06:23	-- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:22:31.186   17:06:24	-- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:22:31.186    "name": "raid_bdev1",
00:22:31.186    "uuid": "42f429a5-2fd3-4b61-b895-564d1d86cf42",
00:22:31.186    "strip_size_kb": 64,
00:22:31.186    "state": "online",
00:22:31.186    "raid_level": "raid5f",
00:22:31.186    "superblock": false,
00:22:31.186    "num_base_bdevs": 3,
00:22:31.186    "num_base_bdevs_discovered": 3,
00:22:31.186    "num_base_bdevs_operational": 3,
00:22:31.186    "process": {
00:22:31.186      "type": "rebuild",
00:22:31.186      "target": "spare",
00:22:31.186      "progress": {
00:22:31.186        "blocks": 24576,
00:22:31.186        "percent": 18
00:22:31.186      }
00:22:31.186    },
00:22:31.186    "base_bdevs_list": [
00:22:31.186      {
00:22:31.186        "name": "spare",
00:22:31.186        "uuid": "08c92c1a-f6cc-5bd8-8fb1-405044bfea60",
00:22:31.186        "is_configured": true,
00:22:31.186        "data_offset": 0,
00:22:31.186        "data_size": 65536
00:22:31.186      },
00:22:31.186      {
00:22:31.186        "name": "BaseBdev2",
00:22:31.186        "uuid": "ca25a12f-48c6-4b28-ac0e-8f9a79791a0a",
00:22:31.186        "is_configured": true,
00:22:31.186        "data_offset": 0,
00:22:31.186        "data_size": 65536
00:22:31.186      },
00:22:31.186      {
00:22:31.186        "name": "BaseBdev3",
00:22:31.186        "uuid": "32886d87-22f7-4b5d-b984-fc67ee414182",
00:22:31.187        "is_configured": true,
00:22:31.187        "data_offset": 0,
00:22:31.187        "data_size": 65536
00:22:31.187      }
00:22:31.187    ]
00:22:31.187  }'
00:22:31.187    17:06:24	-- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:22:31.445   17:06:24	-- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:22:31.445    17:06:24	-- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:22:31.445   17:06:24	-- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]]
00:22:31.445   17:06:24	-- bdev/bdev_raid.sh@617 -- # '[' false = true ']'
00:22:31.445   17:06:24	-- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=3
00:22:31.445   17:06:24	-- bdev/bdev_raid.sh@644 -- # '[' raid5f = raid1 ']'
00:22:31.445   17:06:24	-- bdev/bdev_raid.sh@657 -- # local timeout=585
00:22:31.445   17:06:24	-- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout ))
00:22:31.445   17:06:24	-- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:22:31.445   17:06:24	-- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:22:31.445   17:06:24	-- bdev/bdev_raid.sh@184 -- # local process_type=rebuild
00:22:31.445   17:06:24	-- bdev/bdev_raid.sh@185 -- # local target=spare
00:22:31.445   17:06:24	-- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:22:31.445    17:06:24	-- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:22:31.445    17:06:24	-- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:22:31.703   17:06:24	-- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:22:31.703    "name": "raid_bdev1",
00:22:31.703    "uuid": "42f429a5-2fd3-4b61-b895-564d1d86cf42",
00:22:31.703    "strip_size_kb": 64,
00:22:31.703    "state": "online",
00:22:31.703    "raid_level": "raid5f",
00:22:31.703    "superblock": false,
00:22:31.703    "num_base_bdevs": 3,
00:22:31.703    "num_base_bdevs_discovered": 3,
00:22:31.703    "num_base_bdevs_operational": 3,
00:22:31.703    "process": {
00:22:31.703      "type": "rebuild",
00:22:31.703      "target": "spare",
00:22:31.703      "progress": {
00:22:31.703        "blocks": 32768,
00:22:31.703        "percent": 25
00:22:31.703      }
00:22:31.703    },
00:22:31.703    "base_bdevs_list": [
00:22:31.703      {
00:22:31.703        "name": "spare",
00:22:31.703        "uuid": "08c92c1a-f6cc-5bd8-8fb1-405044bfea60",
00:22:31.703        "is_configured": true,
00:22:31.703        "data_offset": 0,
00:22:31.703        "data_size": 65536
00:22:31.703      },
00:22:31.703      {
00:22:31.703        "name": "BaseBdev2",
00:22:31.703        "uuid": "ca25a12f-48c6-4b28-ac0e-8f9a79791a0a",
00:22:31.703        "is_configured": true,
00:22:31.703        "data_offset": 0,
00:22:31.703        "data_size": 65536
00:22:31.703      },
00:22:31.703      {
00:22:31.703        "name": "BaseBdev3",
00:22:31.703        "uuid": "32886d87-22f7-4b5d-b984-fc67ee414182",
00:22:31.703        "is_configured": true,
00:22:31.703        "data_offset": 0,
00:22:31.703        "data_size": 65536
00:22:31.703      }
00:22:31.703    ]
00:22:31.703  }'
00:22:31.703    17:06:24	-- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:22:31.703   17:06:24	-- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:22:31.703    17:06:24	-- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:22:31.703   17:06:24	-- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]]
00:22:31.703   17:06:24	-- bdev/bdev_raid.sh@662 -- # sleep 1
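From here the script enters a poll loop (bdev_raid.sh@658-662): while bash's built-in SECONDS counter is under the deadline set at @657, it re-verifies the rebuild each second, and the progress.blocks/percent fields climb on each pass (24576/18% then 32768/25% above). The loop's shape, as a sketch using the same jq filter:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-raid.sock
timeout=$((SECONDS + 60))   # the script computes its own deadline

while (( SECONDS < timeout )); do
  ptype=$("$rpc" -s "$sock" bdev_raid_get_bdevs all |
          jq -r '.[] | select(.name == "raid_bdev1") | .process.type // "none"')
  [[ $ptype == none ]] && break   # rebuild finished; the process object is gone
  sleep 1
done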
00:22:32.707   17:06:25	-- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout ))
00:22:32.707   17:06:25	-- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:22:32.707   17:06:25	-- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:22:32.707   17:06:25	-- bdev/bdev_raid.sh@184 -- # local process_type=rebuild
00:22:32.707   17:06:25	-- bdev/bdev_raid.sh@185 -- # local target=spare
00:22:32.707   17:06:25	-- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:22:32.707    17:06:25	-- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:22:32.707    17:06:25	-- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:22:32.966   17:06:25	-- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:22:32.966    "name": "raid_bdev1",
00:22:32.966    "uuid": "42f429a5-2fd3-4b61-b895-564d1d86cf42",
00:22:32.966    "strip_size_kb": 64,
00:22:32.966    "state": "online",
00:22:32.966    "raid_level": "raid5f",
00:22:32.966    "superblock": false,
00:22:32.966    "num_base_bdevs": 3,
00:22:32.966    "num_base_bdevs_discovered": 3,
00:22:32.966    "num_base_bdevs_operational": 3,
00:22:32.966    "process": {
00:22:32.966      "type": "rebuild",
00:22:32.966      "target": "spare",
00:22:32.966      "progress": {
00:22:32.966        "blocks": 59392,
00:22:32.966        "percent": 45
00:22:32.966      }
00:22:32.966    },
00:22:32.966    "base_bdevs_list": [
00:22:32.966      {
00:22:32.966        "name": "spare",
00:22:32.966        "uuid": "08c92c1a-f6cc-5bd8-8fb1-405044bfea60",
00:22:32.966        "is_configured": true,
00:22:32.966        "data_offset": 0,
00:22:32.966        "data_size": 65536
00:22:32.966      },
00:22:32.966      {
00:22:32.966        "name": "BaseBdev2",
00:22:32.966        "uuid": "ca25a12f-48c6-4b28-ac0e-8f9a79791a0a",
00:22:32.966        "is_configured": true,
00:22:32.966        "data_offset": 0,
00:22:32.966        "data_size": 65536
00:22:32.966      },
00:22:32.966      {
00:22:32.966        "name": "BaseBdev3",
00:22:32.966        "uuid": "32886d87-22f7-4b5d-b984-fc67ee414182",
00:22:32.966        "is_configured": true,
00:22:32.966        "data_offset": 0,
00:22:32.966        "data_size": 65536
00:22:32.966      }
00:22:32.966    ]
00:22:32.966  }'
00:22:32.966    17:06:25	-- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:22:32.966   17:06:25	-- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:22:32.966    17:06:25	-- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:22:33.224   17:06:25	-- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]]
00:22:33.224   17:06:25	-- bdev/bdev_raid.sh@662 -- # sleep 1
00:22:34.157   17:06:26	-- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout ))
00:22:34.157   17:06:26	-- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:22:34.157   17:06:26	-- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:22:34.157   17:06:26	-- bdev/bdev_raid.sh@184 -- # local process_type=rebuild
00:22:34.157   17:06:26	-- bdev/bdev_raid.sh@185 -- # local target=spare
00:22:34.157   17:06:26	-- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:22:34.157    17:06:26	-- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:22:34.157    17:06:26	-- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:22:34.414   17:06:27	-- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:22:34.414    "name": "raid_bdev1",
00:22:34.414    "uuid": "42f429a5-2fd3-4b61-b895-564d1d86cf42",
00:22:34.414    "strip_size_kb": 64,
00:22:34.414    "state": "online",
00:22:34.414    "raid_level": "raid5f",
00:22:34.414    "superblock": false,
00:22:34.414    "num_base_bdevs": 3,
00:22:34.414    "num_base_bdevs_discovered": 3,
00:22:34.414    "num_base_bdevs_operational": 3,
00:22:34.414    "process": {
00:22:34.414      "type": "rebuild",
00:22:34.414      "target": "spare",
00:22:34.414      "progress": {
00:22:34.414        "blocks": 86016,
00:22:34.414        "percent": 65
00:22:34.414      }
00:22:34.414    },
00:22:34.414    "base_bdevs_list": [
00:22:34.414      {
00:22:34.414        "name": "spare",
00:22:34.414        "uuid": "08c92c1a-f6cc-5bd8-8fb1-405044bfea60",
00:22:34.414        "is_configured": true,
00:22:34.414        "data_offset": 0,
00:22:34.414        "data_size": 65536
00:22:34.414      },
00:22:34.414      {
00:22:34.414        "name": "BaseBdev2",
00:22:34.414        "uuid": "ca25a12f-48c6-4b28-ac0e-8f9a79791a0a",
00:22:34.414        "is_configured": true,
00:22:34.414        "data_offset": 0,
00:22:34.414        "data_size": 65536
00:22:34.414      },
00:22:34.414      {
00:22:34.414        "name": "BaseBdev3",
00:22:34.414        "uuid": "32886d87-22f7-4b5d-b984-fc67ee414182",
00:22:34.414        "is_configured": true,
00:22:34.414        "data_offset": 0,
00:22:34.414        "data_size": 65536
00:22:34.414      }
00:22:34.414    ]
00:22:34.414  }'
00:22:34.414    17:06:27	-- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:22:34.414   17:06:27	-- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:22:34.414    17:06:27	-- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:22:34.414   17:06:27	-- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]]
00:22:34.414   17:06:27	-- bdev/bdev_raid.sh@662 -- # sleep 1
00:22:35.348   17:06:28	-- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout ))
00:22:35.348   17:06:28	-- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:22:35.348   17:06:28	-- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:22:35.348   17:06:28	-- bdev/bdev_raid.sh@184 -- # local process_type=rebuild
00:22:35.348   17:06:28	-- bdev/bdev_raid.sh@185 -- # local target=spare
00:22:35.348   17:06:28	-- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:22:35.348    17:06:28	-- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:22:35.348    17:06:28	-- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:22:35.606   17:06:28	-- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:22:35.606    "name": "raid_bdev1",
00:22:35.606    "uuid": "42f429a5-2fd3-4b61-b895-564d1d86cf42",
00:22:35.606    "strip_size_kb": 64,
00:22:35.606    "state": "online",
00:22:35.606    "raid_level": "raid5f",
00:22:35.606    "superblock": false,
00:22:35.606    "num_base_bdevs": 3,
00:22:35.606    "num_base_bdevs_discovered": 3,
00:22:35.606    "num_base_bdevs_operational": 3,
00:22:35.606    "process": {
00:22:35.606      "type": "rebuild",
00:22:35.606      "target": "spare",
00:22:35.606      "progress": {
00:22:35.606        "blocks": 114688,
00:22:35.606        "percent": 87
00:22:35.606      }
00:22:35.606    },
00:22:35.606    "base_bdevs_list": [
00:22:35.606      {
00:22:35.606        "name": "spare",
00:22:35.606        "uuid": "08c92c1a-f6cc-5bd8-8fb1-405044bfea60",
00:22:35.606        "is_configured": true,
00:22:35.606        "data_offset": 0,
00:22:35.606        "data_size": 65536
00:22:35.606      },
00:22:35.606      {
00:22:35.606        "name": "BaseBdev2",
00:22:35.606        "uuid": "ca25a12f-48c6-4b28-ac0e-8f9a79791a0a",
00:22:35.606        "is_configured": true,
00:22:35.606        "data_offset": 0,
00:22:35.606        "data_size": 65536
00:22:35.606      },
00:22:35.606      {
00:22:35.606        "name": "BaseBdev3",
00:22:35.606        "uuid": "32886d87-22f7-4b5d-b984-fc67ee414182",
00:22:35.606        "is_configured": true,
00:22:35.606        "data_offset": 0,
00:22:35.606        "data_size": 65536
00:22:35.606      }
00:22:35.606    ]
00:22:35.606  }'
00:22:35.606    17:06:28	-- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:22:35.606   17:06:28	-- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:22:35.864    17:06:28	-- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:22:35.864   17:06:28	-- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]]
00:22:35.864   17:06:28	-- bdev/bdev_raid.sh@662 -- # sleep 1
00:22:36.430  [2024-11-19 17:06:29.189800] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1
00:22:36.430  [2024-11-19 17:06:29.189903] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1
00:22:36.430  [2024-11-19 17:06:29.190013] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:22:36.687   17:06:29	-- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout ))
00:22:36.687   17:06:29	-- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:22:36.687   17:06:29	-- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:22:36.687   17:06:29	-- bdev/bdev_raid.sh@184 -- # local process_type=rebuild
00:22:36.687   17:06:29	-- bdev/bdev_raid.sh@185 -- # local target=spare
00:22:36.687   17:06:29	-- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:22:36.687    17:06:29	-- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:22:36.687    17:06:29	-- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:22:36.944   17:06:29	-- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:22:36.944    "name": "raid_bdev1",
00:22:36.944    "uuid": "42f429a5-2fd3-4b61-b895-564d1d86cf42",
00:22:36.944    "strip_size_kb": 64,
00:22:36.945    "state": "online",
00:22:36.945    "raid_level": "raid5f",
00:22:36.945    "superblock": false,
00:22:36.945    "num_base_bdevs": 3,
00:22:36.945    "num_base_bdevs_discovered": 3,
00:22:36.945    "num_base_bdevs_operational": 3,
00:22:36.945    "base_bdevs_list": [
00:22:36.945      {
00:22:36.945        "name": "spare",
00:22:36.945        "uuid": "08c92c1a-f6cc-5bd8-8fb1-405044bfea60",
00:22:36.945        "is_configured": true,
00:22:36.945        "data_offset": 0,
00:22:36.945        "data_size": 65536
00:22:36.945      },
00:22:36.945      {
00:22:36.945        "name": "BaseBdev2",
00:22:36.945        "uuid": "ca25a12f-48c6-4b28-ac0e-8f9a79791a0a",
00:22:36.945        "is_configured": true,
00:22:36.945        "data_offset": 0,
00:22:36.945        "data_size": 65536
00:22:36.945      },
00:22:36.945      {
00:22:36.945        "name": "BaseBdev3",
00:22:36.945        "uuid": "32886d87-22f7-4b5d-b984-fc67ee414182",
00:22:36.945        "is_configured": true,
00:22:36.945        "data_offset": 0,
00:22:36.945        "data_size": 65536
00:22:36.945      }
00:22:36.945    ]
00:22:36.945  }'
00:22:36.945    17:06:29	-- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:22:36.945   17:06:29	-- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]]
00:22:36.945    17:06:29	-- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:22:37.203   17:06:29	-- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]]
00:22:37.203   17:06:29	-- bdev/bdev_raid.sh@660 -- # break
00:22:37.203   17:06:29	-- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none
00:22:37.203   17:06:29	-- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:22:37.203   17:06:29	-- bdev/bdev_raid.sh@184 -- # local process_type=none
00:22:37.203   17:06:29	-- bdev/bdev_raid.sh@185 -- # local target=none
00:22:37.203   17:06:29	-- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:22:37.203    17:06:29	-- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:22:37.203    17:06:29	-- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:22:37.460   17:06:30	-- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:22:37.460    "name": "raid_bdev1",
00:22:37.460    "uuid": "42f429a5-2fd3-4b61-b895-564d1d86cf42",
00:22:37.460    "strip_size_kb": 64,
00:22:37.460    "state": "online",
00:22:37.460    "raid_level": "raid5f",
00:22:37.460    "superblock": false,
00:22:37.460    "num_base_bdevs": 3,
00:22:37.460    "num_base_bdevs_discovered": 3,
00:22:37.460    "num_base_bdevs_operational": 3,
00:22:37.460    "base_bdevs_list": [
00:22:37.460      {
00:22:37.460        "name": "spare",
00:22:37.460        "uuid": "08c92c1a-f6cc-5bd8-8fb1-405044bfea60",
00:22:37.460        "is_configured": true,
00:22:37.460        "data_offset": 0,
00:22:37.460        "data_size": 65536
00:22:37.460      },
00:22:37.460      {
00:22:37.460        "name": "BaseBdev2",
00:22:37.460        "uuid": "ca25a12f-48c6-4b28-ac0e-8f9a79791a0a",
00:22:37.460        "is_configured": true,
00:22:37.460        "data_offset": 0,
00:22:37.460        "data_size": 65536
00:22:37.460      },
00:22:37.460      {
00:22:37.460        "name": "BaseBdev3",
00:22:37.460        "uuid": "32886d87-22f7-4b5d-b984-fc67ee414182",
00:22:37.460        "is_configured": true,
00:22:37.460        "data_offset": 0,
00:22:37.460        "data_size": 65536
00:22:37.460      }
00:22:37.460    ]
00:22:37.460  }'
00:22:37.460    17:06:30	-- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:22:37.460   17:06:30	-- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]]
00:22:37.460    17:06:30	-- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:22:37.460   17:06:30	-- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]]
00:22:37.460   17:06:30	-- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3
00:22:37.460   17:06:30	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:22:37.460   17:06:30	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:22:37.461   17:06:30	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f
00:22:37.461   17:06:30	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:22:37.461   17:06:30	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:22:37.461   17:06:30	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:22:37.461   17:06:30	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:22:37.461   17:06:30	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:22:37.461   17:06:30	-- bdev/bdev_raid.sh@125 -- # local tmp
00:22:37.461    17:06:30	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:22:37.461    17:06:30	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:22:37.718   17:06:30	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:22:37.718    "name": "raid_bdev1",
00:22:37.718    "uuid": "42f429a5-2fd3-4b61-b895-564d1d86cf42",
00:22:37.718    "strip_size_kb": 64,
00:22:37.718    "state": "online",
00:22:37.718    "raid_level": "raid5f",
00:22:37.718    "superblock": false,
00:22:37.718    "num_base_bdevs": 3,
00:22:37.718    "num_base_bdevs_discovered": 3,
00:22:37.718    "num_base_bdevs_operational": 3,
00:22:37.718    "base_bdevs_list": [
00:22:37.718      {
00:22:37.718        "name": "spare",
00:22:37.718        "uuid": "08c92c1a-f6cc-5bd8-8fb1-405044bfea60",
00:22:37.718        "is_configured": true,
00:22:37.718        "data_offset": 0,
00:22:37.718        "data_size": 65536
00:22:37.718      },
00:22:37.718      {
00:22:37.718        "name": "BaseBdev2",
00:22:37.718        "uuid": "ca25a12f-48c6-4b28-ac0e-8f9a79791a0a",
00:22:37.718        "is_configured": true,
00:22:37.718        "data_offset": 0,
00:22:37.718        "data_size": 65536
00:22:37.718      },
00:22:37.718      {
00:22:37.718        "name": "BaseBdev3",
00:22:37.718        "uuid": "32886d87-22f7-4b5d-b984-fc67ee414182",
00:22:37.718        "is_configured": true,
00:22:37.718        "data_offset": 0,
00:22:37.718        "data_size": 65536
00:22:37.718      }
00:22:37.718    ]
00:22:37.718  }'
00:22:37.718   17:06:30	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:22:37.718   17:06:30	-- common/autotest_common.sh@10 -- # set +x
00:22:38.284   17:06:30	-- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1
00:22:38.542  [2024-11-19 17:06:31.264238] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:22:38.542  [2024-11-19 17:06:31.264278] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:22:38.542  [2024-11-19 17:06:31.264409] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:22:38.542  [2024-11-19 17:06:31.264495] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:22:38.542  [2024-11-19 17:06:31.264505] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007880 name raid_bdev1, state offline
00:22:38.542    17:06:31	-- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:22:38.542    17:06:31	-- bdev/bdev_raid.sh@671 -- # jq length
00:22:38.800   17:06:31	-- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]]
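With the rebuild validated, the array is deleted and the script asserts that bdev_raid_get_bdevs now returns an empty list (jq length of 0). As a sketch:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-raid.sock

"$rpc" -s "$sock" bdev_raid_delete raid_bdev1
count=$("$rpc" -s "$sock" bdev_raid_get_bdevs all | jq length)
[[ $count -eq 0 ]] && echo "no raid bdevs remain"   # matches the [[ 0 == 0 ]] above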
00:22:38.800   17:06:31	-- bdev/bdev_raid.sh@673 -- # '[' false = true ']'
00:22:38.800   17:06:31	-- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1'
00:22:38.800   17:06:31	-- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock
00:22:38.800   17:06:31	-- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare')
00:22:38.800   17:06:31	-- bdev/nbd_common.sh@10 -- # local bdev_list
00:22:38.800   17:06:31	-- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:22:38.800   17:06:31	-- bdev/nbd_common.sh@11 -- # local nbd_list
00:22:38.800   17:06:31	-- bdev/nbd_common.sh@12 -- # local i
00:22:38.800   17:06:31	-- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:22:38.800   17:06:31	-- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:22:38.800   17:06:31	-- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0
00:22:39.058  /dev/nbd0
00:22:39.058    17:06:31	-- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:22:39.058   17:06:31	-- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:22:39.058   17:06:31	-- common/autotest_common.sh@866 -- # local nbd_name=nbd0
00:22:39.058   17:06:31	-- common/autotest_common.sh@867 -- # local i
00:22:39.058   17:06:31	-- common/autotest_common.sh@869 -- # (( i = 1 ))
00:22:39.058   17:06:31	-- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:22:39.058   17:06:31	-- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions
00:22:39.058   17:06:31	-- common/autotest_common.sh@871 -- # break
00:22:39.058   17:06:31	-- common/autotest_common.sh@882 -- # (( i = 1 ))
00:22:39.058   17:06:31	-- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:22:39.058   17:06:31	-- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:22:39.058  1+0 records in
00:22:39.058  1+0 records out
00:22:39.058  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0003108 s, 13.2 MB/s
00:22:39.058    17:06:31	-- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:22:39.058   17:06:31	-- common/autotest_common.sh@884 -- # size=4096
00:22:39.058   17:06:31	-- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:22:39.058   17:06:31	-- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:22:39.058   17:06:31	-- common/autotest_common.sh@887 -- # return 0
00:22:39.058   17:06:31	-- bdev/nbd_common.sh@14 -- # (( i++ ))
00:22:39.058   17:06:31	-- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:22:39.058   17:06:31	-- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1
00:22:39.317  /dev/nbd1
00:22:39.317    17:06:32	-- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:22:39.317   17:06:32	-- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:22:39.317   17:06:32	-- common/autotest_common.sh@866 -- # local nbd_name=nbd1
00:22:39.317   17:06:32	-- common/autotest_common.sh@867 -- # local i
00:22:39.317   17:06:32	-- common/autotest_common.sh@869 -- # (( i = 1 ))
00:22:39.317   17:06:32	-- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:22:39.317   17:06:32	-- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions
00:22:39.317   17:06:32	-- common/autotest_common.sh@871 -- # break
00:22:39.317   17:06:32	-- common/autotest_common.sh@882 -- # (( i = 1 ))
00:22:39.317   17:06:32	-- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:22:39.317   17:06:32	-- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:22:39.317  1+0 records in
00:22:39.317  1+0 records out
00:22:39.317  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000427788 s, 9.6 MB/s
00:22:39.317    17:06:32	-- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:22:39.317   17:06:32	-- common/autotest_common.sh@884 -- # size=4096
00:22:39.317   17:06:32	-- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:22:39.317   17:06:32	-- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:22:39.317   17:06:32	-- common/autotest_common.sh@887 -- # return 0
00:22:39.317   17:06:32	-- bdev/nbd_common.sh@14 -- # (( i++ ))
00:22:39.317   17:06:32	-- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:22:39.317   17:06:32	-- bdev/bdev_raid.sh@688 -- # cmp -i 0 /dev/nbd0 /dev/nbd1
00:22:39.317   17:06:32	-- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1'
00:22:39.317   17:06:32	-- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock
00:22:39.317   17:06:32	-- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:22:39.317   17:06:32	-- bdev/nbd_common.sh@50 -- # local nbd_list
00:22:39.317   17:06:32	-- bdev/nbd_common.sh@51 -- # local i
00:22:39.317   17:06:32	-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:22:39.317   17:06:32	-- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0
00:22:39.576    17:06:32	-- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:22:39.576   17:06:32	-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:22:39.576   17:06:32	-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:22:39.576   17:06:32	-- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:22:39.577   17:06:32	-- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:22:39.577   17:06:32	-- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:22:39.577   17:06:32	-- bdev/nbd_common.sh@41 -- # break
00:22:39.577   17:06:32	-- bdev/nbd_common.sh@45 -- # return 0
00:22:39.577   17:06:32	-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:22:39.577   17:06:32	-- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1
00:22:39.836    17:06:32	-- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:22:39.836   17:06:32	-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:22:39.836   17:06:32	-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:22:39.836   17:06:32	-- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:22:39.836   17:06:32	-- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:22:39.836   17:06:32	-- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:22:39.836   17:06:32	-- bdev/nbd_common.sh@41 -- # break
00:22:39.836   17:06:32	-- bdev/nbd_common.sh@45 -- # return 0
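The nbd round-trip above is the data-integrity half of the test: BaseBdev1 (removed from the array earlier in the test and replaced by the spare) and the rebuilt spare are both exported as NBD block devices and byte-compared with cmp; a successful rebuild means the spare reconstructed exactly the bytes its predecessor held. The same check as a sketch (RPCs and device nodes as in this run):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-raid.sock

"$rpc" -s "$sock" nbd_start_disk BaseBdev1 /dev/nbd0
"$rpc" -s "$sock" nbd_start_disk spare /dev/nbd1
cmp -i 0 /dev/nbd0 /dev/nbd1 && echo "rebuilt data matches"   # cmp exits 0 on identical content
"$rpc" -s "$sock" nbd_stop_disk /dev/nbd0
"$rpc" -s "$sock" nbd_stop_disk /dev/nbd1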
00:22:39.836   17:06:32	-- bdev/bdev_raid.sh@692 -- # '[' false = true ']'
00:22:39.836   17:06:32	-- bdev/bdev_raid.sh@709 -- # killprocess 138626
00:22:39.836   17:06:32	-- common/autotest_common.sh@936 -- # '[' -z 138626 ']'
00:22:39.836   17:06:32	-- common/autotest_common.sh@940 -- # kill -0 138626
00:22:39.836    17:06:32	-- common/autotest_common.sh@941 -- # uname
00:22:39.836   17:06:32	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:22:39.836    17:06:32	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 138626
00:22:39.836   17:06:32	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:22:39.836   17:06:32	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:22:39.836   17:06:32	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 138626'
00:22:39.836  killing process with pid 138626
00:22:39.836   17:06:32	-- common/autotest_common.sh@955 -- # kill 138626
00:22:39.836  Received shutdown signal, test time was about 60.000000 seconds
00:22:39.836                                                                                                  Latency(us)
[2024-11-19T17:06:32.700Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
[2024-11-19T17:06:32.700Z]  ===================================================================================================================
[2024-11-19T17:06:32.701Z]  Total                       :                  0.00       0.00       0.00     0.00       0.00 18446744073709551616.00       0.00
00:22:39.837  [2024-11-19 17:06:32.642260] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:22:39.837   17:06:32	-- common/autotest_common.sh@960 -- # wait 138626
00:22:39.837  [2024-11-19 17:06:32.685075] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:22:40.094   17:06:32	-- bdev/bdev_raid.sh@711 -- # return 0
00:22:40.094  
00:22:40.094  real	0m19.648s
00:22:40.094  user	0m29.472s
00:22:40.094  sys	0m3.163s
00:22:40.094   17:06:32	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:22:40.094   17:06:32	-- common/autotest_common.sh@10 -- # set +x
00:22:40.094  ************************************
00:22:40.094  END TEST raid5f_rebuild_test
00:22:40.094  ************************************
00:22:40.353   17:06:32	-- bdev/bdev_raid.sh@749 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 3 true false
00:22:40.353   17:06:32	-- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']'
00:22:40.353   17:06:32	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:22:40.353   17:06:32	-- common/autotest_common.sh@10 -- # set +x
00:22:40.353  ************************************
00:22:40.353  START TEST raid5f_rebuild_test_sb
00:22:40.353  ************************************
00:22:40.353   17:06:33	-- common/autotest_common.sh@1114 -- # raid_rebuild_test raid5f 3 true false
00:22:40.353   17:06:33	-- bdev/bdev_raid.sh@517 -- # local raid_level=raid5f
00:22:40.353   17:06:33	-- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=3
00:22:40.353   17:06:33	-- bdev/bdev_raid.sh@519 -- # local superblock=true
00:22:40.353   17:06:33	-- bdev/bdev_raid.sh@520 -- # local background_io=false
00:22:40.353    17:06:33	-- bdev/bdev_raid.sh@521 -- # (( i = 1 ))
00:22:40.353    17:06:33	-- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs ))
00:22:40.353    17:06:33	-- bdev/bdev_raid.sh@521 -- # echo BaseBdev1
00:22:40.353    17:06:33	-- bdev/bdev_raid.sh@521 -- # (( i++ ))
00:22:40.353    17:06:33	-- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs ))
00:22:40.353    17:06:33	-- bdev/bdev_raid.sh@521 -- # echo BaseBdev2
00:22:40.353    17:06:33	-- bdev/bdev_raid.sh@521 -- # (( i++ ))
00:22:40.353    17:06:33	-- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs ))
00:22:40.353    17:06:33	-- bdev/bdev_raid.sh@521 -- # echo BaseBdev3
00:22:40.353    17:06:33	-- bdev/bdev_raid.sh@521 -- # (( i++ ))
00:22:40.353    17:06:33	-- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs ))
00:22:40.353   17:06:33	-- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3')
00:22:40.353   17:06:33	-- bdev/bdev_raid.sh@521 -- # local base_bdevs
00:22:40.353   17:06:33	-- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1
00:22:40.353   17:06:33	-- bdev/bdev_raid.sh@523 -- # local strip_size
00:22:40.353   17:06:33	-- bdev/bdev_raid.sh@524 -- # local create_arg
00:22:40.353   17:06:33	-- bdev/bdev_raid.sh@525 -- # local raid_bdev_size
00:22:40.353   17:06:33	-- bdev/bdev_raid.sh@526 -- # local data_offset
00:22:40.353   17:06:33	-- bdev/bdev_raid.sh@528 -- # '[' raid5f '!=' raid1 ']'
00:22:40.353   17:06:33	-- bdev/bdev_raid.sh@529 -- # '[' false = true ']'
00:22:40.353   17:06:33	-- bdev/bdev_raid.sh@533 -- # strip_size=64
00:22:40.353   17:06:33	-- bdev/bdev_raid.sh@534 -- # create_arg+=' -z 64'
00:22:40.353   17:06:33	-- bdev/bdev_raid.sh@539 -- # '[' true = true ']'
00:22:40.353   17:06:33	-- bdev/bdev_raid.sh@540 -- # create_arg+=' -s'
00:22:40.353   17:06:33	-- bdev/bdev_raid.sh@544 -- # raid_pid=139152
00:22:40.353   17:06:33	-- bdev/bdev_raid.sh@545 -- # waitforlisten 139152 /var/tmp/spdk-raid.sock
00:22:40.353   17:06:33	-- common/autotest_common.sh@829 -- # '[' -z 139152 ']'
00:22:40.353   17:06:33	-- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid
00:22:40.353   17:06:33	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock
00:22:40.353  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...
00:22:40.353   17:06:33	-- common/autotest_common.sh@834 -- # local max_retries=100
00:22:40.353   17:06:33	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...'
00:22:40.353   17:06:33	-- common/autotest_common.sh@838 -- # xtrace_disable
00:22:40.353   17:06:33	-- common/autotest_common.sh@10 -- # set +x
00:22:40.353  [2024-11-19 17:06:33.079363] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:22:40.353  [2024-11-19 17:06:33.079611] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid139152 ]
00:22:40.353  I/O size of 3145728 is greater than zero copy threshold (65536).
00:22:40.353  Zero copy mechanism will not be used.
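Per the bdevperf command line traced at @543, this run drives a 60-second random read/write load (-t 60 -w randrw) with a 50/50 read mix (-M 50) at queue depth 2 (-q 2) and 3 MiB I/Os (-o 3M); as the notice above explains, 3 MiB (3145728 = 3 x 1024 x 1024 bytes) exceeds the 65536-byte zero-copy threshold, so buffers are copied rather than passed zero-copy.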
00:22:40.612  [2024-11-19 17:06:33.233306] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:22:40.612  [2024-11-19 17:06:33.282543] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:22:40.612  [2024-11-19 17:06:33.326442] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:22:41.179   17:06:34	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:22:41.179   17:06:34	-- common/autotest_common.sh@862 -- # return 0
00:22:41.179   17:06:34	-- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}"
00:22:41.179   17:06:34	-- bdev/bdev_raid.sh@549 -- # '[' true = true ']'
00:22:41.179   17:06:34	-- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc
00:22:41.438  BaseBdev1_malloc
00:22:41.697   17:06:34	-- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1
00:22:41.697  [2024-11-19 17:06:34.523380] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc
00:22:41.697  [2024-11-19 17:06:34.523508] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:22:41.697  [2024-11-19 17:06:34.523557] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000005a80
00:22:41.697  [2024-11-19 17:06:34.523602] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:22:41.697  [2024-11-19 17:06:34.526272] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:22:41.697  [2024-11-19 17:06:34.526336] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:22:41.697  BaseBdev1
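Each array member is built the same way: a RAM-backed malloc bdev wrapped by a passthru bdev, so the raid module claims the passthru layer ("bdev claimed" above) while the malloc bdev stays untouched underneath. With bdev_malloc_create 32 512, each member is 32 MiB of 512-byte blocks, i.e. 65536 blocks. A sketch of the pair of RPCs:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-raid.sock

# 32 MiB RAM disk with 512-byte blocks (65536 blocks), then a passthru
# wrapper that the raid module will claim as BaseBdev1.
"$rpc" -s "$sock" bdev_malloc_create 32 512 -b BaseBdev1_malloc
"$rpc" -s "$sock" bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1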
00:22:41.697   17:06:34	-- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}"
00:22:41.697   17:06:34	-- bdev/bdev_raid.sh@549 -- # '[' true = true ']'
00:22:41.697   17:06:34	-- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc
00:22:41.955  BaseBdev2_malloc
00:22:41.955   17:06:34	-- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2
00:22:42.213  [2024-11-19 17:06:34.992551] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc
00:22:42.213  [2024-11-19 17:06:34.992660] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:22:42.213  [2024-11-19 17:06:34.992702] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680
00:22:42.213  [2024-11-19 17:06:34.992746] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:22:42.213  [2024-11-19 17:06:34.995247] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:22:42.213  [2024-11-19 17:06:34.995305] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2
00:22:42.213  BaseBdev2
00:22:42.213   17:06:35	-- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}"
00:22:42.213   17:06:35	-- bdev/bdev_raid.sh@549 -- # '[' true = true ']'
00:22:42.213   17:06:35	-- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc
00:22:42.497  BaseBdev3_malloc
00:22:42.497   17:06:35	-- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3
00:22:42.755  [2024-11-19 17:06:35.413392] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc
00:22:42.755  [2024-11-19 17:06:35.413503] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:22:42.755  [2024-11-19 17:06:35.413548] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
00:22:42.755  [2024-11-19 17:06:35.413605] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:22:42.755  [2024-11-19 17:06:35.416251] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:22:42.755  [2024-11-19 17:06:35.416321] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3
00:22:42.755  BaseBdev3
00:22:42.755   17:06:35	-- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc
00:22:43.012  spare_malloc
00:22:43.013   17:06:35	-- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000
00:22:43.013  spare_delay
00:22:43.013   17:06:35	-- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare
00:22:43.271  [2024-11-19 17:06:36.030776] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay
00:22:43.271  [2024-11-19 17:06:36.030895] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:22:43.271  [2024-11-19 17:06:36.030950] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480
00:22:43.271  [2024-11-19 17:06:36.031000] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:22:43.271  [2024-11-19 17:06:36.033588] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:22:43.271  [2024-11-19 17:06:36.033647] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare
00:22:43.271  spare
00:22:43.271   17:06:36	-- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1
00:22:43.530  [2024-11-19 17:06:36.282998] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:22:43.530  [2024-11-19 17:06:36.285244] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:22:43.530  [2024-11-19 17:06:36.285315] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:22:43.530  [2024-11-19 17:06:36.285542] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008a80
00:22:43.530  [2024-11-19 17:06:36.285553] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512
00:22:43.530  [2024-11-19 17:06:36.285723] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460
00:22:43.530  [2024-11-19 17:06:36.286429] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008a80
00:22:43.530  [2024-11-19 17:06:36.286451] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008a80
00:22:43.530  [2024-11-19 17:06:36.286574] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:22:43.530   17:06:36	-- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3
00:22:43.530   17:06:36	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:22:43.530   17:06:36	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:22:43.530   17:06:36	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f
00:22:43.530   17:06:36	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:22:43.530   17:06:36	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:22:43.530   17:06:36	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:22:43.530   17:06:36	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:22:43.530   17:06:36	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:22:43.530   17:06:36	-- bdev/bdev_raid.sh@125 -- # local tmp
00:22:43.530    17:06:36	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:22:43.530    17:06:36	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:22:43.788   17:06:36	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:22:43.788    "name": "raid_bdev1",
00:22:43.788    "uuid": "4d01087b-3e41-4698-bd06-67c79297b49b",
00:22:43.788    "strip_size_kb": 64,
00:22:43.788    "state": "online",
00:22:43.788    "raid_level": "raid5f",
00:22:43.788    "superblock": true,
00:22:43.788    "num_base_bdevs": 3,
00:22:43.788    "num_base_bdevs_discovered": 3,
00:22:43.788    "num_base_bdevs_operational": 3,
00:22:43.788    "base_bdevs_list": [
00:22:43.788      {
00:22:43.788        "name": "BaseBdev1",
00:22:43.788        "uuid": "33e3098e-0670-5bc4-816c-9f913f841038",
00:22:43.788        "is_configured": true,
00:22:43.788        "data_offset": 2048,
00:22:43.788        "data_size": 63488
00:22:43.788      },
00:22:43.788      {
00:22:43.788        "name": "BaseBdev2",
00:22:43.788        "uuid": "d96b251d-8d9c-5202-bf31-de96bb9caec7",
00:22:43.788        "is_configured": true,
00:22:43.788        "data_offset": 2048,
00:22:43.788        "data_size": 63488
00:22:43.788      },
00:22:43.788      {
00:22:43.788        "name": "BaseBdev3",
00:22:43.788        "uuid": "44b813d6-eac6-52c5-a846-8b0bb273116f",
00:22:43.788        "is_configured": true,
00:22:43.788        "data_offset": 2048,
00:22:43.788        "data_size": 63488
00:22:43.788      }
00:22:43.788    ]
00:22:43.788  }'
00:22:43.788   17:06:36	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:22:43.789   17:06:36	-- common/autotest_common.sh@10 -- # set +x
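The superblock run differs from the previous test in exactly the offsets visible above: with -s, each base bdev reserves its first 2048 blocks (1 MiB at 512-byte blocks) ahead of the data region for raid metadata, so data_offset is 2048 and data_size is 65536 - 2048 = 63488 blocks, versus 0 and 65536 before. The array's blockcnt of 126976 (logged at configure time) is consistent: raid5f over three members keeps (3 - 1) x 63488 = 126976 data blocks.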
00:22:44.355    17:06:37	-- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1
00:22:44.355    17:06:37	-- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks'
00:22:44.922  [2024-11-19 17:06:37.472645] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:22:44.922   17:06:37	-- bdev/bdev_raid.sh@567 -- # raid_bdev_size=126976
00:22:44.922    17:06:37	-- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:22:44.922    17:06:37	-- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset'
00:22:44.922   17:06:37	-- bdev/bdev_raid.sh@570 -- # data_offset=2048
00:22:44.922   17:06:37	-- bdev/bdev_raid.sh@572 -- # '[' false = true ']'
00:22:44.922   17:06:37	-- bdev/bdev_raid.sh@576 -- # local write_unit_size
00:22:44.922   17:06:37	-- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0
00:22:44.922   17:06:37	-- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock
00:22:44.922   17:06:37	-- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1')
00:22:44.922   17:06:37	-- bdev/nbd_common.sh@10 -- # local bdev_list
00:22:44.922   17:06:37	-- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0')
00:22:44.922   17:06:37	-- bdev/nbd_common.sh@11 -- # local nbd_list
00:22:44.922   17:06:37	-- bdev/nbd_common.sh@12 -- # local i
00:22:44.922   17:06:37	-- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:22:44.922   17:06:37	-- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:22:44.922   17:06:37	-- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0
00:22:45.181  [2024-11-19 17:06:37.964628] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600
00:22:45.181  /dev/nbd0
00:22:45.181    17:06:38	-- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:22:45.181   17:06:38	-- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:22:45.181   17:06:38	-- common/autotest_common.sh@866 -- # local nbd_name=nbd0
00:22:45.181   17:06:38	-- common/autotest_common.sh@867 -- # local i
00:22:45.181   17:06:38	-- common/autotest_common.sh@869 -- # (( i = 1 ))
00:22:45.181   17:06:38	-- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:22:45.181   17:06:38	-- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions
00:22:45.181   17:06:38	-- common/autotest_common.sh@871 -- # break
00:22:45.181   17:06:38	-- common/autotest_common.sh@882 -- # (( i = 1 ))
00:22:45.181   17:06:38	-- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:22:45.181   17:06:38	-- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:22:45.181  1+0 records in
00:22:45.181  1+0 records out
00:22:45.181  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000293342 s, 14.0 MB/s
00:22:45.181    17:06:38	-- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:22:45.181   17:06:38	-- common/autotest_common.sh@884 -- # size=4096
00:22:45.181   17:06:38	-- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:22:45.181   17:06:38	-- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:22:45.181   17:06:38	-- common/autotest_common.sh@887 -- # return 0
00:22:45.181   17:06:38	-- bdev/nbd_common.sh@14 -- # (( i++ ))
00:22:45.181   17:06:38	-- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:22:45.181   17:06:38	-- bdev/bdev_raid.sh@580 -- # '[' raid5f = raid5f ']'
00:22:45.181   17:06:38	-- bdev/bdev_raid.sh@581 -- # write_unit_size=256
00:22:45.181   17:06:38	-- bdev/bdev_raid.sh@582 -- # echo 128
00:22:45.181   17:06:38	-- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=496 oflag=direct
00:22:45.748  496+0 records in
00:22:45.748  496+0 records out
00:22:45.748  65011712 bytes (65 MB, 62 MiB) copied, 0.393164 s, 165 MB/s
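The dd sizing above is full-stripe math: with a 64 KiB strip and two data members per stripe (raid5f over three bdevs devotes one strip per stripe to parity), a full stripe carries 2 x 64 KiB = 131072 bytes of data, matching bs=131072 (the write_unit_size of 256 blocks set at @581), and 496 such writes cover the device's 126976 x 512 = 65011712 bytes exactly, so every write maps to whole stripes and parity is computed without read-modify-write. The arithmetic as a sketch:

# Full-stripe write sizing for this array (numbers from the log above).
strip_bytes=$((64 * 1024))           # strip_size_kb = 64
data_members=2                       # raid5f over 3 bdevs: 1 strip of parity
bs=$((strip_bytes * data_members))   # 131072, the dd block size
dev_bytes=$((126976 * 512))          # 65011712, from the raid blockcnt
echo "count=$((dev_bytes / bs))"     # 496, the dd count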
00:22:45.748   17:06:38	-- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0
00:22:45.748   17:06:38	-- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock
00:22:45.748   17:06:38	-- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:22:45.748   17:06:38	-- bdev/nbd_common.sh@50 -- # local nbd_list
00:22:45.748   17:06:38	-- bdev/nbd_common.sh@51 -- # local i
00:22:45.748   17:06:38	-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:22:45.748   17:06:38	-- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0
00:22:46.007    17:06:38	-- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:22:46.007  [2024-11-19 17:06:38.729981] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:22:46.007   17:06:38	-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:22:46.007   17:06:38	-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:22:46.007   17:06:38	-- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:22:46.007   17:06:38	-- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:22:46.007   17:06:38	-- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:22:46.007   17:06:38	-- bdev/nbd_common.sh@41 -- # break
00:22:46.007   17:06:38	-- bdev/nbd_common.sh@45 -- # return 0
00:22:46.007   17:06:38	-- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1
00:22:46.266  [2024-11-19 17:06:38.913661] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:22:46.266   17:06:38	-- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2
00:22:46.266   17:06:38	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:22:46.266   17:06:38	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:22:46.266   17:06:38	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f
00:22:46.266   17:06:38	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:22:46.266   17:06:38	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2
00:22:46.266   17:06:38	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:22:46.266   17:06:38	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:22:46.266   17:06:38	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:22:46.266   17:06:38	-- bdev/bdev_raid.sh@125 -- # local tmp
00:22:46.266    17:06:38	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:22:46.266    17:06:38	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:22:46.524   17:06:39	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:22:46.524    "name": "raid_bdev1",
00:22:46.524    "uuid": "4d01087b-3e41-4698-bd06-67c79297b49b",
00:22:46.524    "strip_size_kb": 64,
00:22:46.524    "state": "online",
00:22:46.524    "raid_level": "raid5f",
00:22:46.524    "superblock": true,
00:22:46.524    "num_base_bdevs": 3,
00:22:46.524    "num_base_bdevs_discovered": 2,
00:22:46.524    "num_base_bdevs_operational": 2,
00:22:46.524    "base_bdevs_list": [
00:22:46.524      {
00:22:46.524        "name": null,
00:22:46.524        "uuid": "00000000-0000-0000-0000-000000000000",
00:22:46.524        "is_configured": false,
00:22:46.524        "data_offset": 2048,
00:22:46.524        "data_size": 63488
00:22:46.524      },
00:22:46.524      {
00:22:46.524        "name": "BaseBdev2",
00:22:46.524        "uuid": "d96b251d-8d9c-5202-bf31-de96bb9caec7",
00:22:46.524        "is_configured": true,
00:22:46.524        "data_offset": 2048,
00:22:46.524        "data_size": 63488
00:22:46.524      },
00:22:46.524      {
00:22:46.524        "name": "BaseBdev3",
00:22:46.524        "uuid": "44b813d6-eac6-52c5-a846-8b0bb273116f",
00:22:46.524        "is_configured": true,
00:22:46.524        "data_offset": 2048,
00:22:46.524        "data_size": 63488
00:22:46.524      }
00:22:46.524    ]
00:22:46.524  }'
00:22:46.524   17:06:39	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:22:46.524   17:06:39	-- common/autotest_common.sh@10 -- # set +x
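With BaseBdev1 removed, the verify step above confirms the array rides on in degraded mode: num_base_bdevs stays 3 while discovered/operational drop to 2, and the vacated slot becomes a null entry with the all-zero UUID. A minimal standalone version of that check, using the same RPC and jq pattern as the script (a sketch, not the script's exact assertion logic):

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        bdev_raid_get_bdevs all \
      | jq -e '.[] | select(.name == "raid_bdev1")
               | .state == "online" and .num_base_bdevs_discovered == 2'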
00:22:47.088   17:06:39	-- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare
00:22:47.377  [2024-11-19 17:06:40.089269] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare
00:22:47.377  [2024-11-19 17:06:40.089359] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:22:47.377  [2024-11-19 17:06:40.093483] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000025500
00:22:47.377  [2024-11-19 17:06:40.096355] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:22:47.377   17:06:40	-- bdev/bdev_raid.sh@598 -- # sleep 1
00:22:48.320   17:06:41	-- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:22:48.320   17:06:41	-- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:22:48.320   17:06:41	-- bdev/bdev_raid.sh@184 -- # local process_type=rebuild
00:22:48.320   17:06:41	-- bdev/bdev_raid.sh@185 -- # local target=spare
00:22:48.320   17:06:41	-- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:22:48.320    17:06:41	-- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:22:48.320    17:06:41	-- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:22:48.579   17:06:41	-- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:22:48.580    "name": "raid_bdev1",
00:22:48.580    "uuid": "4d01087b-3e41-4698-bd06-67c79297b49b",
00:22:48.580    "strip_size_kb": 64,
00:22:48.580    "state": "online",
00:22:48.580    "raid_level": "raid5f",
00:22:48.580    "superblock": true,
00:22:48.580    "num_base_bdevs": 3,
00:22:48.580    "num_base_bdevs_discovered": 3,
00:22:48.580    "num_base_bdevs_operational": 3,
00:22:48.580    "process": {
00:22:48.580      "type": "rebuild",
00:22:48.580      "target": "spare",
00:22:48.580      "progress": {
00:22:48.580        "blocks": 24576,
00:22:48.580        "percent": 19
00:22:48.580      }
00:22:48.580    },
00:22:48.580    "base_bdevs_list": [
00:22:48.580      {
00:22:48.580        "name": "spare",
00:22:48.580        "uuid": "011df64c-04f4-54d7-bd3a-eed1d90ca32d",
00:22:48.580        "is_configured": true,
00:22:48.580        "data_offset": 2048,
00:22:48.580        "data_size": 63488
00:22:48.580      },
00:22:48.580      {
00:22:48.580        "name": "BaseBdev2",
00:22:48.580        "uuid": "d96b251d-8d9c-5202-bf31-de96bb9caec7",
00:22:48.580        "is_configured": true,
00:22:48.580        "data_offset": 2048,
00:22:48.580        "data_size": 63488
00:22:48.580      },
00:22:48.580      {
00:22:48.580        "name": "BaseBdev3",
00:22:48.580        "uuid": "44b813d6-eac6-52c5-a846-8b0bb273116f",
00:22:48.580        "is_configured": true,
00:22:48.580        "data_offset": 2048,
00:22:48.580        "data_size": 63488
00:22:48.580      }
00:22:48.580    ]
00:22:48.580  }'
00:22:48.580    17:06:41	-- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:22:48.580   17:06:41	-- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:22:48.580    17:06:41	-- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:22:48.839   17:06:41	-- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]]
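The progress percentages in these JSON dumps follow directly from the geometry reported above: raid5f over 3 base bdevs with data_size 63488 yields (3 - 1) * 63488 = 126976 data blocks, and percent is the integer-truncated share of blocks rebuilt (every progress sample in this log matches that rule):

    data_size=63488; n=3; blocks=24576
    total=$(( (n - 1) * data_size ))     # 126976 data blocks in the array
    echo $(( blocks * 100 / total ))     # 19, matching "percent": 19 above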
00:22:48.839   17:06:41	-- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare
00:22:49.098  [2024-11-19 17:06:41.723706] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:22:49.098  [2024-11-19 17:06:41.813904] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device
00:22:49.098  [2024-11-19 17:06:41.814052] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
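Removing the rebuild target ("spare") mid-rebuild aborts the process, which is why the WARNING above reports the rebuild finishing with "No such device". Once teardown completes, the process object disappears from the RPC output; the same jq expression the script's verify helper uses will then report none:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        bdev_raid_get_bdevs all \
      | jq -r '.[] | select(.name == "raid_bdev1") | .process.type // "none"'
    # prints "none" after the aborted rebuild has been torn down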
00:22:49.098   17:06:41	-- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2
00:22:49.098   17:06:41	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:22:49.098   17:06:41	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:22:49.098   17:06:41	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f
00:22:49.098   17:06:41	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:22:49.098   17:06:41	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2
00:22:49.098   17:06:41	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:22:49.098   17:06:41	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:22:49.099   17:06:41	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:22:49.099   17:06:41	-- bdev/bdev_raid.sh@125 -- # local tmp
00:22:49.099    17:06:41	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:22:49.099    17:06:41	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:22:49.358   17:06:42	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:22:49.358    "name": "raid_bdev1",
00:22:49.358    "uuid": "4d01087b-3e41-4698-bd06-67c79297b49b",
00:22:49.358    "strip_size_kb": 64,
00:22:49.358    "state": "online",
00:22:49.358    "raid_level": "raid5f",
00:22:49.358    "superblock": true,
00:22:49.358    "num_base_bdevs": 3,
00:22:49.358    "num_base_bdevs_discovered": 2,
00:22:49.358    "num_base_bdevs_operational": 2,
00:22:49.358    "base_bdevs_list": [
00:22:49.358      {
00:22:49.358        "name": null,
00:22:49.358        "uuid": "00000000-0000-0000-0000-000000000000",
00:22:49.358        "is_configured": false,
00:22:49.358        "data_offset": 2048,
00:22:49.358        "data_size": 63488
00:22:49.358      },
00:22:49.358      {
00:22:49.358        "name": "BaseBdev2",
00:22:49.358        "uuid": "d96b251d-8d9c-5202-bf31-de96bb9caec7",
00:22:49.358        "is_configured": true,
00:22:49.358        "data_offset": 2048,
00:22:49.358        "data_size": 63488
00:22:49.358      },
00:22:49.358      {
00:22:49.358        "name": "BaseBdev3",
00:22:49.358        "uuid": "44b813d6-eac6-52c5-a846-8b0bb273116f",
00:22:49.358        "is_configured": true,
00:22:49.358        "data_offset": 2048,
00:22:49.358        "data_size": 63488
00:22:49.358      }
00:22:49.358    ]
00:22:49.358  }'
00:22:49.358   17:06:42	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:22:49.358   17:06:42	-- common/autotest_common.sh@10 -- # set +x
00:22:49.929   17:06:42	-- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none
00:22:49.929   17:06:42	-- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:22:49.929   17:06:42	-- bdev/bdev_raid.sh@184 -- # local process_type=none
00:22:49.929   17:06:42	-- bdev/bdev_raid.sh@185 -- # local target=none
00:22:49.929   17:06:42	-- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:22:49.929    17:06:42	-- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:22:49.929    17:06:42	-- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:22:50.188   17:06:43	-- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:22:50.188    "name": "raid_bdev1",
00:22:50.188    "uuid": "4d01087b-3e41-4698-bd06-67c79297b49b",
00:22:50.188    "strip_size_kb": 64,
00:22:50.188    "state": "online",
00:22:50.188    "raid_level": "raid5f",
00:22:50.188    "superblock": true,
00:22:50.188    "num_base_bdevs": 3,
00:22:50.188    "num_base_bdevs_discovered": 2,
00:22:50.188    "num_base_bdevs_operational": 2,
00:22:50.188    "base_bdevs_list": [
00:22:50.188      {
00:22:50.188        "name": null,
00:22:50.188        "uuid": "00000000-0000-0000-0000-000000000000",
00:22:50.188        "is_configured": false,
00:22:50.188        "data_offset": 2048,
00:22:50.188        "data_size": 63488
00:22:50.188      },
00:22:50.188      {
00:22:50.188        "name": "BaseBdev2",
00:22:50.188        "uuid": "d96b251d-8d9c-5202-bf31-de96bb9caec7",
00:22:50.188        "is_configured": true,
00:22:50.188        "data_offset": 2048,
00:22:50.188        "data_size": 63488
00:22:50.188      },
00:22:50.188      {
00:22:50.188        "name": "BaseBdev3",
00:22:50.188        "uuid": "44b813d6-eac6-52c5-a846-8b0bb273116f",
00:22:50.188        "is_configured": true,
00:22:50.188        "data_offset": 2048,
00:22:50.188        "data_size": 63488
00:22:50.188      }
00:22:50.188    ]
00:22:50.188  }'
00:22:50.188    17:06:43	-- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:22:50.446   17:06:43	-- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]]
00:22:50.446    17:06:43	-- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:22:50.446   17:06:43	-- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]]
00:22:50.446   17:06:43	-- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare
00:22:50.705  [2024-11-19 17:06:43.380915] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare
00:22:50.705  [2024-11-19 17:06:43.380967] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:22:50.705  [2024-11-19 17:06:43.384734] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000256a0
00:22:50.705  [2024-11-19 17:06:43.387220] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:22:50.705   17:06:43	-- bdev/bdev_raid.sh@614 -- # sleep 1
00:22:51.640   17:06:44	-- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:22:51.640   17:06:44	-- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:22:51.640   17:06:44	-- bdev/bdev_raid.sh@184 -- # local process_type=rebuild
00:22:51.640   17:06:44	-- bdev/bdev_raid.sh@185 -- # local target=spare
00:22:51.640   17:06:44	-- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:22:51.640    17:06:44	-- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:22:51.640    17:06:44	-- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:22:51.898   17:06:44	-- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:22:51.898    "name": "raid_bdev1",
00:22:51.898    "uuid": "4d01087b-3e41-4698-bd06-67c79297b49b",
00:22:51.898    "strip_size_kb": 64,
00:22:51.898    "state": "online",
00:22:51.898    "raid_level": "raid5f",
00:22:51.898    "superblock": true,
00:22:51.898    "num_base_bdevs": 3,
00:22:51.898    "num_base_bdevs_discovered": 3,
00:22:51.898    "num_base_bdevs_operational": 3,
00:22:51.898    "process": {
00:22:51.898      "type": "rebuild",
00:22:51.898      "target": "spare",
00:22:51.898      "progress": {
00:22:51.898        "blocks": 24576,
00:22:51.898        "percent": 19
00:22:51.898      }
00:22:51.898    },
00:22:51.898    "base_bdevs_list": [
00:22:51.898      {
00:22:51.898        "name": "spare",
00:22:51.898        "uuid": "011df64c-04f4-54d7-bd3a-eed1d90ca32d",
00:22:51.898        "is_configured": true,
00:22:51.898        "data_offset": 2048,
00:22:51.898        "data_size": 63488
00:22:51.898      },
00:22:51.898      {
00:22:51.898        "name": "BaseBdev2",
00:22:51.898        "uuid": "d96b251d-8d9c-5202-bf31-de96bb9caec7",
00:22:51.898        "is_configured": true,
00:22:51.898        "data_offset": 2048,
00:22:51.898        "data_size": 63488
00:22:51.898      },
00:22:51.898      {
00:22:51.898        "name": "BaseBdev3",
00:22:51.898        "uuid": "44b813d6-eac6-52c5-a846-8b0bb273116f",
00:22:51.898        "is_configured": true,
00:22:51.898        "data_offset": 2048,
00:22:51.898        "data_size": 63488
00:22:51.898      }
00:22:51.898    ]
00:22:51.898  }'
00:22:51.898    17:06:44	-- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:22:51.898   17:06:44	-- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:22:51.898    17:06:44	-- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:22:52.157   17:06:44	-- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]]
00:22:52.157   17:06:44	-- bdev/bdev_raid.sh@617 -- # '[' true = true ']'
00:22:52.157   17:06:44	-- bdev/bdev_raid.sh@617 -- # '[' = false ']'
00:22:52.157  /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 617: [: =: unary operator expected
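The "[: =: unary operator expected" error above is a real bug in the test script: at bdev_raid.sh line 617 an unset or empty variable is expanded unquoted inside a single-bracket test, so '[' sees only '= false ]'. The run continues past it because the failed test merely evaluates nonzero and execution falls through to line 642, but the defensive fix is standard bash (the actual variable name at line 617 is not visible in this log, so "flag" below is a stand-in):

    flag=""
    # [ $flag = false ]            # reproduces: [: =: unary operator expected
    [ "$flag" = false ] || true    # fix 1: quote the expansion
    [[ $flag == false ]] || true   # fix 2: [[ ]] does not word-split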
00:22:52.157   17:06:44	-- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=3
00:22:52.157   17:06:44	-- bdev/bdev_raid.sh@644 -- # '[' raid5f = raid1 ']'
00:22:52.157   17:06:44	-- bdev/bdev_raid.sh@657 -- # local timeout=605
00:22:52.157   17:06:44	-- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout ))
00:22:52.157   17:06:44	-- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:22:52.157   17:06:44	-- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:22:52.157   17:06:44	-- bdev/bdev_raid.sh@184 -- # local process_type=rebuild
00:22:52.157   17:06:44	-- bdev/bdev_raid.sh@185 -- # local target=spare
00:22:52.157   17:06:44	-- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:22:52.157    17:06:44	-- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:22:52.157    17:06:44	-- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:22:52.416   17:06:45	-- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:22:52.416    "name": "raid_bdev1",
00:22:52.416    "uuid": "4d01087b-3e41-4698-bd06-67c79297b49b",
00:22:52.416    "strip_size_kb": 64,
00:22:52.416    "state": "online",
00:22:52.416    "raid_level": "raid5f",
00:22:52.416    "superblock": true,
00:22:52.416    "num_base_bdevs": 3,
00:22:52.416    "num_base_bdevs_discovered": 3,
00:22:52.416    "num_base_bdevs_operational": 3,
00:22:52.416    "process": {
00:22:52.416      "type": "rebuild",
00:22:52.416      "target": "spare",
00:22:52.416      "progress": {
00:22:52.416        "blocks": 32768,
00:22:52.416        "percent": 25
00:22:52.416      }
00:22:52.416    },
00:22:52.416    "base_bdevs_list": [
00:22:52.416      {
00:22:52.416        "name": "spare",
00:22:52.416        "uuid": "011df64c-04f4-54d7-bd3a-eed1d90ca32d",
00:22:52.416        "is_configured": true,
00:22:52.416        "data_offset": 2048,
00:22:52.416        "data_size": 63488
00:22:52.416      },
00:22:52.416      {
00:22:52.416        "name": "BaseBdev2",
00:22:52.416        "uuid": "d96b251d-8d9c-5202-bf31-de96bb9caec7",
00:22:52.416        "is_configured": true,
00:22:52.416        "data_offset": 2048,
00:22:52.416        "data_size": 63488
00:22:52.416      },
00:22:52.416      {
00:22:52.416        "name": "BaseBdev3",
00:22:52.416        "uuid": "44b813d6-eac6-52c5-a846-8b0bb273116f",
00:22:52.416        "is_configured": true,
00:22:52.416        "data_offset": 2048,
00:22:52.416        "data_size": 63488
00:22:52.416      }
00:22:52.416    ]
00:22:52.416  }'
00:22:52.416    17:06:45	-- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:22:52.416   17:06:45	-- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:22:52.416    17:06:45	-- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:22:52.416   17:06:45	-- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]]
00:22:52.416   17:06:45	-- bdev/bdev_raid.sh@662 -- # sleep 1
00:22:53.353   17:06:46	-- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout ))
00:22:53.353   17:06:46	-- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:22:53.353   17:06:46	-- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:22:53.353   17:06:46	-- bdev/bdev_raid.sh@184 -- # local process_type=rebuild
00:22:53.353   17:06:46	-- bdev/bdev_raid.sh@185 -- # local target=spare
00:22:53.353   17:06:46	-- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:22:53.353    17:06:46	-- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:22:53.353    17:06:46	-- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:22:53.612   17:06:46	-- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:22:53.612    "name": "raid_bdev1",
00:22:53.612    "uuid": "4d01087b-3e41-4698-bd06-67c79297b49b",
00:22:53.612    "strip_size_kb": 64,
00:22:53.612    "state": "online",
00:22:53.612    "raid_level": "raid5f",
00:22:53.612    "superblock": true,
00:22:53.612    "num_base_bdevs": 3,
00:22:53.612    "num_base_bdevs_discovered": 3,
00:22:53.612    "num_base_bdevs_operational": 3,
00:22:53.612    "process": {
00:22:53.612      "type": "rebuild",
00:22:53.612      "target": "spare",
00:22:53.612      "progress": {
00:22:53.612        "blocks": 59392,
00:22:53.612        "percent": 46
00:22:53.612      }
00:22:53.612    },
00:22:53.612    "base_bdevs_list": [
00:22:53.612      {
00:22:53.612        "name": "spare",
00:22:53.612        "uuid": "011df64c-04f4-54d7-bd3a-eed1d90ca32d",
00:22:53.612        "is_configured": true,
00:22:53.612        "data_offset": 2048,
00:22:53.612        "data_size": 63488
00:22:53.612      },
00:22:53.612      {
00:22:53.612        "name": "BaseBdev2",
00:22:53.612        "uuid": "d96b251d-8d9c-5202-bf31-de96bb9caec7",
00:22:53.612        "is_configured": true,
00:22:53.612        "data_offset": 2048,
00:22:53.612        "data_size": 63488
00:22:53.612      },
00:22:53.612      {
00:22:53.612        "name": "BaseBdev3",
00:22:53.612        "uuid": "44b813d6-eac6-52c5-a846-8b0bb273116f",
00:22:53.612        "is_configured": true,
00:22:53.612        "data_offset": 2048,
00:22:53.612        "data_size": 63488
00:22:53.612      }
00:22:53.612    ]
00:22:53.612  }'
00:22:53.612    17:06:46	-- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:22:53.612   17:06:46	-- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:22:53.612    17:06:46	-- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:22:53.612   17:06:46	-- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]]
00:22:53.612   17:06:46	-- bdev/bdev_raid.sh@662 -- # sleep 1
00:22:54.988   17:06:47	-- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout ))
00:22:54.988   17:06:47	-- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:22:54.988   17:06:47	-- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:22:54.988   17:06:47	-- bdev/bdev_raid.sh@184 -- # local process_type=rebuild
00:22:54.988   17:06:47	-- bdev/bdev_raid.sh@185 -- # local target=spare
00:22:54.988   17:06:47	-- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:22:54.988    17:06:47	-- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:22:54.988    17:06:47	-- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:22:54.988   17:06:47	-- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:22:54.988    "name": "raid_bdev1",
00:22:54.988    "uuid": "4d01087b-3e41-4698-bd06-67c79297b49b",
00:22:54.988    "strip_size_kb": 64,
00:22:54.988    "state": "online",
00:22:54.988    "raid_level": "raid5f",
00:22:54.988    "superblock": true,
00:22:54.988    "num_base_bdevs": 3,
00:22:54.988    "num_base_bdevs_discovered": 3,
00:22:54.988    "num_base_bdevs_operational": 3,
00:22:54.988    "process": {
00:22:54.988      "type": "rebuild",
00:22:54.988      "target": "spare",
00:22:54.988      "progress": {
00:22:54.988        "blocks": 86016,
00:22:54.988        "percent": 67
00:22:54.988      }
00:22:54.988    },
00:22:54.988    "base_bdevs_list": [
00:22:54.988      {
00:22:54.988        "name": "spare",
00:22:54.988        "uuid": "011df64c-04f4-54d7-bd3a-eed1d90ca32d",
00:22:54.988        "is_configured": true,
00:22:54.988        "data_offset": 2048,
00:22:54.988        "data_size": 63488
00:22:54.988      },
00:22:54.988      {
00:22:54.988        "name": "BaseBdev2",
00:22:54.988        "uuid": "d96b251d-8d9c-5202-bf31-de96bb9caec7",
00:22:54.988        "is_configured": true,
00:22:54.988        "data_offset": 2048,
00:22:54.988        "data_size": 63488
00:22:54.988      },
00:22:54.988      {
00:22:54.988        "name": "BaseBdev3",
00:22:54.988        "uuid": "44b813d6-eac6-52c5-a846-8b0bb273116f",
00:22:54.988        "is_configured": true,
00:22:54.988        "data_offset": 2048,
00:22:54.988        "data_size": 63488
00:22:54.988      }
00:22:54.988    ]
00:22:54.988  }'
00:22:54.988    17:06:47	-- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:22:54.988   17:06:47	-- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:22:54.988    17:06:47	-- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:22:54.988   17:06:47	-- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]]
00:22:54.988   17:06:47	-- bdev/bdev_raid.sh@662 -- # sleep 1
00:22:56.366   17:06:48	-- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout ))
00:22:56.366   17:06:48	-- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:22:56.367   17:06:48	-- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:22:56.367   17:06:48	-- bdev/bdev_raid.sh@184 -- # local process_type=rebuild
00:22:56.367   17:06:48	-- bdev/bdev_raid.sh@185 -- # local target=spare
00:22:56.367   17:06:48	-- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:22:56.367    17:06:48	-- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:22:56.367    17:06:48	-- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:22:56.367   17:06:49	-- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:22:56.367    "name": "raid_bdev1",
00:22:56.367    "uuid": "4d01087b-3e41-4698-bd06-67c79297b49b",
00:22:56.367    "strip_size_kb": 64,
00:22:56.367    "state": "online",
00:22:56.367    "raid_level": "raid5f",
00:22:56.367    "superblock": true,
00:22:56.367    "num_base_bdevs": 3,
00:22:56.367    "num_base_bdevs_discovered": 3,
00:22:56.367    "num_base_bdevs_operational": 3,
00:22:56.367    "process": {
00:22:56.367      "type": "rebuild",
00:22:56.367      "target": "spare",
00:22:56.367      "progress": {
00:22:56.367        "blocks": 114688,
00:22:56.367        "percent": 90
00:22:56.367      }
00:22:56.367    },
00:22:56.367    "base_bdevs_list": [
00:22:56.367      {
00:22:56.367        "name": "spare",
00:22:56.367        "uuid": "011df64c-04f4-54d7-bd3a-eed1d90ca32d",
00:22:56.367        "is_configured": true,
00:22:56.367        "data_offset": 2048,
00:22:56.367        "data_size": 63488
00:22:56.367      },
00:22:56.367      {
00:22:56.367        "name": "BaseBdev2",
00:22:56.367        "uuid": "d96b251d-8d9c-5202-bf31-de96bb9caec7",
00:22:56.367        "is_configured": true,
00:22:56.367        "data_offset": 2048,
00:22:56.367        "data_size": 63488
00:22:56.367      },
00:22:56.367      {
00:22:56.367        "name": "BaseBdev3",
00:22:56.367        "uuid": "44b813d6-eac6-52c5-a846-8b0bb273116f",
00:22:56.367        "is_configured": true,
00:22:56.367        "data_offset": 2048,
00:22:56.367        "data_size": 63488
00:22:56.367      }
00:22:56.367    ]
00:22:56.367  }'
00:22:56.367    17:06:49	-- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:22:56.367   17:06:49	-- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:22:56.367    17:06:49	-- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:22:56.367   17:06:49	-- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]]
00:22:56.367   17:06:49	-- bdev/bdev_raid.sh@662 -- # sleep 1
00:22:56.936  [2024-11-19 17:06:49.648725] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1
00:22:56.936  [2024-11-19 17:06:49.648819] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1
00:22:56.936  [2024-11-19 17:06:49.649011] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:22:57.505   17:06:50	-- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout ))
00:22:57.505   17:06:50	-- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:22:57.505   17:06:50	-- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:22:57.505   17:06:50	-- bdev/bdev_raid.sh@184 -- # local process_type=rebuild
00:22:57.505   17:06:50	-- bdev/bdev_raid.sh@185 -- # local target=spare
00:22:57.505   17:06:50	-- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:22:57.505    17:06:50	-- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:22:57.505    17:06:50	-- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:22:57.764   17:06:50	-- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:22:57.764    "name": "raid_bdev1",
00:22:57.764    "uuid": "4d01087b-3e41-4698-bd06-67c79297b49b",
00:22:57.764    "strip_size_kb": 64,
00:22:57.764    "state": "online",
00:22:57.764    "raid_level": "raid5f",
00:22:57.764    "superblock": true,
00:22:57.764    "num_base_bdevs": 3,
00:22:57.764    "num_base_bdevs_discovered": 3,
00:22:57.764    "num_base_bdevs_operational": 3,
00:22:57.764    "base_bdevs_list": [
00:22:57.764      {
00:22:57.764        "name": "spare",
00:22:57.764        "uuid": "011df64c-04f4-54d7-bd3a-eed1d90ca32d",
00:22:57.764        "is_configured": true,
00:22:57.764        "data_offset": 2048,
00:22:57.764        "data_size": 63488
00:22:57.764      },
00:22:57.764      {
00:22:57.764        "name": "BaseBdev2",
00:22:57.764        "uuid": "d96b251d-8d9c-5202-bf31-de96bb9caec7",
00:22:57.764        "is_configured": true,
00:22:57.764        "data_offset": 2048,
00:22:57.764        "data_size": 63488
00:22:57.764      },
00:22:57.764      {
00:22:57.764        "name": "BaseBdev3",
00:22:57.764        "uuid": "44b813d6-eac6-52c5-a846-8b0bb273116f",
00:22:57.764        "is_configured": true,
00:22:57.764        "data_offset": 2048,
00:22:57.764        "data_size": 63488
00:22:57.764      }
00:22:57.764    ]
00:22:57.764  }'
00:22:57.764    17:06:50	-- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:22:57.764   17:06:50	-- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]]
00:22:57.764    17:06:50	-- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:22:57.764   17:06:50	-- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]]
00:22:57.764   17:06:50	-- bdev/bdev_raid.sh@660 -- # break
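The break above ends the polling loop traced at bdev_raid.sh lines 657-662: bounded by a 605-second timeout, it re-reads the raid bdev once per second and stops as soon as the rebuild process object vanishes. A standalone sketch of that loop shape (the real script asserts inside verify_raid_bdev_process rather than inlining the jq):

    timeout=605
    while (( SECONDS < timeout )); do
      type=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
               bdev_raid_get_bdevs all \
             | jq -r '.[] | select(.name == "raid_bdev1") | .process.type // "none"')
      [[ $type == rebuild ]] || break   # rebuild done (or never started)
      sleep 1
    done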
00:22:57.764   17:06:50	-- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none
00:22:57.764   17:06:50	-- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:22:57.764   17:06:50	-- bdev/bdev_raid.sh@184 -- # local process_type=none
00:22:57.764   17:06:50	-- bdev/bdev_raid.sh@185 -- # local target=none
00:22:57.764   17:06:50	-- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:22:57.764    17:06:50	-- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:22:57.764    17:06:50	-- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:22:58.023   17:06:50	-- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:22:58.023    "name": "raid_bdev1",
00:22:58.023    "uuid": "4d01087b-3e41-4698-bd06-67c79297b49b",
00:22:58.023    "strip_size_kb": 64,
00:22:58.023    "state": "online",
00:22:58.023    "raid_level": "raid5f",
00:22:58.023    "superblock": true,
00:22:58.023    "num_base_bdevs": 3,
00:22:58.023    "num_base_bdevs_discovered": 3,
00:22:58.023    "num_base_bdevs_operational": 3,
00:22:58.023    "base_bdevs_list": [
00:22:58.023      {
00:22:58.023        "name": "spare",
00:22:58.023        "uuid": "011df64c-04f4-54d7-bd3a-eed1d90ca32d",
00:22:58.023        "is_configured": true,
00:22:58.023        "data_offset": 2048,
00:22:58.023        "data_size": 63488
00:22:58.023      },
00:22:58.023      {
00:22:58.023        "name": "BaseBdev2",
00:22:58.023        "uuid": "d96b251d-8d9c-5202-bf31-de96bb9caec7",
00:22:58.023        "is_configured": true,
00:22:58.024        "data_offset": 2048,
00:22:58.024        "data_size": 63488
00:22:58.024      },
00:22:58.024      {
00:22:58.024        "name": "BaseBdev3",
00:22:58.024        "uuid": "44b813d6-eac6-52c5-a846-8b0bb273116f",
00:22:58.024        "is_configured": true,
00:22:58.024        "data_offset": 2048,
00:22:58.024        "data_size": 63488
00:22:58.024      }
00:22:58.024    ]
00:22:58.024  }'
00:22:58.024    17:06:50	-- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:22:58.024   17:06:50	-- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]]
00:22:58.024    17:06:50	-- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:22:58.024   17:06:50	-- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]]
00:22:58.024   17:06:50	-- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3
00:22:58.024   17:06:50	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:22:58.024   17:06:50	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:22:58.024   17:06:50	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f
00:22:58.024   17:06:50	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:22:58.024   17:06:50	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:22:58.024   17:06:50	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:22:58.024   17:06:50	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:22:58.024   17:06:50	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:22:58.024   17:06:50	-- bdev/bdev_raid.sh@125 -- # local tmp
00:22:58.283    17:06:50	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:22:58.283    17:06:50	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:22:58.542   17:06:51	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:22:58.542    "name": "raid_bdev1",
00:22:58.542    "uuid": "4d01087b-3e41-4698-bd06-67c79297b49b",
00:22:58.542    "strip_size_kb": 64,
00:22:58.542    "state": "online",
00:22:58.542    "raid_level": "raid5f",
00:22:58.542    "superblock": true,
00:22:58.542    "num_base_bdevs": 3,
00:22:58.542    "num_base_bdevs_discovered": 3,
00:22:58.542    "num_base_bdevs_operational": 3,
00:22:58.542    "base_bdevs_list": [
00:22:58.542      {
00:22:58.542        "name": "spare",
00:22:58.542        "uuid": "011df64c-04f4-54d7-bd3a-eed1d90ca32d",
00:22:58.543        "is_configured": true,
00:22:58.543        "data_offset": 2048,
00:22:58.543        "data_size": 63488
00:22:58.543      },
00:22:58.543      {
00:22:58.543        "name": "BaseBdev2",
00:22:58.543        "uuid": "d96b251d-8d9c-5202-bf31-de96bb9caec7",
00:22:58.543        "is_configured": true,
00:22:58.543        "data_offset": 2048,
00:22:58.543        "data_size": 63488
00:22:58.543      },
00:22:58.543      {
00:22:58.543        "name": "BaseBdev3",
00:22:58.543        "uuid": "44b813d6-eac6-52c5-a846-8b0bb273116f",
00:22:58.543        "is_configured": true,
00:22:58.543        "data_offset": 2048,
00:22:58.543        "data_size": 63488
00:22:58.543      }
00:22:58.543    ]
00:22:58.543  }'
00:22:58.543   17:06:51	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:22:58.543   17:06:51	-- common/autotest_common.sh@10 -- # set +x
00:22:59.111   17:06:51	-- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1
00:22:59.111  [2024-11-19 17:06:51.959285] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:22:59.111  [2024-11-19 17:06:51.959332] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:22:59.111  [2024-11-19 17:06:51.959439] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:22:59.111  [2024-11-19 17:06:51.959518] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:22:59.111  [2024-11-19 17:06:51.959528] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008a80 name raid_bdev1, state offline
00:22:59.396    17:06:51	-- bdev/bdev_raid.sh@671 -- # jq length
00:22:59.396    17:06:51	-- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:22:59.396   17:06:52	-- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]]
00:22:59.396   17:06:52	-- bdev/bdev_raid.sh@673 -- # '[' false = true ']'
00:22:59.396   17:06:52	-- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1'
00:22:59.396   17:06:52	-- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock
00:22:59.396   17:06:52	-- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare')
00:22:59.396   17:06:52	-- bdev/nbd_common.sh@10 -- # local bdev_list
00:22:59.396   17:06:52	-- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:22:59.396   17:06:52	-- bdev/nbd_common.sh@11 -- # local nbd_list
00:22:59.396   17:06:52	-- bdev/nbd_common.sh@12 -- # local i
00:22:59.396   17:06:52	-- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:22:59.396   17:06:52	-- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:22:59.396   17:06:52	-- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0
00:22:59.658  /dev/nbd0
00:22:59.658    17:06:52	-- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:22:59.658   17:06:52	-- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:22:59.658   17:06:52	-- common/autotest_common.sh@866 -- # local nbd_name=nbd0
00:22:59.658   17:06:52	-- common/autotest_common.sh@867 -- # local i
00:22:59.658   17:06:52	-- common/autotest_common.sh@869 -- # (( i = 1 ))
00:22:59.658   17:06:52	-- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:22:59.659   17:06:52	-- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions
00:22:59.659   17:06:52	-- common/autotest_common.sh@871 -- # break
00:22:59.659   17:06:52	-- common/autotest_common.sh@882 -- # (( i = 1 ))
00:22:59.659   17:06:52	-- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:22:59.659   17:06:52	-- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:22:59.659  1+0 records in
00:22:59.659  1+0 records out
00:22:59.659  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000873805 s, 4.7 MB/s
00:22:59.659    17:06:52	-- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:22:59.659   17:06:52	-- common/autotest_common.sh@884 -- # size=4096
00:22:59.659   17:06:52	-- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:22:59.917   17:06:52	-- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:22:59.917   17:06:52	-- common/autotest_common.sh@887 -- # return 0
00:22:59.917   17:06:52	-- bdev/nbd_common.sh@14 -- # (( i++ ))
00:22:59.917   17:06:52	-- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:22:59.917   17:06:52	-- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1
00:23:00.176  /dev/nbd1
00:23:00.176    17:06:52	-- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:23:00.176   17:06:52	-- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:23:00.176   17:06:52	-- common/autotest_common.sh@866 -- # local nbd_name=nbd1
00:23:00.176   17:06:52	-- common/autotest_common.sh@867 -- # local i
00:23:00.176   17:06:52	-- common/autotest_common.sh@869 -- # (( i = 1 ))
00:23:00.176   17:06:52	-- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:23:00.176   17:06:52	-- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions
00:23:00.176   17:06:52	-- common/autotest_common.sh@871 -- # break
00:23:00.176   17:06:52	-- common/autotest_common.sh@882 -- # (( i = 1 ))
00:23:00.176   17:06:52	-- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:23:00.176   17:06:52	-- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:23:00.176  1+0 records in
00:23:00.176  1+0 records out
00:23:00.176  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000393935 s, 10.4 MB/s
00:23:00.176    17:06:52	-- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:23:00.176   17:06:52	-- common/autotest_common.sh@884 -- # size=4096
00:23:00.176   17:06:52	-- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:23:00.176   17:06:52	-- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:23:00.176   17:06:52	-- common/autotest_common.sh@887 -- # return 0
00:23:00.176   17:06:52	-- bdev/nbd_common.sh@14 -- # (( i++ ))
00:23:00.176   17:06:52	-- bdev/nbd_common.sh@14 -- # (( i < 2 ))
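Both nbd devices above are brought up with the same readiness probe: poll /proc/partitions up to 20 times for the device name, then issue one direct 4 KiB read to prove the device answers I/O. A simplified standalone sketch (the real waitfornbd in autotest_common.sh reads into a scratch file and stat-checks its size instead of discarding to /dev/null, and the retry delay below is a guess, not visible in this log):

    waitfornbd() {
      local nbd_name=$1 i
      for (( i = 1; i <= 20; i++ )); do
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1
      done
      # one direct read confirms the kernel nbd device is actually serving I/O
      dd if=/dev/"$nbd_name" of=/dev/null bs=4096 count=1 iflag=direct
    }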
00:23:00.176   17:06:52	-- bdev/bdev_raid.sh@688 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1
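The -i 1048576 passed to cmp skips the superblock region on both devices before comparing BaseBdev1 against the rebuilt spare: per the JSON dumps above, data starts at data_offset 2048 blocks, and with the 512-byte blocklen reported in this log that is exactly 1 MiB:

    echo $(( 2048 * 512 ))   # 1048576, the byte offset cmp starts at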
00:23:00.176   17:06:52	-- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1'
00:23:00.176   17:06:52	-- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock
00:23:00.176   17:06:52	-- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:23:00.176   17:06:52	-- bdev/nbd_common.sh@50 -- # local nbd_list
00:23:00.176   17:06:52	-- bdev/nbd_common.sh@51 -- # local i
00:23:00.176   17:06:52	-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:23:00.176   17:06:52	-- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0
00:23:00.435    17:06:53	-- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:23:00.435   17:06:53	-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:23:00.435   17:06:53	-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:23:00.435   17:06:53	-- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:23:00.435   17:06:53	-- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:23:00.435   17:06:53	-- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:23:00.435   17:06:53	-- bdev/nbd_common.sh@41 -- # break
00:23:00.435   17:06:53	-- bdev/nbd_common.sh@45 -- # return 0
00:23:00.435   17:06:53	-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:23:00.435   17:06:53	-- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1
00:23:00.694    17:06:53	-- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:23:00.694   17:06:53	-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:23:00.694   17:06:53	-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:23:00.694   17:06:53	-- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:23:00.694   17:06:53	-- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:23:00.694   17:06:53	-- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:23:00.694   17:06:53	-- bdev/nbd_common.sh@41 -- # break
00:23:00.694   17:06:53	-- bdev/nbd_common.sh@45 -- # return 0
00:23:00.694   17:06:53	-- bdev/bdev_raid.sh@692 -- # '[' true = true ']'
00:23:00.694   17:06:53	-- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}"
00:23:00.694   17:06:53	-- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev1 ']'
00:23:00.694   17:06:53	-- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1
00:23:00.953   17:06:53	-- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1
00:23:00.953  [2024-11-19 17:06:53.776448] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc
00:23:00.953  [2024-11-19 17:06:53.776550] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:23:00.953  [2024-11-19 17:06:53.776615] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980
00:23:00.953  [2024-11-19 17:06:53.776644] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:23:00.953  [2024-11-19 17:06:53.779225] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:23:00.953  [2024-11-19 17:06:53.779308] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:23:00.953  [2024-11-19 17:06:53.779420] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev1
00:23:00.953  [2024-11-19 17:06:53.779479] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:23:00.953  BaseBdev1
00:23:00.953   17:06:53	-- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}"
00:23:00.953   17:06:53	-- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev2 ']'
00:23:00.953   17:06:53	-- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev2
00:23:01.213   17:06:54	-- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2
00:23:01.472  [2024-11-19 17:06:54.300605] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc
00:23:01.472  [2024-11-19 17:06:54.300701] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:23:01.472  [2024-11-19 17:06:54.300749] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280
00:23:01.472  [2024-11-19 17:06:54.300782] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:23:01.472  [2024-11-19 17:06:54.301278] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:23:01.472  [2024-11-19 17:06:54.301331] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2
00:23:01.472  [2024-11-19 17:06:54.301425] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev2
00:23:01.472  [2024-11-19 17:06:54.301441] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev2 (3) greater than existing raid bdev raid_bdev1 (1)
00:23:01.472  [2024-11-19 17:06:54.301451] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:23:01.472  [2024-11-19 17:06:54.301491] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009f80 name raid_bdev1, state configuring
00:23:01.472  [2024-11-19 17:06:54.301543] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:23:01.472  BaseBdev2
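The debug lines above show the superblock sequence-number rule during examine: BaseBdev2 carries seq_number 3, newer than the raid bdev provisionally assembled from BaseBdev1's superblock (seq 1), so the stale, still-configuring raid_bdev1 is deleted and re-created from the newer metadata. Schematically (variable names hypothetical; the real logic lives in bdev_raid.c, not in shell):

    existing_seq=1; incoming_seq=3
    if (( incoming_seq > existing_seq )); then
      echo "drop stale raid bdev, re-assemble from newer superblock"
    fi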
00:23:01.472   17:06:54	-- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}"
00:23:01.472   17:06:54	-- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev3 ']'
00:23:01.472   17:06:54	-- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev3
00:23:02.040   17:06:54	-- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3
00:23:02.040  [2024-11-19 17:06:54.768684] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc
00:23:02.040  [2024-11-19 17:06:54.768799] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:23:02.040  [2024-11-19 17:06:54.768844] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880
00:23:02.040  [2024-11-19 17:06:54.768879] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:23:02.040  [2024-11-19 17:06:54.769324] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:23:02.040  [2024-11-19 17:06:54.769374] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3
00:23:02.040  [2024-11-19 17:06:54.769459] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev3
00:23:02.040  [2024-11-19 17:06:54.769490] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:23:02.040  BaseBdev3
00:23:02.040   17:06:54	-- bdev/bdev_raid.sh@701 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare
00:23:02.299   17:06:55	-- bdev/bdev_raid.sh@702 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare
00:23:02.558  [2024-11-19 17:06:55.240750] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay
00:23:02.558  [2024-11-19 17:06:55.240858] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:23:02.558  [2024-11-19 17:06:55.240901] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80
00:23:02.558  [2024-11-19 17:06:55.240929] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:23:02.558  [2024-11-19 17:06:55.241417] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:23:02.558  [2024-11-19 17:06:55.241470] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare
00:23:02.558  [2024-11-19 17:06:55.241569] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev spare
00:23:02.558  [2024-11-19 17:06:55.241599] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:23:02.558  spare
00:23:02.558   17:06:55	-- bdev/bdev_raid.sh@704 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3
00:23:02.558   17:06:55	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:23:02.558   17:06:55	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:23:02.558   17:06:55	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f
00:23:02.558   17:06:55	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:23:02.558   17:06:55	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:23:02.558   17:06:55	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:23:02.558   17:06:55	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:23:02.558   17:06:55	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:23:02.558   17:06:55	-- bdev/bdev_raid.sh@125 -- # local tmp
00:23:02.558    17:06:55	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:23:02.558    17:06:55	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:23:02.558  [2024-11-19 17:06:55.341733] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a580
00:23:02.558  [2024-11-19 17:06:55.341782] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512
00:23:02.558  [2024-11-19 17:06:55.341988] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000044230
00:23:02.558  [2024-11-19 17:06:55.342793] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a580
00:23:02.558  [2024-11-19 17:06:55.342815] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a580
00:23:02.558  [2024-11-19 17:06:55.343030] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:23:02.817   17:06:55	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:23:02.817    "name": "raid_bdev1",
00:23:02.817    "uuid": "4d01087b-3e41-4698-bd06-67c79297b49b",
00:23:02.817    "strip_size_kb": 64,
00:23:02.817    "state": "online",
00:23:02.817    "raid_level": "raid5f",
00:23:02.817    "superblock": true,
00:23:02.817    "num_base_bdevs": 3,
00:23:02.817    "num_base_bdevs_discovered": 3,
00:23:02.817    "num_base_bdevs_operational": 3,
00:23:02.817    "base_bdevs_list": [
00:23:02.817      {
00:23:02.817        "name": "spare",
00:23:02.817        "uuid": "011df64c-04f4-54d7-bd3a-eed1d90ca32d",
00:23:02.817        "is_configured": true,
00:23:02.817        "data_offset": 2048,
00:23:02.817        "data_size": 63488
00:23:02.817      },
00:23:02.817      {
00:23:02.817        "name": "BaseBdev2",
00:23:02.817        "uuid": "d96b251d-8d9c-5202-bf31-de96bb9caec7",
00:23:02.817        "is_configured": true,
00:23:02.817        "data_offset": 2048,
00:23:02.817        "data_size": 63488
00:23:02.817      },
00:23:02.817      {
00:23:02.817        "name": "BaseBdev3",
00:23:02.817        "uuid": "44b813d6-eac6-52c5-a846-8b0bb273116f",
00:23:02.817        "is_configured": true,
00:23:02.817        "data_offset": 2048,
00:23:02.817        "data_size": 63488
00:23:02.817      }
00:23:02.817    ]
00:23:02.817  }'
00:23:02.817   17:06:55	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:23:02.817   17:06:55	-- common/autotest_common.sh@10 -- # set +x
00:23:03.385   17:06:56	-- bdev/bdev_raid.sh@705 -- # verify_raid_bdev_process raid_bdev1 none none
00:23:03.385   17:06:56	-- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:23:03.385   17:06:56	-- bdev/bdev_raid.sh@184 -- # local process_type=none
00:23:03.385   17:06:56	-- bdev/bdev_raid.sh@185 -- # local target=none
00:23:03.385   17:06:56	-- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:23:03.385    17:06:56	-- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:23:03.385    17:06:56	-- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:23:03.645   17:06:56	-- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:23:03.645    "name": "raid_bdev1",
00:23:03.645    "uuid": "4d01087b-3e41-4698-bd06-67c79297b49b",
00:23:03.645    "strip_size_kb": 64,
00:23:03.645    "state": "online",
00:23:03.645    "raid_level": "raid5f",
00:23:03.645    "superblock": true,
00:23:03.645    "num_base_bdevs": 3,
00:23:03.645    "num_base_bdevs_discovered": 3,
00:23:03.645    "num_base_bdevs_operational": 3,
00:23:03.645    "base_bdevs_list": [
00:23:03.645      {
00:23:03.645        "name": "spare",
00:23:03.645        "uuid": "011df64c-04f4-54d7-bd3a-eed1d90ca32d",
00:23:03.645        "is_configured": true,
00:23:03.645        "data_offset": 2048,
00:23:03.645        "data_size": 63488
00:23:03.645      },
00:23:03.645      {
00:23:03.645        "name": "BaseBdev2",
00:23:03.645        "uuid": "d96b251d-8d9c-5202-bf31-de96bb9caec7",
00:23:03.645        "is_configured": true,
00:23:03.645        "data_offset": 2048,
00:23:03.645        "data_size": 63488
00:23:03.645      },
00:23:03.645      {
00:23:03.645        "name": "BaseBdev3",
00:23:03.645        "uuid": "44b813d6-eac6-52c5-a846-8b0bb273116f",
00:23:03.645        "is_configured": true,
00:23:03.645        "data_offset": 2048,
00:23:03.645        "data_size": 63488
00:23:03.645      }
00:23:03.645    ]
00:23:03.645  }'
00:23:03.645    17:06:56	-- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:23:03.645   17:06:56	-- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]]
00:23:03.645    17:06:56	-- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:23:03.645   17:06:56	-- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]]
00:23:03.645    17:06:56	-- bdev/bdev_raid.sh@706 -- # jq -r '.[].base_bdevs_list[0].name'
00:23:03.645    17:06:56	-- bdev/bdev_raid.sh@706 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:23:03.903   17:06:56	-- bdev/bdev_raid.sh@706 -- # [[ spare == \s\p\a\r\e ]]
00:23:03.903   17:06:56	-- bdev/bdev_raid.sh@709 -- # killprocess 139152
00:23:04.205   17:06:56	-- common/autotest_common.sh@936 -- # '[' -z 139152 ']'
00:23:04.205   17:06:56	-- common/autotest_common.sh@940 -- # kill -0 139152
00:23:04.205    17:06:56	-- common/autotest_common.sh@941 -- # uname
00:23:04.205   17:06:56	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:23:04.205    17:06:56	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 139152
00:23:04.205  killing process with pid 139152
00:23:04.205  Received shutdown signal, test time was about 60.000000 seconds
00:23:04.205  
00:23:04.205                                                                                                  Latency(us)
00:23:04.206  
[2024-11-19T17:06:57.070Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:23:04.206  
[2024-11-19T17:06:57.070Z]  ===================================================================================================================
00:23:04.206  
[2024-11-19T17:06:57.070Z]  Total                       :                  0.00       0.00       0.00     0.00       0.00 18446744073709551616.00       0.00
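The Average column above prints 18446744073709551616.00, which is exactly 2^64; with every other counter at 0.00 this reads as a zero-sample artifact of the shutdown-time stats (no I/O completed before the kill) rather than a real latency:

    echo '2^64' | bc   # 18446744073709551616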
00:23:04.206   17:06:56	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:23:04.206   17:06:56	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:23:04.206   17:06:56	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 139152'
00:23:04.206   17:06:56	-- common/autotest_common.sh@955 -- # kill 139152
00:23:04.206   17:06:56	-- common/autotest_common.sh@960 -- # wait 139152
00:23:04.206  [2024-11-19 17:06:56.782111] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:23:04.206  [2024-11-19 17:06:56.782200] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:23:04.206  [2024-11-19 17:06:56.782282] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:23:04.206  [2024-11-19 17:06:56.782298] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a580 name raid_bdev1, state offline
00:23:04.206  [2024-11-19 17:06:56.825787] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:23:04.488   17:06:57	-- bdev/bdev_raid.sh@711 -- # return 0
00:23:04.488  
00:23:04.488  real	0m24.073s
00:23:04.488  user	0m37.651s
00:23:04.488  sys	0m3.777s
00:23:04.488   17:06:57	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:23:04.488   17:06:57	-- common/autotest_common.sh@10 -- # set +x
00:23:04.488  ************************************
00:23:04.488  END TEST raid5f_rebuild_test_sb
00:23:04.488  ************************************
00:23:04.488   17:06:57	-- bdev/bdev_raid.sh@743 -- # for n in {3..4}
00:23:04.488   17:06:57	-- bdev/bdev_raid.sh@744 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 4 false
00:23:04.488   17:06:57	-- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']'
00:23:04.488   17:06:57	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:23:04.488   17:06:57	-- common/autotest_common.sh@10 -- # set +x
00:23:04.488  ************************************
00:23:04.488  START TEST raid5f_state_function_test
00:23:04.488  ************************************
00:23:04.488   17:06:57	-- common/autotest_common.sh@1114 -- # raid_state_function_test raid5f 4 false
00:23:04.488   17:06:57	-- bdev/bdev_raid.sh@202 -- # local raid_level=raid5f
00:23:04.488   17:06:57	-- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4
00:23:04.488   17:06:57	-- bdev/bdev_raid.sh@204 -- # local superblock=false
00:23:04.488   17:06:57	-- bdev/bdev_raid.sh@205 -- # local raid_bdev
00:23:04.488    17:06:57	-- bdev/bdev_raid.sh@206 -- # (( i = 1 ))
00:23:04.488    17:06:57	-- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:23:04.488    17:06:57	-- bdev/bdev_raid.sh@206 -- # echo BaseBdev1
00:23:04.488    17:06:57	-- bdev/bdev_raid.sh@206 -- # (( i++ ))
00:23:04.488    17:06:57	-- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:23:04.488    17:06:57	-- bdev/bdev_raid.sh@206 -- # echo BaseBdev2
00:23:04.488    17:06:57	-- bdev/bdev_raid.sh@206 -- # (( i++ ))
00:23:04.488    17:06:57	-- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:23:04.488    17:06:57	-- bdev/bdev_raid.sh@206 -- # echo BaseBdev3
00:23:04.488    17:06:57	-- bdev/bdev_raid.sh@206 -- # (( i++ ))
00:23:04.488    17:06:57	-- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:23:04.488    17:06:57	-- bdev/bdev_raid.sh@206 -- # echo BaseBdev4
00:23:04.488    17:06:57	-- bdev/bdev_raid.sh@206 -- # (( i++ ))
00:23:04.488    17:06:57	-- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:23:04.488   17:06:57	-- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4')
00:23:04.488   17:06:57	-- bdev/bdev_raid.sh@206 -- # local base_bdevs
00:23:04.488   17:06:57	-- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid
00:23:04.488   17:06:57	-- bdev/bdev_raid.sh@208 -- # local strip_size
00:23:04.488   17:06:57	-- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg
00:23:04.488   17:06:57	-- bdev/bdev_raid.sh@210 -- # local superblock_create_arg
00:23:04.488   17:06:57	-- bdev/bdev_raid.sh@212 -- # '[' raid5f '!=' raid1 ']'
00:23:04.488   17:06:57	-- bdev/bdev_raid.sh@213 -- # strip_size=64
00:23:04.488   17:06:57	-- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64'
00:23:04.488   17:06:57	-- bdev/bdev_raid.sh@219 -- # '[' false = true ']'
00:23:04.488   17:06:57	-- bdev/bdev_raid.sh@222 -- # superblock_create_arg=
00:23:04.488   17:06:57	-- bdev/bdev_raid.sh@226 -- # raid_pid=139788
00:23:04.488   17:06:57	-- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 139788'
00:23:04.488  Process raid pid: 139788
00:23:04.488   17:06:57	-- bdev/bdev_raid.sh@228 -- # waitforlisten 139788 /var/tmp/spdk-raid.sock
00:23:04.488   17:06:57	-- common/autotest_common.sh@829 -- # '[' -z 139788 ']'
00:23:04.488   17:06:57	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock
00:23:04.488   17:06:57	-- common/autotest_common.sh@834 -- # local max_retries=100
00:23:04.488   17:06:57	-- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid
00:23:04.488  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...
00:23:04.488   17:06:57	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...'
00:23:04.488   17:06:57	-- common/autotest_common.sh@838 -- # xtrace_disable
00:23:04.488   17:06:57	-- common/autotest_common.sh@10 -- # set +x
00:23:04.488  [2024-11-19 17:06:57.239519] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:23:04.488  [2024-11-19 17:06:57.239772] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:23:04.750  [2024-11-19 17:06:57.402116] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:23:04.750  [2024-11-19 17:06:57.461461] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:23:04.750  [2024-11-19 17:06:57.509699] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:23:05.319   17:06:58	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:23:05.319   17:06:58	-- common/autotest_common.sh@862 -- # return 0
00:23:05.319   17:06:58	-- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
00:23:05.579  [2024-11-19 17:06:58.414958] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:23:05.579  [2024-11-19 17:06:58.415048] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:23:05.579  [2024-11-19 17:06:58.415060] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:23:05.579  [2024-11-19 17:06:58.415093] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:23:05.579  [2024-11-19 17:06:58.415101] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:23:05.579  [2024-11-19 17:06:58.415146] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:23:05.579  [2024-11-19 17:06:58.415154] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:23:05.579  [2024-11-19 17:06:58.415181] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:23:05.838   17:06:58	-- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4
00:23:05.838   17:06:58	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:23:05.838   17:06:58	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:23:05.838   17:06:58	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f
00:23:05.838   17:06:58	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:23:05.839   17:06:58	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:23:05.839   17:06:58	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:23:05.839   17:06:58	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:23:05.839   17:06:58	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:23:05.839   17:06:58	-- bdev/bdev_raid.sh@125 -- # local tmp
00:23:05.839    17:06:58	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:23:05.839    17:06:58	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:23:05.839   17:06:58	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:23:05.839    "name": "Existed_Raid",
00:23:05.839    "uuid": "00000000-0000-0000-0000-000000000000",
00:23:05.839    "strip_size_kb": 64,
00:23:05.839    "state": "configuring",
00:23:05.839    "raid_level": "raid5f",
00:23:05.839    "superblock": false,
00:23:05.839    "num_base_bdevs": 4,
00:23:05.839    "num_base_bdevs_discovered": 0,
00:23:05.839    "num_base_bdevs_operational": 4,
00:23:05.839    "base_bdevs_list": [
00:23:05.839      {
00:23:05.839        "name": "BaseBdev1",
00:23:05.839        "uuid": "00000000-0000-0000-0000-000000000000",
00:23:05.839        "is_configured": false,
00:23:05.839        "data_offset": 0,
00:23:05.839        "data_size": 0
00:23:05.839      },
00:23:05.839      {
00:23:05.839        "name": "BaseBdev2",
00:23:05.839        "uuid": "00000000-0000-0000-0000-000000000000",
00:23:05.839        "is_configured": false,
00:23:05.839        "data_offset": 0,
00:23:05.839        "data_size": 0
00:23:05.839      },
00:23:05.839      {
00:23:05.839        "name": "BaseBdev3",
00:23:05.839        "uuid": "00000000-0000-0000-0000-000000000000",
00:23:05.839        "is_configured": false,
00:23:05.839        "data_offset": 0,
00:23:05.839        "data_size": 0
00:23:05.839      },
00:23:05.839      {
00:23:05.839        "name": "BaseBdev4",
00:23:05.839        "uuid": "00000000-0000-0000-0000-000000000000",
00:23:05.839        "is_configured": false,
00:23:05.839        "data_offset": 0,
00:23:05.839        "data_size": 0
00:23:05.839      }
00:23:05.839    ]
00:23:05.839  }'
00:23:05.839   17:06:58	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:23:05.839   17:06:58	-- common/autotest_common.sh@10 -- # set +x
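[editor's note] For reference, the create-and-verify step traced above reduces to the following shell sketch. It reuses only commands visible in this run; treat it as a minimal reproduction, not the test's actual helper functions.

  # Ask SPDK to build a raid5f array over base bdevs that do not exist yet.
  # The RPC succeeds and the array is left waiting in the "configuring" state.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-raid.sock
  "$rpc" -s "$sock" bdev_raid_create -z 64 -r raid5f \
      -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
  # Confirm state=configuring with zero base bdevs discovered so far.
  "$rpc" -s "$sock" bdev_raid_get_bdevs all \
      | jq -r '.[] | select(.name == "Existed_Raid") | .state'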
00:23:06.775   17:06:59	-- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid
00:23:06.775  [2024-11-19 17:06:59.523092] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:23:06.775  [2024-11-19 17:06:59.523149] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005480 name Existed_Raid, state configuring
00:23:06.775   17:06:59	-- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
00:23:07.047  [2024-11-19 17:06:59.727155] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:23:07.047  [2024-11-19 17:06:59.727233] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:23:07.047  [2024-11-19 17:06:59.727243] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:23:07.047  [2024-11-19 17:06:59.727267] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:23:07.047  [2024-11-19 17:06:59.727275] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:23:07.047  [2024-11-19 17:06:59.727291] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:23:07.047  [2024-11-19 17:06:59.727297] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:23:07.047  [2024-11-19 17:06:59.727321] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:23:07.047   17:06:59	-- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1
00:23:07.307  [2024-11-19 17:06:59.924613] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:23:07.307  BaseBdev1
00:23:07.307   17:06:59	-- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1
00:23:07.307   17:06:59	-- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1
00:23:07.307   17:06:59	-- common/autotest_common.sh@898 -- # local bdev_timeout=
00:23:07.307   17:06:59	-- common/autotest_common.sh@899 -- # local i
00:23:07.307   17:06:59	-- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:23:07.307   17:06:59	-- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:23:07.307   17:06:59	-- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:23:07.307   17:07:00	-- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000
00:23:07.876  [
00:23:07.876    {
00:23:07.876      "name": "BaseBdev1",
00:23:07.876      "aliases": [
00:23:07.876        "3ebc0ae3-6fd5-4cf0-95cf-98c365e48a1c"
00:23:07.876      ],
00:23:07.876      "product_name": "Malloc disk",
00:23:07.876      "block_size": 512,
00:23:07.876      "num_blocks": 65536,
00:23:07.876      "uuid": "3ebc0ae3-6fd5-4cf0-95cf-98c365e48a1c",
00:23:07.876      "assigned_rate_limits": {
00:23:07.876        "rw_ios_per_sec": 0,
00:23:07.876        "rw_mbytes_per_sec": 0,
00:23:07.876        "r_mbytes_per_sec": 0,
00:23:07.876        "w_mbytes_per_sec": 0
00:23:07.876      },
00:23:07.876      "claimed": true,
00:23:07.876      "claim_type": "exclusive_write",
00:23:07.876      "zoned": false,
00:23:07.876      "supported_io_types": {
00:23:07.876        "read": true,
00:23:07.876        "write": true,
00:23:07.876        "unmap": true,
00:23:07.876        "write_zeroes": true,
00:23:07.876        "flush": true,
00:23:07.876        "reset": true,
00:23:07.876        "compare": false,
00:23:07.876        "compare_and_write": false,
00:23:07.876        "abort": true,
00:23:07.876        "nvme_admin": false,
00:23:07.876        "nvme_io": false
00:23:07.876      },
00:23:07.876      "memory_domains": [
00:23:07.876        {
00:23:07.876          "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:23:07.876          "dma_device_type": 2
00:23:07.876        }
00:23:07.876      ],
00:23:07.876      "driver_specific": {}
00:23:07.876    }
00:23:07.876  ]
00:23:07.876   17:07:00	-- common/autotest_common.sh@905 -- # return 0
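[editor's note] The waitforbdev pattern above (malloc create, examine, polled lookup) is approximately the shell below, reusing $rpc and $sock from the earlier sketch; the 2000 ms figure mirrors the default bdev_timeout visible in the trace.

  # Create a 32 MiB malloc bdev (65536 blocks of 512 bytes) as a base bdev.
  "$rpc" -s "$sock" bdev_malloc_create 32 512 -b BaseBdev1
  # Block until the bdev layer has finished examining newly created bdevs.
  "$rpc" -s "$sock" bdev_wait_for_examine
  # -t 2000 polls up to 2000 ms for the bdev to become visible.
  "$rpc" -s "$sock" bdev_get_bdevs -b BaseBdev1 -t 2000 >/dev/null \
      && echo 'BaseBdev1 ready'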
00:23:07.876   17:07:00	-- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4
00:23:07.876   17:07:00	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:23:07.876   17:07:00	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:23:07.876   17:07:00	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f
00:23:07.876   17:07:00	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:23:07.876   17:07:00	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:23:07.876   17:07:00	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:23:07.876   17:07:00	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:23:07.876   17:07:00	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:23:07.876   17:07:00	-- bdev/bdev_raid.sh@125 -- # local tmp
00:23:07.876    17:07:00	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:23:07.876    17:07:00	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:23:07.876   17:07:00	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:23:07.876    "name": "Existed_Raid",
00:23:07.876    "uuid": "00000000-0000-0000-0000-000000000000",
00:23:07.876    "strip_size_kb": 64,
00:23:07.876    "state": "configuring",
00:23:07.876    "raid_level": "raid5f",
00:23:07.876    "superblock": false,
00:23:07.876    "num_base_bdevs": 4,
00:23:07.876    "num_base_bdevs_discovered": 1,
00:23:07.876    "num_base_bdevs_operational": 4,
00:23:07.876    "base_bdevs_list": [
00:23:07.876      {
00:23:07.876        "name": "BaseBdev1",
00:23:07.876        "uuid": "3ebc0ae3-6fd5-4cf0-95cf-98c365e48a1c",
00:23:07.876        "is_configured": true,
00:23:07.876        "data_offset": 0,
00:23:07.876        "data_size": 65536
00:23:07.876      },
00:23:07.876      {
00:23:07.876        "name": "BaseBdev2",
00:23:07.876        "uuid": "00000000-0000-0000-0000-000000000000",
00:23:07.876        "is_configured": false,
00:23:07.876        "data_offset": 0,
00:23:07.876        "data_size": 0
00:23:07.876      },
00:23:07.876      {
00:23:07.876        "name": "BaseBdev3",
00:23:07.876        "uuid": "00000000-0000-0000-0000-000000000000",
00:23:07.876        "is_configured": false,
00:23:07.876        "data_offset": 0,
00:23:07.876        "data_size": 0
00:23:07.876      },
00:23:07.876      {
00:23:07.876        "name": "BaseBdev4",
00:23:07.876        "uuid": "00000000-0000-0000-0000-000000000000",
00:23:07.876        "is_configured": false,
00:23:07.876        "data_offset": 0,
00:23:07.876        "data_size": 0
00:23:07.876      }
00:23:07.876    ]
00:23:07.876  }'
00:23:07.876   17:07:00	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:23:07.876   17:07:00	-- common/autotest_common.sh@10 -- # set +x
00:23:08.443   17:07:01	-- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid
00:23:08.701  [2024-11-19 17:07:01.549009] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:23:08.701  [2024-11-19 17:07:01.549085] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005780 name Existed_Raid, state configuring
00:23:08.960   17:07:01	-- bdev/bdev_raid.sh@244 -- # '[' false = true ']'
00:23:08.960   17:07:01	-- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
00:23:08.960  [2024-11-19 17:07:01.753188] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:23:08.960  [2024-11-19 17:07:01.755572] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:23:08.960  [2024-11-19 17:07:01.755663] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:23:08.960  [2024-11-19 17:07:01.755673] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:23:08.960  [2024-11-19 17:07:01.755698] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:23:08.960  [2024-11-19 17:07:01.755706] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:23:08.960  [2024-11-19 17:07:01.755724] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:23:08.960   17:07:01	-- bdev/bdev_raid.sh@254 -- # (( i = 1 ))
00:23:08.960   17:07:01	-- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs ))
00:23:08.960   17:07:01	-- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4
00:23:08.960   17:07:01	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:23:08.960   17:07:01	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:23:08.960   17:07:01	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f
00:23:08.960   17:07:01	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:23:08.960   17:07:01	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:23:08.960   17:07:01	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:23:08.960   17:07:01	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:23:08.960   17:07:01	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:23:08.960   17:07:01	-- bdev/bdev_raid.sh@125 -- # local tmp
00:23:08.960    17:07:01	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:23:08.960    17:07:01	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:23:09.219   17:07:01	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:23:09.219    "name": "Existed_Raid",
00:23:09.219    "uuid": "00000000-0000-0000-0000-000000000000",
00:23:09.219    "strip_size_kb": 64,
00:23:09.219    "state": "configuring",
00:23:09.219    "raid_level": "raid5f",
00:23:09.219    "superblock": false,
00:23:09.219    "num_base_bdevs": 4,
00:23:09.219    "num_base_bdevs_discovered": 1,
00:23:09.219    "num_base_bdevs_operational": 4,
00:23:09.219    "base_bdevs_list": [
00:23:09.219      {
00:23:09.219        "name": "BaseBdev1",
00:23:09.219        "uuid": "3ebc0ae3-6fd5-4cf0-95cf-98c365e48a1c",
00:23:09.219        "is_configured": true,
00:23:09.219        "data_offset": 0,
00:23:09.219        "data_size": 65536
00:23:09.219      },
00:23:09.219      {
00:23:09.219        "name": "BaseBdev2",
00:23:09.219        "uuid": "00000000-0000-0000-0000-000000000000",
00:23:09.219        "is_configured": false,
00:23:09.219        "data_offset": 0,
00:23:09.219        "data_size": 0
00:23:09.219      },
00:23:09.219      {
00:23:09.219        "name": "BaseBdev3",
00:23:09.219        "uuid": "00000000-0000-0000-0000-000000000000",
00:23:09.219        "is_configured": false,
00:23:09.219        "data_offset": 0,
00:23:09.219        "data_size": 0
00:23:09.219      },
00:23:09.219      {
00:23:09.219        "name": "BaseBdev4",
00:23:09.219        "uuid": "00000000-0000-0000-0000-000000000000",
00:23:09.219        "is_configured": false,
00:23:09.219        "data_offset": 0,
00:23:09.219        "data_size": 0
00:23:09.219      }
00:23:09.219    ]
00:23:09.219  }'
00:23:09.219   17:07:01	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:23:09.219   17:07:01	-- common/autotest_common.sh@10 -- # set +x
00:23:09.787   17:07:02	-- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2
00:23:10.110  [2024-11-19 17:07:02.798390] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:23:10.110  BaseBdev2
00:23:10.110   17:07:02	-- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2
00:23:10.110   17:07:02	-- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2
00:23:10.110   17:07:02	-- common/autotest_common.sh@898 -- # local bdev_timeout=
00:23:10.110   17:07:02	-- common/autotest_common.sh@899 -- # local i
00:23:10.110   17:07:02	-- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:23:10.110   17:07:02	-- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:23:10.110   17:07:02	-- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:23:10.383   17:07:03	-- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000
00:23:10.383  [
00:23:10.383    {
00:23:10.383      "name": "BaseBdev2",
00:23:10.383      "aliases": [
00:23:10.383        "72336a53-22e4-4e31-b198-3f960fe8862f"
00:23:10.383      ],
00:23:10.383      "product_name": "Malloc disk",
00:23:10.383      "block_size": 512,
00:23:10.383      "num_blocks": 65536,
00:23:10.383      "uuid": "72336a53-22e4-4e31-b198-3f960fe8862f",
00:23:10.383      "assigned_rate_limits": {
00:23:10.383        "rw_ios_per_sec": 0,
00:23:10.383        "rw_mbytes_per_sec": 0,
00:23:10.383        "r_mbytes_per_sec": 0,
00:23:10.383        "w_mbytes_per_sec": 0
00:23:10.383      },
00:23:10.383      "claimed": true,
00:23:10.383      "claim_type": "exclusive_write",
00:23:10.383      "zoned": false,
00:23:10.383      "supported_io_types": {
00:23:10.383        "read": true,
00:23:10.383        "write": true,
00:23:10.383        "unmap": true,
00:23:10.383        "write_zeroes": true,
00:23:10.383        "flush": true,
00:23:10.383        "reset": true,
00:23:10.383        "compare": false,
00:23:10.383        "compare_and_write": false,
00:23:10.383        "abort": true,
00:23:10.384        "nvme_admin": false,
00:23:10.384        "nvme_io": false
00:23:10.384      },
00:23:10.384      "memory_domains": [
00:23:10.384        {
00:23:10.384          "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:23:10.384          "dma_device_type": 2
00:23:10.384        }
00:23:10.384      ],
00:23:10.384      "driver_specific": {}
00:23:10.384    }
00:23:10.384  ]
00:23:10.384   17:07:03	-- common/autotest_common.sh@905 -- # return 0
00:23:10.384   17:07:03	-- bdev/bdev_raid.sh@254 -- # (( i++ ))
00:23:10.384   17:07:03	-- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs ))
00:23:10.384   17:07:03	-- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4
00:23:10.384   17:07:03	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:23:10.384   17:07:03	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:23:10.384   17:07:03	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f
00:23:10.384   17:07:03	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:23:10.384   17:07:03	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:23:10.384   17:07:03	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:23:10.384   17:07:03	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:23:10.384   17:07:03	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:23:10.384   17:07:03	-- bdev/bdev_raid.sh@125 -- # local tmp
00:23:10.643    17:07:03	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:23:10.643    17:07:03	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:23:10.643   17:07:03	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:23:10.643    "name": "Existed_Raid",
00:23:10.643    "uuid": "00000000-0000-0000-0000-000000000000",
00:23:10.643    "strip_size_kb": 64,
00:23:10.643    "state": "configuring",
00:23:10.643    "raid_level": "raid5f",
00:23:10.643    "superblock": false,
00:23:10.643    "num_base_bdevs": 4,
00:23:10.643    "num_base_bdevs_discovered": 2,
00:23:10.643    "num_base_bdevs_operational": 4,
00:23:10.643    "base_bdevs_list": [
00:23:10.643      {
00:23:10.643        "name": "BaseBdev1",
00:23:10.643        "uuid": "3ebc0ae3-6fd5-4cf0-95cf-98c365e48a1c",
00:23:10.643        "is_configured": true,
00:23:10.643        "data_offset": 0,
00:23:10.643        "data_size": 65536
00:23:10.643      },
00:23:10.643      {
00:23:10.643        "name": "BaseBdev2",
00:23:10.643        "uuid": "72336a53-22e4-4e31-b198-3f960fe8862f",
00:23:10.643        "is_configured": true,
00:23:10.643        "data_offset": 0,
00:23:10.643        "data_size": 65536
00:23:10.643      },
00:23:10.643      {
00:23:10.643        "name": "BaseBdev3",
00:23:10.643        "uuid": "00000000-0000-0000-0000-000000000000",
00:23:10.643        "is_configured": false,
00:23:10.643        "data_offset": 0,
00:23:10.643        "data_size": 0
00:23:10.643      },
00:23:10.643      {
00:23:10.643        "name": "BaseBdev4",
00:23:10.643        "uuid": "00000000-0000-0000-0000-000000000000",
00:23:10.643        "is_configured": false,
00:23:10.643        "data_offset": 0,
00:23:10.643        "data_size": 0
00:23:10.643      }
00:23:10.643    ]
00:23:10.643  }'
00:23:10.643   17:07:03	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:23:10.643   17:07:03	-- common/autotest_common.sh@10 -- # set +x
00:23:11.210   17:07:04	-- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3
00:23:11.778  [2024-11-19 17:07:04.370206] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:23:11.778  BaseBdev3
00:23:11.778   17:07:04	-- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3
00:23:11.778   17:07:04	-- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3
00:23:11.778   17:07:04	-- common/autotest_common.sh@898 -- # local bdev_timeout=
00:23:11.778   17:07:04	-- common/autotest_common.sh@899 -- # local i
00:23:11.778   17:07:04	-- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:23:11.778   17:07:04	-- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:23:11.778   17:07:04	-- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:23:11.778   17:07:04	-- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000
00:23:12.037  [
00:23:12.037    {
00:23:12.037      "name": "BaseBdev3",
00:23:12.037      "aliases": [
00:23:12.037        "59076dc8-8c5d-4bd7-a4ab-27ac1cc25ae7"
00:23:12.037      ],
00:23:12.037      "product_name": "Malloc disk",
00:23:12.037      "block_size": 512,
00:23:12.037      "num_blocks": 65536,
00:23:12.037      "uuid": "59076dc8-8c5d-4bd7-a4ab-27ac1cc25ae7",
00:23:12.037      "assigned_rate_limits": {
00:23:12.037        "rw_ios_per_sec": 0,
00:23:12.037        "rw_mbytes_per_sec": 0,
00:23:12.037        "r_mbytes_per_sec": 0,
00:23:12.037        "w_mbytes_per_sec": 0
00:23:12.037      },
00:23:12.037      "claimed": true,
00:23:12.037      "claim_type": "exclusive_write",
00:23:12.037      "zoned": false,
00:23:12.037      "supported_io_types": {
00:23:12.037        "read": true,
00:23:12.037        "write": true,
00:23:12.037        "unmap": true,
00:23:12.037        "write_zeroes": true,
00:23:12.037        "flush": true,
00:23:12.037        "reset": true,
00:23:12.037        "compare": false,
00:23:12.037        "compare_and_write": false,
00:23:12.037        "abort": true,
00:23:12.037        "nvme_admin": false,
00:23:12.037        "nvme_io": false
00:23:12.037      },
00:23:12.037      "memory_domains": [
00:23:12.037        {
00:23:12.037          "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:23:12.037          "dma_device_type": 2
00:23:12.037        }
00:23:12.037      ],
00:23:12.037      "driver_specific": {}
00:23:12.037    }
00:23:12.037  ]
00:23:12.037   17:07:04	-- common/autotest_common.sh@905 -- # return 0
00:23:12.037   17:07:04	-- bdev/bdev_raid.sh@254 -- # (( i++ ))
00:23:12.037   17:07:04	-- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs ))
00:23:12.037   17:07:04	-- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4
00:23:12.037   17:07:04	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:23:12.037   17:07:04	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:23:12.037   17:07:04	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f
00:23:12.037   17:07:04	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:23:12.037   17:07:04	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:23:12.037   17:07:04	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:23:12.037   17:07:04	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:23:12.037   17:07:04	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:23:12.037   17:07:04	-- bdev/bdev_raid.sh@125 -- # local tmp
00:23:12.037    17:07:04	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:23:12.296    17:07:04	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:23:12.555   17:07:05	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:23:12.555    "name": "Existed_Raid",
00:23:12.555    "uuid": "00000000-0000-0000-0000-000000000000",
00:23:12.555    "strip_size_kb": 64,
00:23:12.555    "state": "configuring",
00:23:12.555    "raid_level": "raid5f",
00:23:12.555    "superblock": false,
00:23:12.555    "num_base_bdevs": 4,
00:23:12.555    "num_base_bdevs_discovered": 3,
00:23:12.555    "num_base_bdevs_operational": 4,
00:23:12.555    "base_bdevs_list": [
00:23:12.555      {
00:23:12.555        "name": "BaseBdev1",
00:23:12.555        "uuid": "3ebc0ae3-6fd5-4cf0-95cf-98c365e48a1c",
00:23:12.555        "is_configured": true,
00:23:12.555        "data_offset": 0,
00:23:12.555        "data_size": 65536
00:23:12.555      },
00:23:12.555      {
00:23:12.555        "name": "BaseBdev2",
00:23:12.555        "uuid": "72336a53-22e4-4e31-b198-3f960fe8862f",
00:23:12.555        "is_configured": true,
00:23:12.555        "data_offset": 0,
00:23:12.555        "data_size": 65536
00:23:12.555      },
00:23:12.555      {
00:23:12.555        "name": "BaseBdev3",
00:23:12.555        "uuid": "59076dc8-8c5d-4bd7-a4ab-27ac1cc25ae7",
00:23:12.555        "is_configured": true,
00:23:12.555        "data_offset": 0,
00:23:12.555        "data_size": 65536
00:23:12.555      },
00:23:12.555      {
00:23:12.555        "name": "BaseBdev4",
00:23:12.555        "uuid": "00000000-0000-0000-0000-000000000000",
00:23:12.555        "is_configured": false,
00:23:12.555        "data_offset": 0,
00:23:12.555        "data_size": 0
00:23:12.555      }
00:23:12.555    ]
00:23:12.555  }'
00:23:12.555   17:07:05	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:23:12.555   17:07:05	-- common/autotest_common.sh@10 -- # set +x
00:23:13.122   17:07:05	-- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4
00:23:13.381  [2024-11-19 17:07:05.990180] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed
00:23:13.381  [2024-11-19 17:07:05.990262] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006080
00:23:13.381  [2024-11-19 17:07:05.990271] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512
00:23:13.381  [2024-11-19 17:07:05.990430] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002120
00:23:13.381  [2024-11-19 17:07:05.991252] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006080
00:23:13.381  [2024-11-19 17:07:05.991276] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006080
00:23:13.381  [2024-11-19 17:07:05.991562] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:23:13.381  BaseBdev4
00:23:13.381   17:07:06	-- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4
00:23:13.381   17:07:06	-- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4
00:23:13.381   17:07:06	-- common/autotest_common.sh@898 -- # local bdev_timeout=
00:23:13.381   17:07:06	-- common/autotest_common.sh@899 -- # local i
00:23:13.381   17:07:06	-- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:23:13.381   17:07:06	-- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:23:13.381   17:07:06	-- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:23:13.381   17:07:06	-- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000
00:23:13.640  [
00:23:13.640    {
00:23:13.640      "name": "BaseBdev4",
00:23:13.640      "aliases": [
00:23:13.640        "55836d9d-6cf3-4a39-ab5b-20427d7f8cea"
00:23:13.640      ],
00:23:13.640      "product_name": "Malloc disk",
00:23:13.640      "block_size": 512,
00:23:13.640      "num_blocks": 65536,
00:23:13.640      "uuid": "55836d9d-6cf3-4a39-ab5b-20427d7f8cea",
00:23:13.640      "assigned_rate_limits": {
00:23:13.640        "rw_ios_per_sec": 0,
00:23:13.640        "rw_mbytes_per_sec": 0,
00:23:13.640        "r_mbytes_per_sec": 0,
00:23:13.640        "w_mbytes_per_sec": 0
00:23:13.640      },
00:23:13.640      "claimed": true,
00:23:13.640      "claim_type": "exclusive_write",
00:23:13.640      "zoned": false,
00:23:13.640      "supported_io_types": {
00:23:13.640        "read": true,
00:23:13.640        "write": true,
00:23:13.640        "unmap": true,
00:23:13.640        "write_zeroes": true,
00:23:13.640        "flush": true,
00:23:13.640        "reset": true,
00:23:13.640        "compare": false,
00:23:13.640        "compare_and_write": false,
00:23:13.640        "abort": true,
00:23:13.640        "nvme_admin": false,
00:23:13.640        "nvme_io": false
00:23:13.640      },
00:23:13.640      "memory_domains": [
00:23:13.640        {
00:23:13.640          "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:23:13.640          "dma_device_type": 2
00:23:13.640        }
00:23:13.640      ],
00:23:13.640      "driver_specific": {}
00:23:13.640    }
00:23:13.640  ]
00:23:13.900   17:07:06	-- common/autotest_common.sh@905 -- # return 0
00:23:13.900   17:07:06	-- bdev/bdev_raid.sh@254 -- # (( i++ ))
00:23:13.900   17:07:06	-- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs ))
00:23:13.900   17:07:06	-- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4
00:23:13.900   17:07:06	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:23:13.900   17:07:06	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:23:13.900   17:07:06	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f
00:23:13.900   17:07:06	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:23:13.900   17:07:06	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:23:13.900   17:07:06	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:23:13.900   17:07:06	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:23:13.900   17:07:06	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:23:13.900   17:07:06	-- bdev/bdev_raid.sh@125 -- # local tmp
00:23:13.900    17:07:06	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:23:13.900    17:07:06	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:23:13.900   17:07:06	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:23:13.900    "name": "Existed_Raid",
00:23:13.900    "uuid": "0a5b2b7b-b414-4feb-ae2a-cc63f73fc3be",
00:23:13.900    "strip_size_kb": 64,
00:23:13.900    "state": "online",
00:23:13.900    "raid_level": "raid5f",
00:23:13.900    "superblock": false,
00:23:13.900    "num_base_bdevs": 4,
00:23:13.900    "num_base_bdevs_discovered": 4,
00:23:13.900    "num_base_bdevs_operational": 4,
00:23:13.900    "base_bdevs_list": [
00:23:13.900      {
00:23:13.900        "name": "BaseBdev1",
00:23:13.900        "uuid": "3ebc0ae3-6fd5-4cf0-95cf-98c365e48a1c",
00:23:13.900        "is_configured": true,
00:23:13.900        "data_offset": 0,
00:23:13.900        "data_size": 65536
00:23:13.900      },
00:23:13.900      {
00:23:13.900        "name": "BaseBdev2",
00:23:13.900        "uuid": "72336a53-22e4-4e31-b198-3f960fe8862f",
00:23:13.900        "is_configured": true,
00:23:13.900        "data_offset": 0,
00:23:13.900        "data_size": 65536
00:23:13.900      },
00:23:13.900      {
00:23:13.900        "name": "BaseBdev3",
00:23:13.900        "uuid": "59076dc8-8c5d-4bd7-a4ab-27ac1cc25ae7",
00:23:13.900        "is_configured": true,
00:23:13.900        "data_offset": 0,
00:23:13.900        "data_size": 65536
00:23:13.900      },
00:23:13.900      {
00:23:13.900        "name": "BaseBdev4",
00:23:13.900        "uuid": "55836d9d-6cf3-4a39-ab5b-20427d7f8cea",
00:23:13.900        "is_configured": true,
00:23:13.900        "data_offset": 0,
00:23:13.900        "data_size": 65536
00:23:13.900      }
00:23:13.900    ]
00:23:13.900  }'
00:23:13.900   17:07:06	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:23:13.900   17:07:06	-- common/autotest_common.sh@10 -- # set +x
00:23:14.468   17:07:07	-- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1
00:23:14.728  [2024-11-19 17:07:07.470675] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:23:14.728   17:07:07	-- bdev/bdev_raid.sh@263 -- # local expected_state
00:23:14.728   17:07:07	-- bdev/bdev_raid.sh@264 -- # has_redundancy raid5f
00:23:14.728   17:07:07	-- bdev/bdev_raid.sh@195 -- # case $1 in
00:23:14.728   17:07:07	-- bdev/bdev_raid.sh@196 -- # return 0
00:23:14.728   17:07:07	-- bdev/bdev_raid.sh@267 -- # expected_state=online
00:23:14.728   17:07:07	-- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3
00:23:14.728   17:07:07	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:23:14.728   17:07:07	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:23:14.728   17:07:07	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f
00:23:14.728   17:07:07	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:23:14.728   17:07:07	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:23:14.728   17:07:07	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:23:14.728   17:07:07	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:23:14.729   17:07:07	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:23:14.729   17:07:07	-- bdev/bdev_raid.sh@125 -- # local tmp
00:23:14.729    17:07:07	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:23:14.729    17:07:07	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:23:14.989   17:07:07	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:23:14.989    "name": "Existed_Raid",
00:23:14.989    "uuid": "0a5b2b7b-b414-4feb-ae2a-cc63f73fc3be",
00:23:14.989    "strip_size_kb": 64,
00:23:14.989    "state": "online",
00:23:14.989    "raid_level": "raid5f",
00:23:14.989    "superblock": false,
00:23:14.989    "num_base_bdevs": 4,
00:23:14.989    "num_base_bdevs_discovered": 3,
00:23:14.989    "num_base_bdevs_operational": 3,
00:23:14.989    "base_bdevs_list": [
00:23:14.989      {
00:23:14.989        "name": null,
00:23:14.989        "uuid": "00000000-0000-0000-0000-000000000000",
00:23:14.989        "is_configured": false,
00:23:14.989        "data_offset": 0,
00:23:14.989        "data_size": 65536
00:23:14.989      },
00:23:14.989      {
00:23:14.989        "name": "BaseBdev2",
00:23:14.989        "uuid": "72336a53-22e4-4e31-b198-3f960fe8862f",
00:23:14.989        "is_configured": true,
00:23:14.989        "data_offset": 0,
00:23:14.989        "data_size": 65536
00:23:14.989      },
00:23:14.989      {
00:23:14.989        "name": "BaseBdev3",
00:23:14.989        "uuid": "59076dc8-8c5d-4bd7-a4ab-27ac1cc25ae7",
00:23:14.989        "is_configured": true,
00:23:14.989        "data_offset": 0,
00:23:14.989        "data_size": 65536
00:23:14.989      },
00:23:14.989      {
00:23:14.989        "name": "BaseBdev4",
00:23:14.989        "uuid": "55836d9d-6cf3-4a39-ab5b-20427d7f8cea",
00:23:14.989        "is_configured": true,
00:23:14.989        "data_offset": 0,
00:23:14.989        "data_size": 65536
00:23:14.989      }
00:23:14.989    ]
00:23:14.989  }'
00:23:14.989   17:07:07	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:23:14.989   17:07:07	-- common/autotest_common.sh@10 -- # set +x
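[editor's note] The redundancy check can be exercised in isolation: deleting one of the four base bdevs should leave the raid5f array online in degraded form, as the state dump above shows. A sketch, again reusing $rpc and $sock (the jq projection is an illustrative convenience, not part of the test script):

  # Drop one base bdev; raid5f tolerates a single member failure.
  "$rpc" -s "$sock" bdev_malloc_delete BaseBdev1
  # Expect "online 3/3" for this run: still online, three members discovered.
  "$rpc" -s "$sock" bdev_raid_get_bdevs all \
      | jq -r '.[] | select(.name == "Existed_Raid")
               | "\(.state) \(.num_base_bdevs_discovered)/\(.num_base_bdevs_operational)"'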
00:23:15.558   17:07:08	-- bdev/bdev_raid.sh@273 -- # (( i = 1 ))
00:23:15.558   17:07:08	-- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs ))
00:23:15.558    17:07:08	-- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:23:15.558    17:07:08	-- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]'
00:23:15.818   17:07:08	-- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid
00:23:15.818   17:07:08	-- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:23:15.818   17:07:08	-- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2
00:23:16.077  [2024-11-19 17:07:08.819542] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:23:16.077  [2024-11-19 17:07:08.819607] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:23:16.077  [2024-11-19 17:07:08.819685] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:23:16.077   17:07:08	-- bdev/bdev_raid.sh@273 -- # (( i++ ))
00:23:16.077   17:07:08	-- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs ))
00:23:16.077    17:07:08	-- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]'
00:23:16.077    17:07:08	-- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:23:16.337   17:07:09	-- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid
00:23:16.337   17:07:09	-- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:23:16.337   17:07:09	-- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3
00:23:16.596  [2024-11-19 17:07:09.379926] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:23:16.596   17:07:09	-- bdev/bdev_raid.sh@273 -- # (( i++ ))
00:23:16.596   17:07:09	-- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs ))
00:23:16.596    17:07:09	-- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:23:16.596    17:07:09	-- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]'
00:23:16.856   17:07:09	-- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid
00:23:16.856   17:07:09	-- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:23:16.856   17:07:09	-- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4
00:23:17.115  [2024-11-19 17:07:09.860545] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4
00:23:17.115  [2024-11-19 17:07:09.860645] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006080 name Existed_Raid, state offline
00:23:17.115   17:07:09	-- bdev/bdev_raid.sh@273 -- # (( i++ ))
00:23:17.115   17:07:09	-- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs ))
00:23:17.115    17:07:09	-- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:23:17.115    17:07:09	-- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)'
00:23:17.374   17:07:10	-- bdev/bdev_raid.sh@281 -- # raid_bdev=
00:23:17.375   17:07:10	-- bdev/bdev_raid.sh@282 -- # '[' -n '' ']'
00:23:17.375   17:07:10	-- bdev/bdev_raid.sh@287 -- # killprocess 139788
00:23:17.375   17:07:10	-- common/autotest_common.sh@936 -- # '[' -z 139788 ']'
00:23:17.375   17:07:10	-- common/autotest_common.sh@940 -- # kill -0 139788
00:23:17.375    17:07:10	-- common/autotest_common.sh@941 -- # uname
00:23:17.375   17:07:10	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:23:17.375    17:07:10	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 139788
00:23:17.375  killing process with pid 139788
00:23:17.375   17:07:10	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:23:17.375   17:07:10	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:23:17.375   17:07:10	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 139788'
00:23:17.375   17:07:10	-- common/autotest_common.sh@955 -- # kill 139788
00:23:17.375  [2024-11-19 17:07:10.133778] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:23:17.375  [2024-11-19 17:07:10.133861] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:23:17.375   17:07:10	-- common/autotest_common.sh@960 -- # wait 139788
00:23:17.634   17:07:10	-- bdev/bdev_raid.sh@289 -- # return 0
00:23:17.634  
00:23:17.634  real	0m13.224s
00:23:17.634  user	0m24.057s
00:23:17.634  sys	0m1.984s
00:23:17.634   17:07:10	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:23:17.634   17:07:10	-- common/autotest_common.sh@10 -- # set +x
00:23:17.634  ************************************
00:23:17.634  END TEST raid5f_state_function_test
00:23:17.634  ************************************
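[editor's note] The killprocess teardown recorded at the end of the test above (kill -0 probe, comm lookup, kill, wait) is roughly the shell below; the helper's sudo branch is omitted for brevity.

  pid=139788   # raid_pid captured when bdev_svc was launched
  if kill -0 "$pid" 2>/dev/null; then
      # Only signal the pid if it still belongs to our SPDK reactor,
      # guarding against the pid having been recycled by another process.
      [ "$(ps --no-headers -o comm= "$pid")" = reactor_0 ] && kill "$pid"
      wait "$pid" 2>/dev/null   # bdev_svc is a child of the test shell
  fi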
00:23:17.634   17:07:10	-- bdev/bdev_raid.sh@745 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 4 true
00:23:17.634   17:07:10	-- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']'
00:23:17.634   17:07:10	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:23:17.634   17:07:10	-- common/autotest_common.sh@10 -- # set +x
00:23:17.634  ************************************
00:23:17.634  START TEST raid5f_state_function_test_sb
00:23:17.634  ************************************
00:23:17.634   17:07:10	-- common/autotest_common.sh@1114 -- # raid_state_function_test raid5f 4 true
00:23:17.634   17:07:10	-- bdev/bdev_raid.sh@202 -- # local raid_level=raid5f
00:23:17.634   17:07:10	-- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4
00:23:17.634   17:07:10	-- bdev/bdev_raid.sh@204 -- # local superblock=true
00:23:17.634   17:07:10	-- bdev/bdev_raid.sh@205 -- # local raid_bdev
00:23:17.634    17:07:10	-- bdev/bdev_raid.sh@206 -- # (( i = 1 ))
00:23:17.634    17:07:10	-- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:23:17.634    17:07:10	-- bdev/bdev_raid.sh@206 -- # echo BaseBdev1
00:23:17.634    17:07:10	-- bdev/bdev_raid.sh@206 -- # (( i++ ))
00:23:17.634    17:07:10	-- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:23:17.634    17:07:10	-- bdev/bdev_raid.sh@206 -- # echo BaseBdev2
00:23:17.634    17:07:10	-- bdev/bdev_raid.sh@206 -- # (( i++ ))
00:23:17.634    17:07:10	-- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:23:17.634    17:07:10	-- bdev/bdev_raid.sh@206 -- # echo BaseBdev3
00:23:17.634    17:07:10	-- bdev/bdev_raid.sh@206 -- # (( i++ ))
00:23:17.634    17:07:10	-- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:23:17.634    17:07:10	-- bdev/bdev_raid.sh@206 -- # echo BaseBdev4
00:23:17.634    17:07:10	-- bdev/bdev_raid.sh@206 -- # (( i++ ))
00:23:17.634    17:07:10	-- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:23:17.634   17:07:10	-- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4')
00:23:17.634   17:07:10	-- bdev/bdev_raid.sh@206 -- # local base_bdevs
00:23:17.634   17:07:10	-- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid
00:23:17.634   17:07:10	-- bdev/bdev_raid.sh@208 -- # local strip_size
00:23:17.634   17:07:10	-- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg
00:23:17.634   17:07:10	-- bdev/bdev_raid.sh@210 -- # local superblock_create_arg
00:23:17.634   17:07:10	-- bdev/bdev_raid.sh@212 -- # '[' raid5f '!=' raid1 ']'
00:23:17.634   17:07:10	-- bdev/bdev_raid.sh@213 -- # strip_size=64
00:23:17.634   17:07:10	-- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64'
00:23:17.634   17:07:10	-- bdev/bdev_raid.sh@219 -- # '[' true = true ']'
00:23:17.634   17:07:10	-- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s
00:23:17.634   17:07:10	-- bdev/bdev_raid.sh@226 -- # raid_pid=140214
00:23:17.634   17:07:10	-- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid
00:23:17.634   17:07:10	-- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 140214'
00:23:17.634  Process raid pid: 140214
00:23:17.634   17:07:10	-- bdev/bdev_raid.sh@228 -- # waitforlisten 140214 /var/tmp/spdk-raid.sock
00:23:17.634   17:07:10	-- common/autotest_common.sh@829 -- # '[' -z 140214 ']'
00:23:17.634   17:07:10	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock
00:23:17.634   17:07:10	-- common/autotest_common.sh@834 -- # local max_retries=100
00:23:17.634   17:07:10	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...'
00:23:17.634  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...
00:23:17.634   17:07:10	-- common/autotest_common.sh@838 -- # xtrace_disable
00:23:17.634   17:07:10	-- common/autotest_common.sh@10 -- # set +x
00:23:17.893  [2024-11-19 17:07:10.526676] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:23:17.893  [2024-11-19 17:07:10.526872] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:23:17.893  [2024-11-19 17:07:10.673228] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:23:17.893  [2024-11-19 17:07:10.727814] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:23:18.151  [2024-11-19 17:07:10.772650] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:23:18.720   17:07:11	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:23:18.720   17:07:11	-- common/autotest_common.sh@862 -- # return 0
00:23:18.720   17:07:11	-- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
00:23:18.979  [2024-11-19 17:07:11.741719] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:23:18.979  [2024-11-19 17:07:11.741815] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:23:18.979  [2024-11-19 17:07:11.741827] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:23:18.979  [2024-11-19 17:07:11.741846] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:23:18.979  [2024-11-19 17:07:11.741853] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:23:18.979  [2024-11-19 17:07:11.741899] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:23:18.979  [2024-11-19 17:07:11.741906] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:23:18.979  [2024-11-19 17:07:11.741933] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:23:18.979   17:07:11	-- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4
00:23:18.979   17:07:11	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:23:18.979   17:07:11	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:23:18.979   17:07:11	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f
00:23:18.979   17:07:11	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:23:18.979   17:07:11	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:23:18.979   17:07:11	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:23:18.979   17:07:11	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:23:18.979   17:07:11	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:23:18.979   17:07:11	-- bdev/bdev_raid.sh@125 -- # local tmp
00:23:18.979    17:07:11	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:23:18.979    17:07:11	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:23:19.237   17:07:12	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:23:19.237    "name": "Existed_Raid",
00:23:19.237    "uuid": "64c04f5b-e579-41e4-9ad5-032520afc54e",
00:23:19.237    "strip_size_kb": 64,
00:23:19.237    "state": "configuring",
00:23:19.237    "raid_level": "raid5f",
00:23:19.237    "superblock": true,
00:23:19.237    "num_base_bdevs": 4,
00:23:19.237    "num_base_bdevs_discovered": 0,
00:23:19.237    "num_base_bdevs_operational": 4,
00:23:19.237    "base_bdevs_list": [
00:23:19.237      {
00:23:19.237        "name": "BaseBdev1",
00:23:19.237        "uuid": "00000000-0000-0000-0000-000000000000",
00:23:19.237        "is_configured": false,
00:23:19.237        "data_offset": 0,
00:23:19.237        "data_size": 0
00:23:19.237      },
00:23:19.237      {
00:23:19.237        "name": "BaseBdev2",
00:23:19.237        "uuid": "00000000-0000-0000-0000-000000000000",
00:23:19.237        "is_configured": false,
00:23:19.237        "data_offset": 0,
00:23:19.237        "data_size": 0
00:23:19.237      },
00:23:19.237      {
00:23:19.237        "name": "BaseBdev3",
00:23:19.237        "uuid": "00000000-0000-0000-0000-000000000000",
00:23:19.237        "is_configured": false,
00:23:19.237        "data_offset": 0,
00:23:19.237        "data_size": 0
00:23:19.237      },
00:23:19.237      {
00:23:19.237        "name": "BaseBdev4",
00:23:19.237        "uuid": "00000000-0000-0000-0000-000000000000",
00:23:19.237        "is_configured": false,
00:23:19.237        "data_offset": 0,
00:23:19.237        "data_size": 0
00:23:19.237      }
00:23:19.237    ]
00:23:19.237  }'
00:23:19.237   17:07:12	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:23:19.237   17:07:12	-- common/autotest_common.sh@10 -- # set +x
00:23:19.806   17:07:12	-- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid
00:23:20.065  [2024-11-19 17:07:12.777720] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:23:20.065  [2024-11-19 17:07:12.777793] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005480 name Existed_Raid, state configuring
00:23:20.065   17:07:12	-- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
00:23:20.341  [2024-11-19 17:07:12.969825] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:23:20.341  [2024-11-19 17:07:12.969922] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:23:20.341  [2024-11-19 17:07:12.969933] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:23:20.341  [2024-11-19 17:07:12.969979] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:23:20.341  [2024-11-19 17:07:12.969987] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:23:20.341  [2024-11-19 17:07:12.970004] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:23:20.341  [2024-11-19 17:07:12.970011] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:23:20.341  [2024-11-19 17:07:12.970037] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:23:20.341   17:07:12	-- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1
00:23:20.341  [2024-11-19 17:07:13.184305] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:23:20.341  BaseBdev1
00:23:20.601   17:07:13	-- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1
00:23:20.601   17:07:13	-- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1
00:23:20.601   17:07:13	-- common/autotest_common.sh@898 -- # local bdev_timeout=
00:23:20.601   17:07:13	-- common/autotest_common.sh@899 -- # local i
00:23:20.601   17:07:13	-- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:23:20.601   17:07:13	-- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:23:20.601   17:07:13	-- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:23:20.861   17:07:13	-- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000
00:23:20.861  [
00:23:20.861    {
00:23:20.861      "name": "BaseBdev1",
00:23:20.861      "aliases": [
00:23:20.861        "eb816c49-56ff-477f-b88b-b3a0dd8ac594"
00:23:20.861      ],
00:23:20.861      "product_name": "Malloc disk",
00:23:20.861      "block_size": 512,
00:23:20.861      "num_blocks": 65536,
00:23:20.861      "uuid": "eb816c49-56ff-477f-b88b-b3a0dd8ac594",
00:23:20.861      "assigned_rate_limits": {
00:23:20.861        "rw_ios_per_sec": 0,
00:23:20.861        "rw_mbytes_per_sec": 0,
00:23:20.861        "r_mbytes_per_sec": 0,
00:23:20.861        "w_mbytes_per_sec": 0
00:23:20.861      },
00:23:20.861      "claimed": true,
00:23:20.861      "claim_type": "exclusive_write",
00:23:20.861      "zoned": false,
00:23:20.861      "supported_io_types": {
00:23:20.861        "read": true,
00:23:20.861        "write": true,
00:23:20.862        "unmap": true,
00:23:20.862        "write_zeroes": true,
00:23:20.862        "flush": true,
00:23:20.862        "reset": true,
00:23:20.862        "compare": false,
00:23:20.862        "compare_and_write": false,
00:23:20.862        "abort": true,
00:23:20.862        "nvme_admin": false,
00:23:20.862        "nvme_io": false
00:23:20.862      },
00:23:20.862      "memory_domains": [
00:23:20.862        {
00:23:20.862          "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:23:20.862          "dma_device_type": 2
00:23:20.862        }
00:23:20.862      ],
00:23:20.862      "driver_specific": {}
00:23:20.862    }
00:23:20.862  ]
00:23:20.862   17:07:13	-- common/autotest_common.sh@905 -- # return 0
00:23:20.862   17:07:13	-- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4
00:23:20.862   17:07:13	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:23:20.862   17:07:13	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:23:20.862   17:07:13	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f
00:23:20.862   17:07:13	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:23:20.862   17:07:13	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:23:20.862   17:07:13	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:23:20.862   17:07:13	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:23:20.862   17:07:13	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:23:20.862   17:07:13	-- bdev/bdev_raid.sh@125 -- # local tmp
00:23:20.862    17:07:13	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:23:20.862    17:07:13	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:23:21.121   17:07:13	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:23:21.121    "name": "Existed_Raid",
00:23:21.121    "uuid": "429be325-a357-498c-a81b-58148fab422a",
00:23:21.121    "strip_size_kb": 64,
00:23:21.121    "state": "configuring",
00:23:21.121    "raid_level": "raid5f",
00:23:21.121    "superblock": true,
00:23:21.121    "num_base_bdevs": 4,
00:23:21.121    "num_base_bdevs_discovered": 1,
00:23:21.121    "num_base_bdevs_operational": 4,
00:23:21.121    "base_bdevs_list": [
00:23:21.121      {
00:23:21.121        "name": "BaseBdev1",
00:23:21.121        "uuid": "eb816c49-56ff-477f-b88b-b3a0dd8ac594",
00:23:21.121        "is_configured": true,
00:23:21.121        "data_offset": 2048,
00:23:21.121        "data_size": 63488
00:23:21.121      },
00:23:21.121      {
00:23:21.121        "name": "BaseBdev2",
00:23:21.121        "uuid": "00000000-0000-0000-0000-000000000000",
00:23:21.121        "is_configured": false,
00:23:21.121        "data_offset": 0,
00:23:21.121        "data_size": 0
00:23:21.121      },
00:23:21.121      {
00:23:21.121        "name": "BaseBdev3",
00:23:21.121        "uuid": "00000000-0000-0000-0000-000000000000",
00:23:21.121        "is_configured": false,
00:23:21.121        "data_offset": 0,
00:23:21.121        "data_size": 0
00:23:21.121      },
00:23:21.121      {
00:23:21.121        "name": "BaseBdev4",
00:23:21.121        "uuid": "00000000-0000-0000-0000-000000000000",
00:23:21.121        "is_configured": false,
00:23:21.121        "data_offset": 0,
00:23:21.121        "data_size": 0
00:23:21.121      }
00:23:21.121    ]
00:23:21.122  }'
00:23:21.122   17:07:13	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:23:21.122   17:07:13	-- common/autotest_common.sh@10 -- # set +x
00:23:22.061   17:07:14	-- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid
00:23:22.061  [2024-11-19 17:07:14.848702] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:23:22.061  [2024-11-19 17:07:14.848794] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005780 name Existed_Raid, state configuring
00:23:22.061   17:07:14	-- bdev/bdev_raid.sh@244 -- # '[' true = true ']'
00:23:22.061   17:07:14	-- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1
00:23:22.320   17:07:15	-- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1
00:23:22.580  BaseBdev1
00:23:22.580   17:07:15	-- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1
00:23:22.580   17:07:15	-- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1
00:23:22.580   17:07:15	-- common/autotest_common.sh@898 -- # local bdev_timeout=
00:23:22.580   17:07:15	-- common/autotest_common.sh@899 -- # local i
00:23:22.580   17:07:15	-- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:23:22.580   17:07:15	-- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:23:22.580   17:07:15	-- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:23:22.840   17:07:15	-- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000
00:23:23.098  [
00:23:23.098    {
00:23:23.098      "name": "BaseBdev1",
00:23:23.098      "aliases": [
00:23:23.098        "a366d113-4ffd-4031-a083-852f6b2a71a6"
00:23:23.098      ],
00:23:23.098      "product_name": "Malloc disk",
00:23:23.098      "block_size": 512,
00:23:23.098      "num_blocks": 65536,
00:23:23.098      "uuid": "a366d113-4ffd-4031-a083-852f6b2a71a6",
00:23:23.098      "assigned_rate_limits": {
00:23:23.098        "rw_ios_per_sec": 0,
00:23:23.098        "rw_mbytes_per_sec": 0,
00:23:23.098        "r_mbytes_per_sec": 0,
00:23:23.098        "w_mbytes_per_sec": 0
00:23:23.098      },
00:23:23.098      "claimed": false,
00:23:23.098      "zoned": false,
00:23:23.098      "supported_io_types": {
00:23:23.098        "read": true,
00:23:23.098        "write": true,
00:23:23.098        "unmap": true,
00:23:23.098        "write_zeroes": true,
00:23:23.098        "flush": true,
00:23:23.098        "reset": true,
00:23:23.098        "compare": false,
00:23:23.098        "compare_and_write": false,
00:23:23.098        "abort": true,
00:23:23.098        "nvme_admin": false,
00:23:23.098        "nvme_io": false
00:23:23.098      },
00:23:23.098      "memory_domains": [
00:23:23.098        {
00:23:23.098          "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:23:23.098          "dma_device_type": 2
00:23:23.098        }
00:23:23.098      ],
00:23:23.098      "driver_specific": {}
00:23:23.098    }
00:23:23.098  ]
00:23:23.098   17:07:15	-- common/autotest_common.sh@905 -- # return 0
00:23:23.099   17:07:15	-- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
00:23:23.358  [2024-11-19 17:07:16.128300] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:23:23.358  [2024-11-19 17:07:16.130764] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:23:23.358  [2024-11-19 17:07:16.130872] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:23:23.358  [2024-11-19 17:07:16.130884] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:23:23.358  [2024-11-19 17:07:16.130909] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:23:23.358  [2024-11-19 17:07:16.130917] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:23:23.358  [2024-11-19 17:07:16.130934] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:23:23.358   17:07:16	-- bdev/bdev_raid.sh@254 -- # (( i = 1 ))
00:23:23.358   17:07:16	-- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs ))
00:23:23.358   17:07:16	-- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4
00:23:23.358   17:07:16	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:23:23.358   17:07:16	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:23:23.358   17:07:16	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f
00:23:23.358   17:07:16	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:23:23.358   17:07:16	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:23:23.358   17:07:16	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:23:23.358   17:07:16	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:23:23.358   17:07:16	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:23:23.358   17:07:16	-- bdev/bdev_raid.sh@125 -- # local tmp
00:23:23.358    17:07:16	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:23:23.358    17:07:16	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:23:23.927   17:07:16	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:23:23.927    "name": "Existed_Raid",
00:23:23.927    "uuid": "0e5b468e-ef69-434c-83c7-6b2c5395b95b",
00:23:23.927    "strip_size_kb": 64,
00:23:23.927    "state": "configuring",
00:23:23.927    "raid_level": "raid5f",
00:23:23.927    "superblock": true,
00:23:23.927    "num_base_bdevs": 4,
00:23:23.927    "num_base_bdevs_discovered": 1,
00:23:23.927    "num_base_bdevs_operational": 4,
00:23:23.927    "base_bdevs_list": [
00:23:23.927      {
00:23:23.927        "name": "BaseBdev1",
00:23:23.927        "uuid": "a366d113-4ffd-4031-a083-852f6b2a71a6",
00:23:23.927        "is_configured": true,
00:23:23.927        "data_offset": 2048,
00:23:23.927        "data_size": 63488
00:23:23.927      },
00:23:23.927      {
00:23:23.927        "name": "BaseBdev2",
00:23:23.927        "uuid": "00000000-0000-0000-0000-000000000000",
00:23:23.927        "is_configured": false,
00:23:23.927        "data_offset": 0,
00:23:23.927        "data_size": 0
00:23:23.927      },
00:23:23.927      {
00:23:23.927        "name": "BaseBdev3",
00:23:23.927        "uuid": "00000000-0000-0000-0000-000000000000",
00:23:23.927        "is_configured": false,
00:23:23.927        "data_offset": 0,
00:23:23.927        "data_size": 0
00:23:23.927      },
00:23:23.927      {
00:23:23.927        "name": "BaseBdev4",
00:23:23.927        "uuid": "00000000-0000-0000-0000-000000000000",
00:23:23.927        "is_configured": false,
00:23:23.927        "data_offset": 0,
00:23:23.927        "data_size": 0
00:23:23.927      }
00:23:23.927    ]
00:23:23.927  }'
00:23:23.927   17:07:16	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:23:23.927   17:07:16	-- common/autotest_common.sh@10 -- # set +x
00:23:24.584   17:07:17	-- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2
00:23:24.584  [2024-11-19 17:07:17.354759] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:23:24.584  BaseBdev2
00:23:24.584   17:07:17	-- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2
00:23:24.584   17:07:17	-- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2
00:23:24.584   17:07:17	-- common/autotest_common.sh@898 -- # local bdev_timeout=
00:23:24.584   17:07:17	-- common/autotest_common.sh@899 -- # local i
00:23:24.584   17:07:17	-- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:23:24.584   17:07:17	-- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:23:24.584   17:07:17	-- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:23:24.843   17:07:17	-- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000
00:23:25.103  [
00:23:25.103    {
00:23:25.103      "name": "BaseBdev2",
00:23:25.103      "aliases": [
00:23:25.103        "0b968a73-ada3-42c5-bab9-5565e8c64752"
00:23:25.103      ],
00:23:25.103      "product_name": "Malloc disk",
00:23:25.103      "block_size": 512,
00:23:25.103      "num_blocks": 65536,
00:23:25.103      "uuid": "0b968a73-ada3-42c5-bab9-5565e8c64752",
00:23:25.103      "assigned_rate_limits": {
00:23:25.103        "rw_ios_per_sec": 0,
00:23:25.103        "rw_mbytes_per_sec": 0,
00:23:25.103        "r_mbytes_per_sec": 0,
00:23:25.103        "w_mbytes_per_sec": 0
00:23:25.103      },
00:23:25.103      "claimed": true,
00:23:25.103      "claim_type": "exclusive_write",
00:23:25.103      "zoned": false,
00:23:25.103      "supported_io_types": {
00:23:25.103        "read": true,
00:23:25.103        "write": true,
00:23:25.103        "unmap": true,
00:23:25.103        "write_zeroes": true,
00:23:25.103        "flush": true,
00:23:25.103        "reset": true,
00:23:25.103        "compare": false,
00:23:25.103        "compare_and_write": false,
00:23:25.103        "abort": true,
00:23:25.103        "nvme_admin": false,
00:23:25.103        "nvme_io": false
00:23:25.103      },
00:23:25.103      "memory_domains": [
00:23:25.103        {
00:23:25.103          "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:23:25.103          "dma_device_type": 2
00:23:25.103        }
00:23:25.103      ],
00:23:25.103      "driver_specific": {}
00:23:25.103    }
00:23:25.103  ]
00:23:25.103   17:07:17	-- common/autotest_common.sh@905 -- # return 0
00:23:25.103   17:07:17	-- bdev/bdev_raid.sh@254 -- # (( i++ ))
00:23:25.103   17:07:17	-- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs ))
00:23:25.103   17:07:17	-- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4
00:23:25.103   17:07:17	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:23:25.103   17:07:17	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:23:25.103   17:07:17	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f
00:23:25.103   17:07:17	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:23:25.103   17:07:17	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:23:25.103   17:07:17	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:23:25.103   17:07:17	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:23:25.103   17:07:17	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:23:25.103   17:07:17	-- bdev/bdev_raid.sh@125 -- # local tmp
00:23:25.103    17:07:17	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:23:25.103    17:07:17	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:23:25.363   17:07:18	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:23:25.363    "name": "Existed_Raid",
00:23:25.363    "uuid": "0e5b468e-ef69-434c-83c7-6b2c5395b95b",
00:23:25.363    "strip_size_kb": 64,
00:23:25.363    "state": "configuring",
00:23:25.363    "raid_level": "raid5f",
00:23:25.363    "superblock": true,
00:23:25.363    "num_base_bdevs": 4,
00:23:25.363    "num_base_bdevs_discovered": 2,
00:23:25.363    "num_base_bdevs_operational": 4,
00:23:25.363    "base_bdevs_list": [
00:23:25.363      {
00:23:25.363        "name": "BaseBdev1",
00:23:25.363        "uuid": "a366d113-4ffd-4031-a083-852f6b2a71a6",
00:23:25.363        "is_configured": true,
00:23:25.363        "data_offset": 2048,
00:23:25.363        "data_size": 63488
00:23:25.363      },
00:23:25.363      {
00:23:25.363        "name": "BaseBdev2",
00:23:25.363        "uuid": "0b968a73-ada3-42c5-bab9-5565e8c64752",
00:23:25.363        "is_configured": true,
00:23:25.363        "data_offset": 2048,
00:23:25.363        "data_size": 63488
00:23:25.363      },
00:23:25.363      {
00:23:25.363        "name": "BaseBdev3",
00:23:25.363        "uuid": "00000000-0000-0000-0000-000000000000",
00:23:25.363        "is_configured": false,
00:23:25.363        "data_offset": 0,
00:23:25.363        "data_size": 0
00:23:25.363      },
00:23:25.363      {
00:23:25.363        "name": "BaseBdev4",
00:23:25.363        "uuid": "00000000-0000-0000-0000-000000000000",
00:23:25.363        "is_configured": false,
00:23:25.363        "data_offset": 0,
00:23:25.363        "data_size": 0
00:23:25.363      }
00:23:25.363    ]
00:23:25.363  }'
00:23:25.363   17:07:18	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:23:25.363   17:07:18	-- common/autotest_common.sh@10 -- # set +x
00:23:26.299   17:07:18	-- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3
00:23:26.299  [2024-11-19 17:07:19.071279] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:23:26.299  BaseBdev3
00:23:26.299   17:07:19	-- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3
00:23:26.299   17:07:19	-- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3
00:23:26.300   17:07:19	-- common/autotest_common.sh@898 -- # local bdev_timeout=
00:23:26.300   17:07:19	-- common/autotest_common.sh@899 -- # local i
00:23:26.300   17:07:19	-- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:23:26.300   17:07:19	-- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:23:26.300   17:07:19	-- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:23:26.557   17:07:19	-- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000
00:23:26.815  [
00:23:26.815    {
00:23:26.815      "name": "BaseBdev3",
00:23:26.815      "aliases": [
00:23:26.815        "e1f0ee88-7682-411e-aa42-9d14e734fca8"
00:23:26.815      ],
00:23:26.815      "product_name": "Malloc disk",
00:23:26.815      "block_size": 512,
00:23:26.815      "num_blocks": 65536,
00:23:26.815      "uuid": "e1f0ee88-7682-411e-aa42-9d14e734fca8",
00:23:26.815      "assigned_rate_limits": {
00:23:26.815        "rw_ios_per_sec": 0,
00:23:26.815        "rw_mbytes_per_sec": 0,
00:23:26.815        "r_mbytes_per_sec": 0,
00:23:26.815        "w_mbytes_per_sec": 0
00:23:26.815      },
00:23:26.815      "claimed": true,
00:23:26.815      "claim_type": "exclusive_write",
00:23:26.815      "zoned": false,
00:23:26.815      "supported_io_types": {
00:23:26.815        "read": true,
00:23:26.815        "write": true,
00:23:26.815        "unmap": true,
00:23:26.815        "write_zeroes": true,
00:23:26.815        "flush": true,
00:23:26.815        "reset": true,
00:23:26.815        "compare": false,
00:23:26.815        "compare_and_write": false,
00:23:26.815        "abort": true,
00:23:26.815        "nvme_admin": false,
00:23:26.815        "nvme_io": false
00:23:26.815      },
00:23:26.815      "memory_domains": [
00:23:26.815        {
00:23:26.815          "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:23:26.815          "dma_device_type": 2
00:23:26.815        }
00:23:26.815      ],
00:23:26.815      "driver_specific": {}
00:23:26.815    }
00:23:26.815  ]
00:23:26.815   17:07:19	-- common/autotest_common.sh@905 -- # return 0
00:23:26.815   17:07:19	-- bdev/bdev_raid.sh@254 -- # (( i++ ))
00:23:26.815   17:07:19	-- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs ))
00:23:26.815   17:07:19	-- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4
00:23:26.815   17:07:19	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:23:26.815   17:07:19	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:23:26.815   17:07:19	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f
00:23:26.815   17:07:19	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:23:26.815   17:07:19	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:23:26.815   17:07:19	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:23:26.815   17:07:19	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:23:26.815   17:07:19	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:23:26.815   17:07:19	-- bdev/bdev_raid.sh@125 -- # local tmp
00:23:26.815    17:07:19	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:23:26.815    17:07:19	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:23:27.074   17:07:19	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:23:27.074    "name": "Existed_Raid",
00:23:27.074    "uuid": "0e5b468e-ef69-434c-83c7-6b2c5395b95b",
00:23:27.074    "strip_size_kb": 64,
00:23:27.074    "state": "configuring",
00:23:27.074    "raid_level": "raid5f",
00:23:27.074    "superblock": true,
00:23:27.074    "num_base_bdevs": 4,
00:23:27.074    "num_base_bdevs_discovered": 3,
00:23:27.074    "num_base_bdevs_operational": 4,
00:23:27.074    "base_bdevs_list": [
00:23:27.074      {
00:23:27.074        "name": "BaseBdev1",
00:23:27.074        "uuid": "a366d113-4ffd-4031-a083-852f6b2a71a6",
00:23:27.074        "is_configured": true,
00:23:27.074        "data_offset": 2048,
00:23:27.074        "data_size": 63488
00:23:27.074      },
00:23:27.074      {
00:23:27.074        "name": "BaseBdev2",
00:23:27.074        "uuid": "0b968a73-ada3-42c5-bab9-5565e8c64752",
00:23:27.074        "is_configured": true,
00:23:27.074        "data_offset": 2048,
00:23:27.074        "data_size": 63488
00:23:27.074      },
00:23:27.074      {
00:23:27.074        "name": "BaseBdev3",
00:23:27.074        "uuid": "e1f0ee88-7682-411e-aa42-9d14e734fca8",
00:23:27.074        "is_configured": true,
00:23:27.074        "data_offset": 2048,
00:23:27.074        "data_size": 63488
00:23:27.074      },
00:23:27.074      {
00:23:27.074        "name": "BaseBdev4",
00:23:27.074        "uuid": "00000000-0000-0000-0000-000000000000",
00:23:27.074        "is_configured": false,
00:23:27.074        "data_offset": 0,
00:23:27.074        "data_size": 0
00:23:27.074      }
00:23:27.074    ]
00:23:27.074  }'
00:23:27.074   17:07:19	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:23:27.074   17:07:19	-- common/autotest_common.sh@10 -- # set +x
00:23:28.010   17:07:20	-- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4
00:23:28.010  [2024-11-19 17:07:20.727229] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed
00:23:28.010  [2024-11-19 17:07:20.727737] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006680
00:23:28.010  [2024-11-19 17:07:20.727870] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512
00:23:28.010  [2024-11-19 17:07:20.728066] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000021f0
00:23:28.010  [2024-11-19 17:07:20.728905] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006680
00:23:28.010  [2024-11-19 17:07:20.729040] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006680
00:23:28.010  [2024-11-19 17:07:20.729399] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:23:28.010  BaseBdev4
00:23:28.010   17:07:20	-- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4
00:23:28.011   17:07:20	-- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4
00:23:28.011   17:07:20	-- common/autotest_common.sh@898 -- # local bdev_timeout=
00:23:28.011   17:07:20	-- common/autotest_common.sh@899 -- # local i
00:23:28.011   17:07:20	-- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:23:28.011   17:07:20	-- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:23:28.011   17:07:20	-- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:23:28.288   17:07:21	-- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000
00:23:28.547  [
00:23:28.547    {
00:23:28.547      "name": "BaseBdev4",
00:23:28.547      "aliases": [
00:23:28.547        "4fd72ebe-181c-465d-9b54-e09c44a15cdb"
00:23:28.547      ],
00:23:28.547      "product_name": "Malloc disk",
00:23:28.547      "block_size": 512,
00:23:28.547      "num_blocks": 65536,
00:23:28.547      "uuid": "4fd72ebe-181c-465d-9b54-e09c44a15cdb",
00:23:28.547      "assigned_rate_limits": {
00:23:28.547        "rw_ios_per_sec": 0,
00:23:28.547        "rw_mbytes_per_sec": 0,
00:23:28.547        "r_mbytes_per_sec": 0,
00:23:28.547        "w_mbytes_per_sec": 0
00:23:28.547      },
00:23:28.547      "claimed": true,
00:23:28.547      "claim_type": "exclusive_write",
00:23:28.547      "zoned": false,
00:23:28.547      "supported_io_types": {
00:23:28.547        "read": true,
00:23:28.547        "write": true,
00:23:28.547        "unmap": true,
00:23:28.547        "write_zeroes": true,
00:23:28.547        "flush": true,
00:23:28.547        "reset": true,
00:23:28.547        "compare": false,
00:23:28.547        "compare_and_write": false,
00:23:28.547        "abort": true,
00:23:28.547        "nvme_admin": false,
00:23:28.547        "nvme_io": false
00:23:28.547      },
00:23:28.547      "memory_domains": [
00:23:28.547        {
00:23:28.547          "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:23:28.547          "dma_device_type": 2
00:23:28.547        }
00:23:28.547      ],
00:23:28.547      "driver_specific": {}
00:23:28.547    }
00:23:28.547  ]
00:23:28.547   17:07:21	-- common/autotest_common.sh@905 -- # return 0
00:23:28.547   17:07:21	-- bdev/bdev_raid.sh@254 -- # (( i++ ))
00:23:28.547   17:07:21	-- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs ))
00:23:28.547   17:07:21	-- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4
00:23:28.547   17:07:21	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:23:28.547   17:07:21	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:23:28.547   17:07:21	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f
00:23:28.547   17:07:21	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:23:28.547   17:07:21	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:23:28.547   17:07:21	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:23:28.547   17:07:21	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:23:28.547   17:07:21	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:23:28.547   17:07:21	-- bdev/bdev_raid.sh@125 -- # local tmp
00:23:28.547    17:07:21	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:23:28.547    17:07:21	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:23:28.805   17:07:21	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:23:28.805    "name": "Existed_Raid",
00:23:28.805    "uuid": "0e5b468e-ef69-434c-83c7-6b2c5395b95b",
00:23:28.805    "strip_size_kb": 64,
00:23:28.805    "state": "online",
00:23:28.805    "raid_level": "raid5f",
00:23:28.805    "superblock": true,
00:23:28.805    "num_base_bdevs": 4,
00:23:28.805    "num_base_bdevs_discovered": 4,
00:23:28.805    "num_base_bdevs_operational": 4,
00:23:28.805    "base_bdevs_list": [
00:23:28.805      {
00:23:28.805        "name": "BaseBdev1",
00:23:28.805        "uuid": "a366d113-4ffd-4031-a083-852f6b2a71a6",
00:23:28.805        "is_configured": true,
00:23:28.805        "data_offset": 2048,
00:23:28.805        "data_size": 63488
00:23:28.805      },
00:23:28.805      {
00:23:28.805        "name": "BaseBdev2",
00:23:28.805        "uuid": "0b968a73-ada3-42c5-bab9-5565e8c64752",
00:23:28.805        "is_configured": true,
00:23:28.805        "data_offset": 2048,
00:23:28.805        "data_size": 63488
00:23:28.805      },
00:23:28.805      {
00:23:28.805        "name": "BaseBdev3",
00:23:28.805        "uuid": "e1f0ee88-7682-411e-aa42-9d14e734fca8",
00:23:28.805        "is_configured": true,
00:23:28.805        "data_offset": 2048,
00:23:28.805        "data_size": 63488
00:23:28.805      },
00:23:28.805      {
00:23:28.805        "name": "BaseBdev4",
00:23:28.805        "uuid": "4fd72ebe-181c-465d-9b54-e09c44a15cdb",
00:23:28.805        "is_configured": true,
00:23:28.805        "data_offset": 2048,
00:23:28.805        "data_size": 63488
00:23:28.805      }
00:23:28.805    ]
00:23:28.805  }'
00:23:28.805   17:07:21	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:23:28.805   17:07:21	-- common/autotest_common.sh@10 -- # set +x
00:23:29.370   17:07:22	-- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1
00:23:29.629  [2024-11-19 17:07:22.251971] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:23:29.629   17:07:22	-- bdev/bdev_raid.sh@263 -- # local expected_state
00:23:29.629   17:07:22	-- bdev/bdev_raid.sh@264 -- # has_redundancy raid5f
00:23:29.629   17:07:22	-- bdev/bdev_raid.sh@195 -- # case $1 in
00:23:29.629   17:07:22	-- bdev/bdev_raid.sh@196 -- # return 0
00:23:29.629   17:07:22	-- bdev/bdev_raid.sh@267 -- # expected_state=online
00:23:29.629   17:07:22	-- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3
00:23:29.629   17:07:22	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:23:29.629   17:07:22	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:23:29.629   17:07:22	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f
00:23:29.629   17:07:22	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:23:29.629   17:07:22	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:23:29.629   17:07:22	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:23:29.629   17:07:22	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:23:29.629   17:07:22	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:23:29.629   17:07:22	-- bdev/bdev_raid.sh@125 -- # local tmp
00:23:29.629    17:07:22	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:23:29.629    17:07:22	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:23:29.629   17:07:22	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:23:29.629    "name": "Existed_Raid",
00:23:29.629    "uuid": "0e5b468e-ef69-434c-83c7-6b2c5395b95b",
00:23:29.629    "strip_size_kb": 64,
00:23:29.629    "state": "online",
00:23:29.629    "raid_level": "raid5f",
00:23:29.629    "superblock": true,
00:23:29.629    "num_base_bdevs": 4,
00:23:29.629    "num_base_bdevs_discovered": 3,
00:23:29.629    "num_base_bdevs_operational": 3,
00:23:29.629    "base_bdevs_list": [
00:23:29.629      {
00:23:29.629        "name": null,
00:23:29.629        "uuid": "00000000-0000-0000-0000-000000000000",
00:23:29.629        "is_configured": false,
00:23:29.629        "data_offset": 2048,
00:23:29.629        "data_size": 63488
00:23:29.629      },
00:23:29.629      {
00:23:29.629        "name": "BaseBdev2",
00:23:29.629        "uuid": "0b968a73-ada3-42c5-bab9-5565e8c64752",
00:23:29.629        "is_configured": true,
00:23:29.629        "data_offset": 2048,
00:23:29.629        "data_size": 63488
00:23:29.629      },
00:23:29.629      {
00:23:29.629        "name": "BaseBdev3",
00:23:29.629        "uuid": "e1f0ee88-7682-411e-aa42-9d14e734fca8",
00:23:29.629        "is_configured": true,
00:23:29.629        "data_offset": 2048,
00:23:29.629        "data_size": 63488
00:23:29.629      },
00:23:29.629      {
00:23:29.629        "name": "BaseBdev4",
00:23:29.629        "uuid": "4fd72ebe-181c-465d-9b54-e09c44a15cdb",
00:23:29.629        "is_configured": true,
00:23:29.629        "data_offset": 2048,
00:23:29.629        "data_size": 63488
00:23:29.629      }
00:23:29.629    ]
00:23:29.629  }'
00:23:29.629   17:07:22	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:23:29.629   17:07:22	-- common/autotest_common.sh@10 -- # set +x
00:23:30.561   17:07:23	-- bdev/bdev_raid.sh@273 -- # (( i = 1 ))
00:23:30.561   17:07:23	-- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs ))
00:23:30.561    17:07:23	-- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:23:30.561    17:07:23	-- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]'
00:23:30.561   17:07:23	-- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid
00:23:30.561   17:07:23	-- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:23:30.561   17:07:23	-- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2
00:23:30.819  [2024-11-19 17:07:23.619709] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:23:30.819  [2024-11-19 17:07:23.620033] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:23:30.820  [2024-11-19 17:07:23.620222] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:23:30.820   17:07:23	-- bdev/bdev_raid.sh@273 -- # (( i++ ))
00:23:30.820   17:07:23	-- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs ))
00:23:30.820    17:07:23	-- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:23:30.820    17:07:23	-- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]'
00:23:31.387   17:07:24	-- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid
00:23:31.387   17:07:24	-- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:23:31.387   17:07:24	-- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3
00:23:31.387  [2024-11-19 17:07:24.209557] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:23:31.387   17:07:24	-- bdev/bdev_raid.sh@273 -- # (( i++ ))
00:23:31.387   17:07:24	-- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs ))
00:23:31.645    17:07:24	-- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]'
00:23:31.645    17:07:24	-- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:23:31.904   17:07:24	-- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid
00:23:31.904   17:07:24	-- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:23:31.904   17:07:24	-- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4
00:23:31.904  [2024-11-19 17:07:24.695483] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4
00:23:31.904  [2024-11-19 17:07:24.695787] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state offline
00:23:31.904   17:07:24	-- bdev/bdev_raid.sh@273 -- # (( i++ ))
00:23:31.904   17:07:24	-- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs ))
00:23:31.904    17:07:24	-- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:23:31.904    17:07:24	-- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)'
00:23:32.164   17:07:24	-- bdev/bdev_raid.sh@281 -- # raid_bdev=
00:23:32.164   17:07:24	-- bdev/bdev_raid.sh@282 -- # '[' -n '' ']'
00:23:32.164   17:07:24	-- bdev/bdev_raid.sh@287 -- # killprocess 140214
00:23:32.164   17:07:24	-- common/autotest_common.sh@936 -- # '[' -z 140214 ']'
00:23:32.164   17:07:24	-- common/autotest_common.sh@940 -- # kill -0 140214
00:23:32.164    17:07:24	-- common/autotest_common.sh@941 -- # uname
00:23:32.164   17:07:24	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:23:32.164    17:07:24	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 140214
00:23:32.164   17:07:24	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:23:32.164   17:07:24	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:23:32.164   17:07:24	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 140214'
00:23:32.164  killing process with pid 140214
00:23:32.164   17:07:24	-- common/autotest_common.sh@955 -- # kill 140214
00:23:32.164  [2024-11-19 17:07:24.996274] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:23:32.164   17:07:24	-- common/autotest_common.sh@960 -- # wait 140214
00:23:32.164  [2024-11-19 17:07:24.996565] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:23:32.422   17:07:25	-- bdev/bdev_raid.sh@289 -- # return 0
00:23:32.422  
00:23:32.422  real	0m14.800s
00:23:32.422  user	0m26.781s
00:23:32.422  sys	0m2.396s
00:23:32.422   17:07:25	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:23:32.422   17:07:25	-- common/autotest_common.sh@10 -- # set +x
00:23:32.422  ************************************
00:23:32.422  END TEST raid5f_state_function_test_sb
00:23:32.422  ************************************
00:23:32.681   17:07:25	-- bdev/bdev_raid.sh@746 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 4
00:23:32.681   17:07:25	-- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']'
00:23:32.681   17:07:25	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:23:32.681   17:07:25	-- common/autotest_common.sh@10 -- # set +x
00:23:32.681  ************************************
00:23:32.681  START TEST raid5f_superblock_test
00:23:32.681  ************************************
00:23:32.681   17:07:25	-- common/autotest_common.sh@1114 -- # raid_superblock_test raid5f 4
00:23:32.681   17:07:25	-- bdev/bdev_raid.sh@338 -- # local raid_level=raid5f
00:23:32.681   17:07:25	-- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=4
00:23:32.681   17:07:25	-- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=()
00:23:32.681   17:07:25	-- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc
00:23:32.681   17:07:25	-- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=()
00:23:32.681   17:07:25	-- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt
00:23:32.681   17:07:25	-- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=()
00:23:32.681   17:07:25	-- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid
00:23:32.681   17:07:25	-- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1
00:23:32.681   17:07:25	-- bdev/bdev_raid.sh@344 -- # local strip_size
00:23:32.681   17:07:25	-- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg
00:23:32.681   17:07:25	-- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid
00:23:32.681   17:07:25	-- bdev/bdev_raid.sh@347 -- # local raid_bdev
00:23:32.681   17:07:25	-- bdev/bdev_raid.sh@349 -- # '[' raid5f '!=' raid1 ']'
00:23:32.681   17:07:25	-- bdev/bdev_raid.sh@350 -- # strip_size=64
00:23:32.681   17:07:25	-- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64'
00:23:32.681   17:07:25	-- bdev/bdev_raid.sh@357 -- # raid_pid=140662
00:23:32.681   17:07:25	-- bdev/bdev_raid.sh@358 -- # waitforlisten 140662 /var/tmp/spdk-raid.sock
00:23:32.681   17:07:25	-- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid
00:23:32.681   17:07:25	-- common/autotest_common.sh@829 -- # '[' -z 140662 ']'
00:23:32.681   17:07:25	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock
00:23:32.681   17:07:25	-- common/autotest_common.sh@834 -- # local max_retries=100
00:23:32.681   17:07:25	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...'
00:23:32.681  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...
00:23:32.681   17:07:25	-- common/autotest_common.sh@838 -- # xtrace_disable
00:23:32.681   17:07:25	-- common/autotest_common.sh@10 -- # set +x
00:23:32.681  [2024-11-19 17:07:25.413124] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:23:32.681  [2024-11-19 17:07:25.414286] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid140662 ]
00:23:32.940  [2024-11-19 17:07:25.576849] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:23:32.940  [2024-11-19 17:07:25.636574] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:23:32.940  [2024-11-19 17:07:25.686510] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:23:33.878   17:07:26	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:23:33.878   17:07:26	-- common/autotest_common.sh@862 -- # return 0
00:23:33.878   17:07:26	-- bdev/bdev_raid.sh@361 -- # (( i = 1 ))
00:23:33.878   17:07:26	-- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs ))
00:23:33.878   17:07:26	-- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1
00:23:33.878   17:07:26	-- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1
00:23:33.878   17:07:26	-- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001
00:23:33.878   17:07:26	-- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc)
00:23:33.878   17:07:26	-- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt)
00:23:33.878   17:07:26	-- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:23:33.878   17:07:26	-- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1
00:23:33.878  malloc1
00:23:33.878   17:07:26	-- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:23:34.136  [2024-11-19 17:07:26.926569] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:23:34.136  [2024-11-19 17:07:26.926977] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:23:34.137  [2024-11-19 17:07:26.927076] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000005a80
00:23:34.137  [2024-11-19 17:07:26.927214] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:23:34.137  [2024-11-19 17:07:26.930283] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:23:34.137  [2024-11-19 17:07:26.930502] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:23:34.137  pt1
00:23:34.137   17:07:26	-- bdev/bdev_raid.sh@361 -- # (( i++ ))
00:23:34.137   17:07:26	-- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs ))
00:23:34.137   17:07:26	-- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2
00:23:34.137   17:07:26	-- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2
00:23:34.137   17:07:26	-- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002
00:23:34.137   17:07:26	-- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc)
00:23:34.137   17:07:26	-- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt)
00:23:34.137   17:07:26	-- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:23:34.137   17:07:26	-- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2
00:23:34.395  malloc2
00:23:34.654   17:07:27	-- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:23:34.654  [2024-11-19 17:07:27.432568] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:23:34.654  [2024-11-19 17:07:27.432923] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:23:34.654  [2024-11-19 17:07:27.433007] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680
00:23:34.654  [2024-11-19 17:07:27.433270] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:23:34.654  [2024-11-19 17:07:27.436267] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:23:34.654  [2024-11-19 17:07:27.436488] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:23:34.654  pt2
00:23:34.654   17:07:27	-- bdev/bdev_raid.sh@361 -- # (( i++ ))
00:23:34.654   17:07:27	-- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs ))
00:23:34.654   17:07:27	-- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3
00:23:34.654   17:07:27	-- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3
00:23:34.654   17:07:27	-- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003
00:23:34.654   17:07:27	-- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc)
00:23:34.654   17:07:27	-- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt)
00:23:34.654   17:07:27	-- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:23:34.654   17:07:27	-- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3
00:23:34.912  malloc3
00:23:34.912   17:07:27	-- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:23:35.171  [2024-11-19 17:07:27.885381] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:23:35.171  [2024-11-19 17:07:27.885735] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:23:35.171  [2024-11-19 17:07:27.885843] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
00:23:35.171  [2024-11-19 17:07:27.885987] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:23:35.171  [2024-11-19 17:07:27.889037] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:23:35.171  [2024-11-19 17:07:27.889340] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:23:35.171  pt3
00:23:35.171   17:07:27	-- bdev/bdev_raid.sh@361 -- # (( i++ ))
00:23:35.171   17:07:27	-- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs ))
00:23:35.171   17:07:27	-- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc4
00:23:35.171   17:07:27	-- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt4
00:23:35.171   17:07:27	-- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004
00:23:35.171   17:07:27	-- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc)
00:23:35.171   17:07:27	-- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt)
00:23:35.171   17:07:27	-- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:23:35.171   17:07:27	-- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4
00:23:35.430  malloc4
00:23:35.430   17:07:28	-- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004
00:23:35.689  [2024-11-19 17:07:28.311334] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4
00:23:35.689  [2024-11-19 17:07:28.311679] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:23:35.689  [2024-11-19 17:07:28.311782] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80
00:23:35.689  [2024-11-19 17:07:28.311914] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:23:35.689  [2024-11-19 17:07:28.315045] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:23:35.689  [2024-11-19 17:07:28.315306] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4
00:23:35.689  pt4
00:23:35.689   17:07:28	-- bdev/bdev_raid.sh@361 -- # (( i++ ))
00:23:35.689   17:07:28	-- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs ))
00:23:35.689   17:07:28	-- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s
00:23:35.689  [2024-11-19 17:07:28.527801] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:23:35.689  [2024-11-19 17:07:28.530446] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:23:35.689  [2024-11-19 17:07:28.530719] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:23:35.689  [2024-11-19 17:07:28.530797] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed
00:23:35.689  [2024-11-19 17:07:28.531163] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008480
00:23:35.689  [2024-11-19 17:07:28.531266] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512
00:23:35.689  [2024-11-19 17:07:28.531453] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460
00:23:35.689  [2024-11-19 17:07:28.532357] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008480
00:23:35.689  [2024-11-19 17:07:28.532481] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008480
00:23:35.689  [2024-11-19 17:07:28.532786] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:23:35.947   17:07:28	-- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4
00:23:35.947   17:07:28	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:23:35.947   17:07:28	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:23:35.947   17:07:28	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f
00:23:35.947   17:07:28	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:23:35.947   17:07:28	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:23:35.947   17:07:28	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:23:35.947   17:07:28	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:23:35.947   17:07:28	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:23:35.947   17:07:28	-- bdev/bdev_raid.sh@125 -- # local tmp
00:23:35.947    17:07:28	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:23:35.947    17:07:28	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:23:36.209   17:07:28	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:23:36.209    "name": "raid_bdev1",
00:23:36.209    "uuid": "c7481e34-5735-473a-8d20-a0fef111cb51",
00:23:36.209    "strip_size_kb": 64,
00:23:36.209    "state": "online",
00:23:36.209    "raid_level": "raid5f",
00:23:36.209    "superblock": true,
00:23:36.209    "num_base_bdevs": 4,
00:23:36.209    "num_base_bdevs_discovered": 4,
00:23:36.209    "num_base_bdevs_operational": 4,
00:23:36.209    "base_bdevs_list": [
00:23:36.209      {
00:23:36.209        "name": "pt1",
00:23:36.209        "uuid": "0a1245de-b19c-5bea-a801-ad1732659b7b",
00:23:36.209        "is_configured": true,
00:23:36.209        "data_offset": 2048,
00:23:36.209        "data_size": 63488
00:23:36.209      },
00:23:36.209      {
00:23:36.209        "name": "pt2",
00:23:36.209        "uuid": "b60870b5-3bc9-5255-97ea-623d896a0ab3",
00:23:36.209        "is_configured": true,
00:23:36.209        "data_offset": 2048,
00:23:36.209        "data_size": 63488
00:23:36.209      },
00:23:36.209      {
00:23:36.209        "name": "pt3",
00:23:36.209        "uuid": "18f47afa-e261-5cf3-954c-1b89f6485437",
00:23:36.209        "is_configured": true,
00:23:36.209        "data_offset": 2048,
00:23:36.209        "data_size": 63488
00:23:36.209      },
00:23:36.209      {
00:23:36.209        "name": "pt4",
00:23:36.209        "uuid": "b2d40256-66f7-546f-896e-652e5d7aa5f4",
00:23:36.209        "is_configured": true,
00:23:36.209        "data_offset": 2048,
00:23:36.209        "data_size": 63488
00:23:36.209      }
00:23:36.209    ]
00:23:36.209  }'
00:23:36.209   17:07:28	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:23:36.209   17:07:28	-- common/autotest_common.sh@10 -- # set +x
00:23:36.777    17:07:29	-- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid'
00:23:36.777    17:07:29	-- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1
00:23:37.035  [2024-11-19 17:07:29.705179] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:23:37.035   17:07:29	-- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=c7481e34-5735-473a-8d20-a0fef111cb51
00:23:37.035   17:07:29	-- bdev/bdev_raid.sh@380 -- # '[' -z c7481e34-5735-473a-8d20-a0fef111cb51 ']'
00:23:37.035   17:07:29	-- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1
00:23:37.293  [2024-11-19 17:07:29.969037] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:23:37.293  [2024-11-19 17:07:29.969351] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:23:37.294  [2024-11-19 17:07:29.969635] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:23:37.294  [2024-11-19 17:07:29.969843] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:23:37.294  [2024-11-19 17:07:29.969928] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008480 name raid_bdev1, state offline
00:23:37.294    17:07:29	-- bdev/bdev_raid.sh@386 -- # jq -r '.[]'
00:23:37.294    17:07:29	-- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:23:37.552   17:07:30	-- bdev/bdev_raid.sh@386 -- # raid_bdev=
00:23:37.552   17:07:30	-- bdev/bdev_raid.sh@387 -- # '[' -n '' ']'
00:23:37.552   17:07:30	-- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}"
00:23:37.552   17:07:30	-- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1
00:23:37.810   17:07:30	-- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}"
00:23:37.810   17:07:30	-- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2
00:23:38.068   17:07:30	-- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}"
00:23:38.068   17:07:30	-- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3
00:23:38.068   17:07:30	-- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}"
00:23:38.068   17:07:30	-- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4
00:23:38.326    17:07:31	-- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any'
00:23:38.326    17:07:31	-- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs
00:23:38.584   17:07:31	-- bdev/bdev_raid.sh@395 -- # '[' false == true ']'
00:23:38.584   17:07:31	-- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1
00:23:38.584   17:07:31	-- common/autotest_common.sh@650 -- # local es=0
00:23:38.584   17:07:31	-- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1
00:23:38.584   17:07:31	-- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:23:38.584   17:07:31	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:23:38.584    17:07:31	-- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:23:38.584   17:07:31	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:23:38.584    17:07:31	-- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:23:38.584   17:07:31	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:23:38.584   17:07:31	-- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:23:38.585   17:07:31	-- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]]
00:23:38.585   17:07:31	-- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1
00:23:38.841  [2024-11-19 17:07:31.521327] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed
00:23:38.841  [2024-11-19 17:07:31.523854] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
00:23:38.841  [2024-11-19 17:07:31.524046] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed
00:23:38.841  [2024-11-19 17:07:31.524112] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed
00:23:38.841  [2024-11-19 17:07:31.524238] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1
00:23:38.841  [2024-11-19 17:07:31.524430] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2
00:23:38.841  [2024-11-19 17:07:31.524549] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3
00:23:38.841  [2024-11-19 17:07:31.524629] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc4
00:23:38.841  [2024-11-19 17:07:31.524764] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:23:38.841  [2024-11-19 17:07:31.524803] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008a80 name raid_bdev1, state configuring
00:23:38.841  request:
00:23:38.841  {
00:23:38.841    "name": "raid_bdev1",
00:23:38.841    "raid_level": "raid5f",
00:23:38.841    "base_bdevs": [
00:23:38.841      "malloc1",
00:23:38.841      "malloc2",
00:23:38.841      "malloc3",
00:23:38.841      "malloc4"
00:23:38.841    ],
00:23:38.841    "superblock": false,
00:23:38.841    "strip_size_kb": 64,
00:23:38.841    "method": "bdev_raid_create",
00:23:38.841    "req_id": 1
00:23:38.841  }
00:23:38.841  Got JSON-RPC error response
00:23:38.841  response:
00:23:38.841  {
00:23:38.841    "code": -17,
00:23:38.841    "message": "Failed to create RAID bdev raid_bdev1: File exists"
00:23:38.841  }
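This is a deliberate negative test. The malloc bdevs still carry raid superblocks from the earlier array, so bdev_raid_create claims all four, hits the existing metadata, rolls the half-built bdev back, and returns -17 ("File exists"). The NOT helper from autotest_common.sh inverts the exit status, so the RPC failing is exactly what lets the test pass; in plain bash the same assertion reads:

    if ! ./scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create \
            -z 64 -r raid5f -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1; then
        echo 'create rejected as expected: stale superblocks present'
    fi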
00:23:38.841   17:07:31	-- common/autotest_common.sh@653 -- # es=1
00:23:38.841   17:07:31	-- common/autotest_common.sh@661 -- # (( es > 128 ))
00:23:38.841   17:07:31	-- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:23:38.841   17:07:31	-- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:23:38.841    17:07:31	-- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:23:38.841    17:07:31	-- bdev/bdev_raid.sh@403 -- # jq -r '.[]'
00:23:39.100   17:07:31	-- bdev/bdev_raid.sh@403 -- # raid_bdev=
00:23:39.100   17:07:31	-- bdev/bdev_raid.sh@404 -- # '[' -n '' ']'
00:23:39.100   17:07:31	-- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:23:39.100  [2024-11-19 17:07:31.933374] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:23:39.100  [2024-11-19 17:07:31.933683] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:23:39.100  [2024-11-19 17:07:31.933763] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080
00:23:39.100  [2024-11-19 17:07:31.933864] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:23:39.100  [2024-11-19 17:07:31.936449] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:23:39.100  [2024-11-19 17:07:31.936663] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:23:39.100  [2024-11-19 17:07:31.936836] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1
00:23:39.100  [2024-11-19 17:07:31.937014] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:23:39.100  pt1
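bdev_passthru_create layers pt1 over malloc1; because malloc1 still holds a raid superblock, the examine path spots it immediately and claims pt1 into a fresh configuring raid_bdev1, with no explicit create call needed. The call shape, as issued above:

    ./scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create \
        -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001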
00:23:39.100   17:07:31	-- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4
00:23:39.100   17:07:31	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:23:39.100   17:07:31	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:23:39.100   17:07:31	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f
00:23:39.100   17:07:31	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:23:39.100   17:07:31	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:23:39.100   17:07:31	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:23:39.100   17:07:31	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:23:39.359   17:07:31	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:23:39.359   17:07:31	-- bdev/bdev_raid.sh@125 -- # local tmp
00:23:39.359    17:07:31	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:23:39.359    17:07:31	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:23:39.618   17:07:32	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:23:39.618    "name": "raid_bdev1",
00:23:39.618    "uuid": "c7481e34-5735-473a-8d20-a0fef111cb51",
00:23:39.618    "strip_size_kb": 64,
00:23:39.618    "state": "configuring",
00:23:39.618    "raid_level": "raid5f",
00:23:39.618    "superblock": true,
00:23:39.618    "num_base_bdevs": 4,
00:23:39.618    "num_base_bdevs_discovered": 1,
00:23:39.618    "num_base_bdevs_operational": 4,
00:23:39.618    "base_bdevs_list": [
00:23:39.618      {
00:23:39.618        "name": "pt1",
00:23:39.618        "uuid": "0a1245de-b19c-5bea-a801-ad1732659b7b",
00:23:39.618        "is_configured": true,
00:23:39.618        "data_offset": 2048,
00:23:39.618        "data_size": 63488
00:23:39.618      },
00:23:39.618      {
00:23:39.618        "name": null,
00:23:39.618        "uuid": "b60870b5-3bc9-5255-97ea-623d896a0ab3",
00:23:39.618        "is_configured": false,
00:23:39.618        "data_offset": 2048,
00:23:39.618        "data_size": 63488
00:23:39.618      },
00:23:39.618      {
00:23:39.618        "name": null,
00:23:39.618        "uuid": "18f47afa-e261-5cf3-954c-1b89f6485437",
00:23:39.618        "is_configured": false,
00:23:39.618        "data_offset": 2048,
00:23:39.618        "data_size": 63488
00:23:39.618      },
00:23:39.618      {
00:23:39.618        "name": null,
00:23:39.618        "uuid": "b2d40256-66f7-546f-896e-652e5d7aa5f4",
00:23:39.618        "is_configured": false,
00:23:39.618        "data_offset": 2048,
00:23:39.618        "data_size": 63488
00:23:39.618      }
00:23:39.618    ]
00:23:39.618  }'
00:23:39.618   17:07:32	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:23:39.618   17:07:32	-- common/autotest_common.sh@10 -- # set +x
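verify_raid_bdev_state fetches every raid bdev with bdev_raid_get_bdevs all, jq-selects the record by name, and compares it against the expected state, level, strip size, and operational count passed in. The exact comparisons inside the helper are not visible in this trace; a sketch of the extraction step, using field names from the JSON dump above:

    tmp=$(./scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all |
        jq -r '.[] | select(.name == "raid_bdev1")')
    state=$(jq -r '.state' <<< "$tmp")                           # "configuring" here
    discovered=$(jq -r '.num_base_bdevs_discovered' <<< "$tmp")  # 1 of 4 so far
    [ "$state" = configuring ] && [ "$discovered" -eq 1 ]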
00:23:40.185   17:07:32	-- bdev/bdev_raid.sh@414 -- # '[' 4 -gt 2 ']'
00:23:40.185   17:07:32	-- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:23:40.445  [2024-11-19 17:07:33.161645] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:23:40.445  [2024-11-19 17:07:33.161998] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:23:40.445  [2024-11-19 17:07:33.162093] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980
00:23:40.445  [2024-11-19 17:07:33.162193] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:23:40.445  [2024-11-19 17:07:33.162716] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:23:40.445  [2024-11-19 17:07:33.162888] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:23:40.445  [2024-11-19 17:07:33.163073] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2
00:23:40.445  [2024-11-19 17:07:33.163172] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:23:40.445  pt2
00:23:40.445   17:07:33	-- bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2
00:23:40.704  [2024-11-19 17:07:33.389689] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2
00:23:40.704   17:07:33	-- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4
00:23:40.704   17:07:33	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:23:40.704   17:07:33	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:23:40.704   17:07:33	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f
00:23:40.704   17:07:33	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:23:40.704   17:07:33	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:23:40.704   17:07:33	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:23:40.705   17:07:33	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:23:40.705   17:07:33	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:23:40.705   17:07:33	-- bdev/bdev_raid.sh@125 -- # local tmp
00:23:40.705    17:07:33	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:23:40.705    17:07:33	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:23:40.998   17:07:33	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:23:40.998    "name": "raid_bdev1",
00:23:40.998    "uuid": "c7481e34-5735-473a-8d20-a0fef111cb51",
00:23:40.998    "strip_size_kb": 64,
00:23:40.998    "state": "configuring",
00:23:40.998    "raid_level": "raid5f",
00:23:40.998    "superblock": true,
00:23:40.998    "num_base_bdevs": 4,
00:23:40.998    "num_base_bdevs_discovered": 1,
00:23:40.998    "num_base_bdevs_operational": 4,
00:23:40.998    "base_bdevs_list": [
00:23:40.998      {
00:23:40.998        "name": "pt1",
00:23:40.998        "uuid": "0a1245de-b19c-5bea-a801-ad1732659b7b",
00:23:40.998        "is_configured": true,
00:23:40.998        "data_offset": 2048,
00:23:40.998        "data_size": 63488
00:23:40.998      },
00:23:40.998      {
00:23:40.998        "name": null,
00:23:40.998        "uuid": "b60870b5-3bc9-5255-97ea-623d896a0ab3",
00:23:40.998        "is_configured": false,
00:23:40.998        "data_offset": 2048,
00:23:40.998        "data_size": 63488
00:23:40.998      },
00:23:40.998      {
00:23:40.998        "name": null,
00:23:40.998        "uuid": "18f47afa-e261-5cf3-954c-1b89f6485437",
00:23:40.998        "is_configured": false,
00:23:40.998        "data_offset": 2048,
00:23:40.998        "data_size": 63488
00:23:40.998      },
00:23:40.998      {
00:23:40.998        "name": null,
00:23:40.998        "uuid": "b2d40256-66f7-546f-896e-652e5d7aa5f4",
00:23:40.998        "is_configured": false,
00:23:40.998        "data_offset": 2048,
00:23:40.998        "data_size": 63488
00:23:40.998      }
00:23:40.998    ]
00:23:40.998  }'
00:23:40.999   17:07:33	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:23:40.999   17:07:33	-- common/autotest_common.sh@10 -- # set +x
00:23:41.576   17:07:34	-- bdev/bdev_raid.sh@422 -- # (( i = 1 ))
00:23:41.576   17:07:34	-- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs ))
00:23:41.576   17:07:34	-- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:23:41.835  [2024-11-19 17:07:34.573933] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:23:41.835  [2024-11-19 17:07:34.574242] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:23:41.835  [2024-11-19 17:07:34.574322] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80
00:23:41.835  [2024-11-19 17:07:34.574423] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:23:41.835  [2024-11-19 17:07:34.574915] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:23:41.835  [2024-11-19 17:07:34.575089] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:23:41.835  [2024-11-19 17:07:34.575256] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2
00:23:41.835  [2024-11-19 17:07:34.575357] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:23:41.835  pt2
00:23:41.835   17:07:34	-- bdev/bdev_raid.sh@422 -- # (( i++ ))
00:23:41.835   17:07:34	-- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs ))
00:23:41.835   17:07:34	-- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:23:42.094  [2024-11-19 17:07:34.777998] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:23:42.095  [2024-11-19 17:07:34.778359] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:23:42.095  [2024-11-19 17:07:34.778438] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80
00:23:42.095  [2024-11-19 17:07:34.778586] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:23:42.095  [2024-11-19 17:07:34.779076] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:23:42.095  [2024-11-19 17:07:34.779246] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:23:42.095  [2024-11-19 17:07:34.779405] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3
00:23:42.095  [2024-11-19 17:07:34.779501] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:23:42.095  pt3
00:23:42.095   17:07:34	-- bdev/bdev_raid.sh@422 -- # (( i++ ))
00:23:42.095   17:07:34	-- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs ))
00:23:42.095   17:07:34	-- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004
00:23:42.354  [2024-11-19 17:07:35.022047] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4
00:23:42.354  [2024-11-19 17:07:35.022398] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:23:42.354  [2024-11-19 17:07:35.022470] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280
00:23:42.354  [2024-11-19 17:07:35.022574] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:23:42.354  [2024-11-19 17:07:35.023104] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:23:42.354  [2024-11-19 17:07:35.023288] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4
00:23:42.354  [2024-11-19 17:07:35.023460] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4
00:23:42.354  [2024-11-19 17:07:35.023562] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed
00:23:42.354  [2024-11-19 17:07:35.023823] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009680
00:23:42.354  [2024-11-19 17:07:35.023935] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512
00:23:42.354  [2024-11-19 17:07:35.024050] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002940
00:23:42.354  [2024-11-19 17:07:35.024863] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009680
00:23:42.354  [2024-11-19 17:07:35.024994] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009680
00:23:42.354  [2024-11-19 17:07:35.025203] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:23:42.354  pt4
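Registering pt4 completes the member set, so the raid module assembles raid_bdev1 on its own and brings it online without a second bdev_raid_create. The reported geometry is consistent with raid5f reserving one member's worth of parity per stripe: each base bdev exposes 63488 data blocks, and 63488 × (4 − 1) = 190464 blocks of 512 B, exactly the blockcnt logged at configure time.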
00:23:42.354   17:07:35	-- bdev/bdev_raid.sh@422 -- # (( i++ ))
00:23:42.354   17:07:35	-- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs ))
00:23:42.354   17:07:35	-- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4
00:23:42.354   17:07:35	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:23:42.354   17:07:35	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:23:42.354   17:07:35	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f
00:23:42.354   17:07:35	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:23:42.354   17:07:35	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:23:42.354   17:07:35	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:23:42.354   17:07:35	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:23:42.354   17:07:35	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:23:42.354   17:07:35	-- bdev/bdev_raid.sh@125 -- # local tmp
00:23:42.354    17:07:35	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:23:42.354    17:07:35	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:23:42.612   17:07:35	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:23:42.612    "name": "raid_bdev1",
00:23:42.612    "uuid": "c7481e34-5735-473a-8d20-a0fef111cb51",
00:23:42.612    "strip_size_kb": 64,
00:23:42.612    "state": "online",
00:23:42.612    "raid_level": "raid5f",
00:23:42.612    "superblock": true,
00:23:42.612    "num_base_bdevs": 4,
00:23:42.612    "num_base_bdevs_discovered": 4,
00:23:42.612    "num_base_bdevs_operational": 4,
00:23:42.612    "base_bdevs_list": [
00:23:42.612      {
00:23:42.612        "name": "pt1",
00:23:42.612        "uuid": "0a1245de-b19c-5bea-a801-ad1732659b7b",
00:23:42.612        "is_configured": true,
00:23:42.612        "data_offset": 2048,
00:23:42.612        "data_size": 63488
00:23:42.612      },
00:23:42.612      {
00:23:42.612        "name": "pt2",
00:23:42.612        "uuid": "b60870b5-3bc9-5255-97ea-623d896a0ab3",
00:23:42.612        "is_configured": true,
00:23:42.612        "data_offset": 2048,
00:23:42.612        "data_size": 63488
00:23:42.612      },
00:23:42.612      {
00:23:42.612        "name": "pt3",
00:23:42.612        "uuid": "18f47afa-e261-5cf3-954c-1b89f6485437",
00:23:42.612        "is_configured": true,
00:23:42.612        "data_offset": 2048,
00:23:42.612        "data_size": 63488
00:23:42.612      },
00:23:42.612      {
00:23:42.612        "name": "pt4",
00:23:42.612        "uuid": "b2d40256-66f7-546f-896e-652e5d7aa5f4",
00:23:42.612        "is_configured": true,
00:23:42.612        "data_offset": 2048,
00:23:42.612        "data_size": 63488
00:23:42.612      }
00:23:42.612    ]
00:23:42.613  }'
00:23:42.613   17:07:35	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:23:42.613   17:07:35	-- common/autotest_common.sh@10 -- # set +x
00:23:43.179    17:07:35	-- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1
00:23:43.179    17:07:35	-- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid'
00:23:43.436  [2024-11-19 17:07:36.059425] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:23:43.436   17:07:36	-- bdev/bdev_raid.sh@430 -- # '[' c7481e34-5735-473a-8d20-a0fef111cb51 '!=' c7481e34-5735-473a-8d20-a0fef111cb51 ']'
00:23:43.436   17:07:36	-- bdev/bdev_raid.sh@434 -- # has_redundancy raid5f
00:23:43.436   17:07:36	-- bdev/bdev_raid.sh@195 -- # case $1 in
00:23:43.436   17:07:36	-- bdev/bdev_raid.sh@196 -- # return 0
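has_redundancy returns 0 for raid5f, which licenses the next step: removing a live member and expecting the array to stay online with 3 of 4 base bdevs. A plausible shape for the helper, inferred only from the case/return pair in the trace (the full list of levels it accepts is not shown here):

    has_redundancy() {
        case $1 in
            raid5f) return 0 ;;  # assumed: other redundant levels are likely listed too
            *) return 1 ;;
        esac
    }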
00:23:43.436   17:07:36	-- bdev/bdev_raid.sh@436 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1
00:23:43.437  [2024-11-19 17:07:36.263360] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt1
00:23:43.437   17:07:36	-- bdev/bdev_raid.sh@439 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3
00:23:43.437   17:07:36	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:23:43.437   17:07:36	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:23:43.437   17:07:36	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f
00:23:43.437   17:07:36	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:23:43.437   17:07:36	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:23:43.437   17:07:36	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:23:43.437   17:07:36	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:23:43.437   17:07:36	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:23:43.437   17:07:36	-- bdev/bdev_raid.sh@125 -- # local tmp
00:23:43.437    17:07:36	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:23:43.437    17:07:36	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:23:44.004   17:07:36	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:23:44.004    "name": "raid_bdev1",
00:23:44.004    "uuid": "c7481e34-5735-473a-8d20-a0fef111cb51",
00:23:44.004    "strip_size_kb": 64,
00:23:44.004    "state": "online",
00:23:44.004    "raid_level": "raid5f",
00:23:44.004    "superblock": true,
00:23:44.004    "num_base_bdevs": 4,
00:23:44.004    "num_base_bdevs_discovered": 3,
00:23:44.004    "num_base_bdevs_operational": 3,
00:23:44.004    "base_bdevs_list": [
00:23:44.004      {
00:23:44.004        "name": null,
00:23:44.004        "uuid": "00000000-0000-0000-0000-000000000000",
00:23:44.004        "is_configured": false,
00:23:44.004        "data_offset": 2048,
00:23:44.004        "data_size": 63488
00:23:44.004      },
00:23:44.004      {
00:23:44.004        "name": "pt2",
00:23:44.004        "uuid": "b60870b5-3bc9-5255-97ea-623d896a0ab3",
00:23:44.004        "is_configured": true,
00:23:44.004        "data_offset": 2048,
00:23:44.004        "data_size": 63488
00:23:44.004      },
00:23:44.004      {
00:23:44.004        "name": "pt3",
00:23:44.004        "uuid": "18f47afa-e261-5cf3-954c-1b89f6485437",
00:23:44.004        "is_configured": true,
00:23:44.004        "data_offset": 2048,
00:23:44.004        "data_size": 63488
00:23:44.004      },
00:23:44.004      {
00:23:44.004        "name": "pt4",
00:23:44.004        "uuid": "b2d40256-66f7-546f-896e-652e5d7aa5f4",
00:23:44.004        "is_configured": true,
00:23:44.004        "data_offset": 2048,
00:23:44.004        "data_size": 63488
00:23:44.004      }
00:23:44.004    ]
00:23:44.004  }'
00:23:44.004   17:07:36	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:23:44.004   17:07:36	-- common/autotest_common.sh@10 -- # set +x
00:23:44.570   17:07:37	-- bdev/bdev_raid.sh@442 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1
00:23:44.570  [2024-11-19 17:07:37.311550] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:23:44.570  [2024-11-19 17:07:37.311812] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:23:44.570  [2024-11-19 17:07:37.311974] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:23:44.570  [2024-11-19 17:07:37.312087] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:23:44.570  [2024-11-19 17:07:37.312188] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009680 name raid_bdev1, state offline
00:23:44.570    17:07:37	-- bdev/bdev_raid.sh@443 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:23:44.570    17:07:37	-- bdev/bdev_raid.sh@443 -- # jq -r '.[]'
00:23:44.828   17:07:37	-- bdev/bdev_raid.sh@443 -- # raid_bdev=
00:23:44.828   17:07:37	-- bdev/bdev_raid.sh@444 -- # '[' -n '' ']'
00:23:44.828   17:07:37	-- bdev/bdev_raid.sh@449 -- # (( i = 1 ))
00:23:44.828   17:07:37	-- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs ))
00:23:44.828   17:07:37	-- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2
00:23:45.091   17:07:37	-- bdev/bdev_raid.sh@449 -- # (( i++ ))
00:23:45.091   17:07:37	-- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs ))
00:23:45.091   17:07:37	-- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3
00:23:45.359   17:07:38	-- bdev/bdev_raid.sh@449 -- # (( i++ ))
00:23:45.359   17:07:38	-- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs ))
00:23:45.359   17:07:38	-- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4
00:23:45.619   17:07:38	-- bdev/bdev_raid.sh@449 -- # (( i++ ))
00:23:45.619   17:07:38	-- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs ))
00:23:45.619   17:07:38	-- bdev/bdev_raid.sh@454 -- # (( i = 1 ))
00:23:45.619   17:07:38	-- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 ))
00:23:45.619   17:07:38	-- bdev/bdev_raid.sh@455 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:23:45.619  [2024-11-19 17:07:38.403348] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:23:45.619  [2024-11-19 17:07:38.403746] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:23:45.619  [2024-11-19 17:07:38.403827] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580
00:23:45.619  [2024-11-19 17:07:38.403950] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:23:45.619  [2024-11-19 17:07:38.406583] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:23:45.619  [2024-11-19 17:07:38.406805] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:23:45.619  [2024-11-19 17:07:38.407019] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2
00:23:45.619  [2024-11-19 17:07:38.407146] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:23:45.619  pt2
00:23:45.619   17:07:38	-- bdev/bdev_raid.sh@458 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3
00:23:45.619   17:07:38	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:23:45.619   17:07:38	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:23:45.619   17:07:38	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f
00:23:45.619   17:07:38	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:23:45.619   17:07:38	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:23:45.619   17:07:38	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:23:45.619   17:07:38	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:23:45.619   17:07:38	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:23:45.619   17:07:38	-- bdev/bdev_raid.sh@125 -- # local tmp
00:23:45.619    17:07:38	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:23:45.619    17:07:38	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:23:45.878   17:07:38	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:23:45.878    "name": "raid_bdev1",
00:23:45.878    "uuid": "c7481e34-5735-473a-8d20-a0fef111cb51",
00:23:45.878    "strip_size_kb": 64,
00:23:45.878    "state": "configuring",
00:23:45.878    "raid_level": "raid5f",
00:23:45.878    "superblock": true,
00:23:45.878    "num_base_bdevs": 4,
00:23:45.878    "num_base_bdevs_discovered": 1,
00:23:45.878    "num_base_bdevs_operational": 3,
00:23:45.878    "base_bdevs_list": [
00:23:45.878      {
00:23:45.878        "name": null,
00:23:45.878        "uuid": "00000000-0000-0000-0000-000000000000",
00:23:45.878        "is_configured": false,
00:23:45.878        "data_offset": 2048,
00:23:45.878        "data_size": 63488
00:23:45.878      },
00:23:45.878      {
00:23:45.878        "name": "pt2",
00:23:45.878        "uuid": "b60870b5-3bc9-5255-97ea-623d896a0ab3",
00:23:45.878        "is_configured": true,
00:23:45.878        "data_offset": 2048,
00:23:45.878        "data_size": 63488
00:23:45.878      },
00:23:45.878      {
00:23:45.878        "name": null,
00:23:45.878        "uuid": "18f47afa-e261-5cf3-954c-1b89f6485437",
00:23:45.878        "is_configured": false,
00:23:45.878        "data_offset": 2048,
00:23:45.878        "data_size": 63488
00:23:45.878      },
00:23:45.878      {
00:23:45.878        "name": null,
00:23:45.878        "uuid": "b2d40256-66f7-546f-896e-652e5d7aa5f4",
00:23:45.878        "is_configured": false,
00:23:45.878        "data_offset": 2048,
00:23:45.878        "data_size": 63488
00:23:45.878      }
00:23:45.878    ]
00:23:45.878  }'
00:23:45.878   17:07:38	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:23:45.878   17:07:38	-- common/autotest_common.sh@10 -- # set +x
00:23:46.449   17:07:39	-- bdev/bdev_raid.sh@454 -- # (( i++ ))
00:23:46.449   17:07:39	-- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 ))
00:23:46.449   17:07:39	-- bdev/bdev_raid.sh@455 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:23:46.710  [2024-11-19 17:07:39.443643] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:23:46.710  [2024-11-19 17:07:39.444085] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:23:46.710  [2024-11-19 17:07:39.444199] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80
00:23:46.710  [2024-11-19 17:07:39.444478] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:23:46.710  [2024-11-19 17:07:39.445095] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:23:46.710  [2024-11-19 17:07:39.445302] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:23:46.710  [2024-11-19 17:07:39.445553] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3
00:23:46.710  [2024-11-19 17:07:39.445714] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:23:46.710  pt3
00:23:46.710   17:07:39	-- bdev/bdev_raid.sh@458 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3
00:23:46.710   17:07:39	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:23:46.710   17:07:39	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:23:46.710   17:07:39	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f
00:23:46.710   17:07:39	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:23:46.710   17:07:39	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:23:46.710   17:07:39	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:23:46.710   17:07:39	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:23:46.710   17:07:39	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:23:46.710   17:07:39	-- bdev/bdev_raid.sh@125 -- # local tmp
00:23:46.710    17:07:39	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:23:46.710    17:07:39	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:23:46.969   17:07:39	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:23:46.969    "name": "raid_bdev1",
00:23:46.969    "uuid": "c7481e34-5735-473a-8d20-a0fef111cb51",
00:23:46.969    "strip_size_kb": 64,
00:23:46.969    "state": "configuring",
00:23:46.969    "raid_level": "raid5f",
00:23:46.969    "superblock": true,
00:23:46.969    "num_base_bdevs": 4,
00:23:46.969    "num_base_bdevs_discovered": 2,
00:23:46.969    "num_base_bdevs_operational": 3,
00:23:46.969    "base_bdevs_list": [
00:23:46.969      {
00:23:46.969        "name": null,
00:23:46.969        "uuid": "00000000-0000-0000-0000-000000000000",
00:23:46.969        "is_configured": false,
00:23:46.969        "data_offset": 2048,
00:23:46.969        "data_size": 63488
00:23:46.969      },
00:23:46.969      {
00:23:46.969        "name": "pt2",
00:23:46.969        "uuid": "b60870b5-3bc9-5255-97ea-623d896a0ab3",
00:23:46.969        "is_configured": true,
00:23:46.969        "data_offset": 2048,
00:23:46.969        "data_size": 63488
00:23:46.969      },
00:23:46.969      {
00:23:46.969        "name": "pt3",
00:23:46.969        "uuid": "18f47afa-e261-5cf3-954c-1b89f6485437",
00:23:46.969        "is_configured": true,
00:23:46.969        "data_offset": 2048,
00:23:46.969        "data_size": 63488
00:23:46.969      },
00:23:46.969      {
00:23:46.969        "name": null,
00:23:46.969        "uuid": "b2d40256-66f7-546f-896e-652e5d7aa5f4",
00:23:46.969        "is_configured": false,
00:23:46.969        "data_offset": 2048,
00:23:46.969        "data_size": 63488
00:23:46.969      }
00:23:46.969    ]
00:23:46.969  }'
00:23:46.969   17:07:39	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:23:46.969   17:07:39	-- common/autotest_common.sh@10 -- # set +x
00:23:47.537   17:07:40	-- bdev/bdev_raid.sh@454 -- # (( i++ ))
00:23:47.537   17:07:40	-- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 ))
00:23:47.537   17:07:40	-- bdev/bdev_raid.sh@462 -- # i=3
00:23:47.537   17:07:40	-- bdev/bdev_raid.sh@463 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004
00:23:47.801  [2024-11-19 17:07:40.627856] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4
00:23:47.801  [2024-11-19 17:07:40.628179] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:23:47.801  [2024-11-19 17:07:40.628260] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180
00:23:47.801  [2024-11-19 17:07:40.628373] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:23:47.801  [2024-11-19 17:07:40.628917] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:23:47.801  [2024-11-19 17:07:40.629126] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4
00:23:47.802  [2024-11-19 17:07:40.629329] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4
00:23:47.802  [2024-11-19 17:07:40.629441] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed
00:23:47.802  [2024-11-19 17:07:40.629618] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000ab80
00:23:47.802  [2024-11-19 17:07:40.629792] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512
00:23:47.802  [2024-11-19 17:07:40.629903] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002c80
00:23:47.802  [2024-11-19 17:07:40.630821] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000ab80
00:23:47.802  [2024-11-19 17:07:40.630977] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000ab80
00:23:47.802  [2024-11-19 17:07:40.631325] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:23:47.802  pt4
00:23:48.061   17:07:40	-- bdev/bdev_raid.sh@466 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3
00:23:48.061   17:07:40	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:23:48.061   17:07:40	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:23:48.061   17:07:40	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f
00:23:48.061   17:07:40	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:23:48.061   17:07:40	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:23:48.061   17:07:40	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:23:48.061   17:07:40	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:23:48.061   17:07:40	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:23:48.061   17:07:40	-- bdev/bdev_raid.sh@125 -- # local tmp
00:23:48.061    17:07:40	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:23:48.061    17:07:40	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:23:48.320   17:07:40	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:23:48.320    "name": "raid_bdev1",
00:23:48.320    "uuid": "c7481e34-5735-473a-8d20-a0fef111cb51",
00:23:48.320    "strip_size_kb": 64,
00:23:48.320    "state": "online",
00:23:48.320    "raid_level": "raid5f",
00:23:48.320    "superblock": true,
00:23:48.320    "num_base_bdevs": 4,
00:23:48.320    "num_base_bdevs_discovered": 3,
00:23:48.320    "num_base_bdevs_operational": 3,
00:23:48.320    "base_bdevs_list": [
00:23:48.320      {
00:23:48.320        "name": null,
00:23:48.320        "uuid": "00000000-0000-0000-0000-000000000000",
00:23:48.320        "is_configured": false,
00:23:48.320        "data_offset": 2048,
00:23:48.320        "data_size": 63488
00:23:48.320      },
00:23:48.320      {
00:23:48.320        "name": "pt2",
00:23:48.320        "uuid": "b60870b5-3bc9-5255-97ea-623d896a0ab3",
00:23:48.320        "is_configured": true,
00:23:48.320        "data_offset": 2048,
00:23:48.320        "data_size": 63488
00:23:48.320      },
00:23:48.320      {
00:23:48.320        "name": "pt3",
00:23:48.320        "uuid": "18f47afa-e261-5cf3-954c-1b89f6485437",
00:23:48.320        "is_configured": true,
00:23:48.320        "data_offset": 2048,
00:23:48.320        "data_size": 63488
00:23:48.320      },
00:23:48.320      {
00:23:48.320        "name": "pt4",
00:23:48.320        "uuid": "b2d40256-66f7-546f-896e-652e5d7aa5f4",
00:23:48.320        "is_configured": true,
00:23:48.320        "data_offset": 2048,
00:23:48.320        "data_size": 63488
00:23:48.320      }
00:23:48.320    ]
00:23:48.320  }'
00:23:48.320   17:07:40	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:23:48.320   17:07:40	-- common/autotest_common.sh@10 -- # set +x
00:23:48.886   17:07:41	-- bdev/bdev_raid.sh@468 -- # '[' 4 -gt 2 ']'
00:23:48.886   17:07:41	-- bdev/bdev_raid.sh@470 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1
00:23:49.144  [2024-11-19 17:07:41.785496] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:23:49.144  [2024-11-19 17:07:41.785797] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:23:49.144  [2024-11-19 17:07:41.786016] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:23:49.144  [2024-11-19 17:07:41.786135] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:23:49.144  [2024-11-19 17:07:41.786353] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ab80 name raid_bdev1, state offline
00:23:49.144    17:07:41	-- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:23:49.144    17:07:41	-- bdev/bdev_raid.sh@471 -- # jq -r '.[]'
00:23:49.402   17:07:42	-- bdev/bdev_raid.sh@471 -- # raid_bdev=
00:23:49.402   17:07:42	-- bdev/bdev_raid.sh@472 -- # '[' -n '' ']'
00:23:49.402   17:07:42	-- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:23:49.660  [2024-11-19 17:07:42.277608] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:23:49.660  [2024-11-19 17:07:42.277962] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:23:49.660  [2024-11-19 17:07:42.278064] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480
00:23:49.660  [2024-11-19 17:07:42.278342] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:23:49.660  [2024-11-19 17:07:42.281534] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:23:49.660  [2024-11-19 17:07:42.281785] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:23:49.660  [2024-11-19 17:07:42.281993] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1
00:23:49.660  [2024-11-19 17:07:42.282147] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:23:49.660  pt1
00:23:49.660   17:07:42	-- bdev/bdev_raid.sh@481 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4
00:23:49.660   17:07:42	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:23:49.660   17:07:42	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:23:49.660   17:07:42	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f
00:23:49.660   17:07:42	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:23:49.661   17:07:42	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:23:49.661   17:07:42	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:23:49.661   17:07:42	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:23:49.661   17:07:42	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:23:49.661   17:07:42	-- bdev/bdev_raid.sh@125 -- # local tmp
00:23:49.661    17:07:42	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:23:49.661    17:07:42	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:23:49.661   17:07:42	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:23:49.661    "name": "raid_bdev1",
00:23:49.661    "uuid": "c7481e34-5735-473a-8d20-a0fef111cb51",
00:23:49.661    "strip_size_kb": 64,
00:23:49.661    "state": "configuring",
00:23:49.661    "raid_level": "raid5f",
00:23:49.661    "superblock": true,
00:23:49.661    "num_base_bdevs": 4,
00:23:49.661    "num_base_bdevs_discovered": 1,
00:23:49.661    "num_base_bdevs_operational": 4,
00:23:49.661    "base_bdevs_list": [
00:23:49.661      {
00:23:49.661        "name": "pt1",
00:23:49.661        "uuid": "0a1245de-b19c-5bea-a801-ad1732659b7b",
00:23:49.661        "is_configured": true,
00:23:49.661        "data_offset": 2048,
00:23:49.661        "data_size": 63488
00:23:49.661      },
00:23:49.661      {
00:23:49.661        "name": null,
00:23:49.661        "uuid": "b60870b5-3bc9-5255-97ea-623d896a0ab3",
00:23:49.661        "is_configured": false,
00:23:49.661        "data_offset": 2048,
00:23:49.661        "data_size": 63488
00:23:49.661      },
00:23:49.661      {
00:23:49.661        "name": null,
00:23:49.661        "uuid": "18f47afa-e261-5cf3-954c-1b89f6485437",
00:23:49.661        "is_configured": false,
00:23:49.661        "data_offset": 2048,
00:23:49.661        "data_size": 63488
00:23:49.661      },
00:23:49.661      {
00:23:49.661        "name": null,
00:23:49.661        "uuid": "b2d40256-66f7-546f-896e-652e5d7aa5f4",
00:23:49.661        "is_configured": false,
00:23:49.661        "data_offset": 2048,
00:23:49.661        "data_size": 63488
00:23:49.661      }
00:23:49.661    ]
00:23:49.661  }'
00:23:49.661   17:07:42	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:23:49.661   17:07:42	-- common/autotest_common.sh@10 -- # set +x
00:23:50.596   17:07:43	-- bdev/bdev_raid.sh@484 -- # (( i = 1 ))
00:23:50.596   17:07:43	-- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs ))
00:23:50.596   17:07:43	-- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2
00:23:50.596   17:07:43	-- bdev/bdev_raid.sh@484 -- # (( i++ ))
00:23:50.596   17:07:43	-- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs ))
00:23:50.596   17:07:43	-- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3
00:23:50.855   17:07:43	-- bdev/bdev_raid.sh@484 -- # (( i++ ))
00:23:50.855   17:07:43	-- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs ))
00:23:50.855   17:07:43	-- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4
00:23:51.114   17:07:43	-- bdev/bdev_raid.sh@484 -- # (( i++ ))
00:23:51.114   17:07:43	-- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs ))
00:23:51.114   17:07:43	-- bdev/bdev_raid.sh@489 -- # i=3
00:23:51.114   17:07:43	-- bdev/bdev_raid.sh@490 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004
00:23:51.372  [2024-11-19 17:07:44.006609] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4
00:23:51.372  [2024-11-19 17:07:44.007044] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:23:51.372  [2024-11-19 17:07:44.007134] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80
00:23:51.372  [2024-11-19 17:07:44.007271] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:23:51.372  [2024-11-19 17:07:44.007824] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:23:51.372  [2024-11-19 17:07:44.008019] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4
00:23:51.372  [2024-11-19 17:07:44.008224] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4
00:23:51.372  [2024-11-19 17:07:44.008330] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt4 (4) greater than existing raid bdev raid_bdev1 (2)
00:23:51.372  [2024-11-19 17:07:44.008445] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:23:51.372  [2024-11-19 17:07:44.008524] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ba80 name raid_bdev1, state configuring
00:23:51.372  [2024-11-19 17:07:44.008673] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed
00:23:51.372  pt4
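Here examine has to arbitrate between two generations of metadata: pt4's superblock carries sequence number 4, newer than the 2 held by the half-assembled raid_bdev1, so the stale configuring array is deleted and rebuilt around the newer superblock before pt4 is claimed. The following verify accordingly expects state configuring with one bdev discovered and three operational, presumably reflecting the earlier removal of pt1 from the newer generation.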
00:23:51.372   17:07:44	-- bdev/bdev_raid.sh@494 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3
00:23:51.372   17:07:44	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:23:51.372   17:07:44	-- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:23:51.372   17:07:44	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f
00:23:51.372   17:07:44	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:23:51.372   17:07:44	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:23:51.372   17:07:44	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:23:51.373   17:07:44	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:23:51.373   17:07:44	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:23:51.373   17:07:44	-- bdev/bdev_raid.sh@125 -- # local tmp
00:23:51.373    17:07:44	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:23:51.373    17:07:44	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:23:51.631   17:07:44	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:23:51.631    "name": "raid_bdev1",
00:23:51.632    "uuid": "c7481e34-5735-473a-8d20-a0fef111cb51",
00:23:51.632    "strip_size_kb": 64,
00:23:51.632    "state": "configuring",
00:23:51.632    "raid_level": "raid5f",
00:23:51.632    "superblock": true,
00:23:51.632    "num_base_bdevs": 4,
00:23:51.632    "num_base_bdevs_discovered": 1,
00:23:51.632    "num_base_bdevs_operational": 3,
00:23:51.632    "base_bdevs_list": [
00:23:51.632      {
00:23:51.632        "name": null,
00:23:51.632        "uuid": "00000000-0000-0000-0000-000000000000",
00:23:51.632        "is_configured": false,
00:23:51.632        "data_offset": 2048,
00:23:51.632        "data_size": 63488
00:23:51.632      },
00:23:51.632      {
00:23:51.632        "name": null,
00:23:51.632        "uuid": "b60870b5-3bc9-5255-97ea-623d896a0ab3",
00:23:51.632        "is_configured": false,
00:23:51.632        "data_offset": 2048,
00:23:51.632        "data_size": 63488
00:23:51.632      },
00:23:51.632      {
00:23:51.632        "name": null,
00:23:51.632        "uuid": "18f47afa-e261-5cf3-954c-1b89f6485437",
00:23:51.632        "is_configured": false,
00:23:51.632        "data_offset": 2048,
00:23:51.632        "data_size": 63488
00:23:51.632      },
00:23:51.632      {
00:23:51.632        "name": "pt4",
00:23:51.632        "uuid": "b2d40256-66f7-546f-896e-652e5d7aa5f4",
00:23:51.632        "is_configured": true,
00:23:51.632        "data_offset": 2048,
00:23:51.632        "data_size": 63488
00:23:51.632      }
00:23:51.632    ]
00:23:51.632  }'
00:23:51.632   17:07:44	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:23:51.632   17:07:44	-- common/autotest_common.sh@10 -- # set +x
00:23:52.199   17:07:44	-- bdev/bdev_raid.sh@497 -- # (( i = 1 ))
00:23:52.199   17:07:44	-- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 ))
00:23:52.199   17:07:44	-- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:23:52.457  [2024-11-19 17:07:45.087369] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:23:52.457  [2024-11-19 17:07:45.088162] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:23:52.457  [2024-11-19 17:07:45.088252] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380
00:23:52.457  [2024-11-19 17:07:45.088375] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:23:52.457  [2024-11-19 17:07:45.089000] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:23:52.457  [2024-11-19 17:07:45.089213] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:23:52.457  [2024-11-19 17:07:45.089432] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2
00:23:52.457  [2024-11-19 17:07:45.089553] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:23:52.457  pt2
00:23:52.457   17:07:45	-- bdev/bdev_raid.sh@497 -- # (( i++ ))
00:23:52.457   17:07:45	-- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 ))
00:23:52.457   17:07:45	-- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:23:52.716  [2024-11-19 17:07:45.371439] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:23:52.716  [2024-11-19 17:07:45.371867] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:23:52.716  [2024-11-19 17:07:45.371949] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680
00:23:52.716  [2024-11-19 17:07:45.372065] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:23:52.716  [2024-11-19 17:07:45.372611] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:23:52.716  [2024-11-19 17:07:45.372798] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:23:52.716  [2024-11-19 17:07:45.373010] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3
00:23:52.716  [2024-11-19 17:07:45.373138] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:23:52.716  [2024-11-19 17:07:45.373411] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000c080
00:23:52.716  [2024-11-19 17:07:45.373529] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512
00:23:52.716  [2024-11-19 17:07:45.373666] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000003090
00:23:52.716  [2024-11-19 17:07:45.374634] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000c080
00:23:52.716  [2024-11-19 17:07:45.374778] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000c080
00:23:52.716  [2024-11-19 17:07:45.375130] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:23:52.716  pt3
00:23:52.716   17:07:45	-- bdev/bdev_raid.sh@497 -- # (( i++ ))
00:23:52.716   17:07:45	-- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 ))
00:23:52.716   17:07:45	-- bdev/bdev_raid.sh@502 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3
00:23:52.716   17:07:45	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:23:52.716   17:07:45	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:23:52.716   17:07:45	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f
00:23:52.716   17:07:45	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:23:52.716   17:07:45	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:23:52.716   17:07:45	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:23:52.716   17:07:45	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:23:52.716   17:07:45	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:23:52.716   17:07:45	-- bdev/bdev_raid.sh@125 -- # local tmp
00:23:52.716    17:07:45	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:23:52.716    17:07:45	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:23:52.975   17:07:45	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:23:52.975    "name": "raid_bdev1",
00:23:52.975    "uuid": "c7481e34-5735-473a-8d20-a0fef111cb51",
00:23:52.975    "strip_size_kb": 64,
00:23:52.975    "state": "online",
00:23:52.975    "raid_level": "raid5f",
00:23:52.975    "superblock": true,
00:23:52.975    "num_base_bdevs": 4,
00:23:52.975    "num_base_bdevs_discovered": 3,
00:23:52.975    "num_base_bdevs_operational": 3,
00:23:52.975    "base_bdevs_list": [
00:23:52.975      {
00:23:52.975        "name": null,
00:23:52.975        "uuid": "00000000-0000-0000-0000-000000000000",
00:23:52.975        "is_configured": false,
00:23:52.975        "data_offset": 2048,
00:23:52.975        "data_size": 63488
00:23:52.975      },
00:23:52.975      {
00:23:52.975        "name": "pt2",
00:23:52.975        "uuid": "b60870b5-3bc9-5255-97ea-623d896a0ab3",
00:23:52.975        "is_configured": true,
00:23:52.975        "data_offset": 2048,
00:23:52.975        "data_size": 63488
00:23:52.975      },
00:23:52.975      {
00:23:52.975        "name": "pt3",
00:23:52.975        "uuid": "18f47afa-e261-5cf3-954c-1b89f6485437",
00:23:52.975        "is_configured": true,
00:23:52.975        "data_offset": 2048,
00:23:52.975        "data_size": 63488
00:23:52.975      },
00:23:52.975      {
00:23:52.975        "name": "pt4",
00:23:52.975        "uuid": "b2d40256-66f7-546f-896e-652e5d7aa5f4",
00:23:52.975        "is_configured": true,
00:23:52.975        "data_offset": 2048,
00:23:52.975        "data_size": 63488
00:23:52.975      }
00:23:52.975    ]
00:23:52.975  }'
00:23:52.975   17:07:45	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:23:52.975   17:07:45	-- common/autotest_common.sh@10 -- # set +x
00:23:53.541    17:07:46	-- bdev/bdev_raid.sh@506 -- # jq -r '.[] | .uuid'
00:23:53.541    17:07:46	-- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1
00:23:53.800  [2024-11-19 17:07:46.491863] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:23:53.800   17:07:46	-- bdev/bdev_raid.sh@506 -- # '[' c7481e34-5735-473a-8d20-a0fef111cb51 '!=' c7481e34-5735-473a-8d20-a0fef111cb51 ']'
00:23:53.800   17:07:46	-- bdev/bdev_raid.sh@511 -- # killprocess 140662
00:23:53.800   17:07:46	-- common/autotest_common.sh@936 -- # '[' -z 140662 ']'
00:23:53.800   17:07:46	-- common/autotest_common.sh@940 -- # kill -0 140662
00:23:53.800    17:07:46	-- common/autotest_common.sh@941 -- # uname
00:23:53.800   17:07:46	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:23:53.800    17:07:46	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 140662
00:23:53.800   17:07:46	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:23:53.800   17:07:46	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:23:53.800   17:07:46	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 140662'
00:23:53.800  killing process with pid 140662
00:23:53.800   17:07:46	-- common/autotest_common.sh@955 -- # kill 140662
00:23:53.800   17:07:46	-- common/autotest_common.sh@960 -- # wait 140662
00:23:53.800  [2024-11-19 17:07:46.560045] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:23:53.800  [2024-11-19 17:07:46.560140] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:23:53.800  [2024-11-19 17:07:46.560220] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:23:53.800  [2024-11-19 17:07:46.560231] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000c080 name raid_bdev1, state offline
00:23:53.800  [2024-11-19 17:07:46.611129] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:23:54.058  ************************************
00:23:54.058  END TEST raid5f_superblock_test
00:23:54.058  ************************************
00:23:54.058   17:07:46	-- bdev/bdev_raid.sh@513 -- # return 0
00:23:54.058  
00:23:54.058  real	0m21.520s
00:23:54.058  user	0m39.445s
00:23:54.058  sys	0m3.542s
00:23:54.058   17:07:46	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:23:54.058   17:07:46	-- common/autotest_common.sh@10 -- # set +x
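The teardown above is autotest_common.sh's killprocess flow: confirm the pid still exists, sanity-check the process name (with a separate branch for sudo-wrapped processes, visible at @946), then SIGTERM and reap. A condensed sketch, not the helper's literal body:

killprocess() {
    local pid=$1
    kill -0 "$pid" || return 1               # bail if the pid is already gone
    local name
    name=$(ps --no-headers -o comm= "$pid")  # here: reactor_0, the SPDK app thread
    echo "killing process with pid $pid"
    kill "$pid"                              # SIGTERM lets bdev_raid tear down cleanly
    wait "$pid"                              # reap and propagate the exit status
}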
00:23:54.316   17:07:46	-- bdev/bdev_raid.sh@747 -- # '[' true = true ']'
00:23:54.316   17:07:46	-- bdev/bdev_raid.sh@748 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 4 false false
00:23:54.316   17:07:46	-- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']'
00:23:54.316   17:07:46	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:23:54.316   17:07:46	-- common/autotest_common.sh@10 -- # set +x
00:23:54.316  ************************************
00:23:54.316  START TEST raid5f_rebuild_test
00:23:54.316  ************************************
00:23:54.316   17:07:46	-- common/autotest_common.sh@1114 -- # raid_rebuild_test raid5f 4 false false
00:23:54.316   17:07:46	-- bdev/bdev_raid.sh@517 -- # local raid_level=raid5f
00:23:54.316   17:07:46	-- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=4
00:23:54.316   17:07:46	-- bdev/bdev_raid.sh@519 -- # local superblock=false
00:23:54.316   17:07:46	-- bdev/bdev_raid.sh@520 -- # local background_io=false
00:23:54.316    17:07:46	-- bdev/bdev_raid.sh@521 -- # (( i = 1 ))
00:23:54.316    17:07:46	-- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs ))
00:23:54.316    17:07:46	-- bdev/bdev_raid.sh@521 -- # echo BaseBdev1
00:23:54.316    17:07:46	-- bdev/bdev_raid.sh@521 -- # (( i++ ))
00:23:54.316    17:07:46	-- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs ))
00:23:54.316    17:07:46	-- bdev/bdev_raid.sh@521 -- # echo BaseBdev2
00:23:54.316    17:07:46	-- bdev/bdev_raid.sh@521 -- # (( i++ ))
00:23:54.316    17:07:46	-- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs ))
00:23:54.316    17:07:46	-- bdev/bdev_raid.sh@521 -- # echo BaseBdev3
00:23:54.316    17:07:46	-- bdev/bdev_raid.sh@521 -- # (( i++ ))
00:23:54.316    17:07:46	-- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs ))
00:23:54.316    17:07:46	-- bdev/bdev_raid.sh@521 -- # echo BaseBdev4
00:23:54.317    17:07:46	-- bdev/bdev_raid.sh@521 -- # (( i++ ))
00:23:54.317    17:07:46	-- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs ))
00:23:54.317   17:07:46	-- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4')
00:23:54.317   17:07:46	-- bdev/bdev_raid.sh@521 -- # local base_bdevs
00:23:54.317   17:07:46	-- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1
00:23:54.317   17:07:46	-- bdev/bdev_raid.sh@523 -- # local strip_size
00:23:54.317   17:07:46	-- bdev/bdev_raid.sh@524 -- # local create_arg
00:23:54.317   17:07:46	-- bdev/bdev_raid.sh@525 -- # local raid_bdev_size
00:23:54.317   17:07:46	-- bdev/bdev_raid.sh@526 -- # local data_offset
00:23:54.317   17:07:46	-- bdev/bdev_raid.sh@528 -- # '[' raid5f '!=' raid1 ']'
00:23:54.317   17:07:46	-- bdev/bdev_raid.sh@529 -- # '[' false = true ']'
00:23:54.317   17:07:46	-- bdev/bdev_raid.sh@533 -- # strip_size=64
00:23:54.317   17:07:46	-- bdev/bdev_raid.sh@534 -- # create_arg+=' -z 64'
00:23:54.317   17:07:46	-- bdev/bdev_raid.sh@539 -- # '[' false = true ']'
00:23:54.317   17:07:46	-- bdev/bdev_raid.sh@544 -- # raid_pid=141333
00:23:54.317   17:07:46	-- bdev/bdev_raid.sh@545 -- # waitforlisten 141333 /var/tmp/spdk-raid.sock
00:23:54.317   17:07:46	-- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid
00:23:54.317   17:07:46	-- common/autotest_common.sh@829 -- # '[' -z 141333 ']'
00:23:54.317   17:07:46	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock
00:23:54.317   17:07:46	-- common/autotest_common.sh@834 -- # local max_retries=100
00:23:54.317   17:07:46	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...'
00:23:54.317  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...
00:23:54.317   17:07:46	-- common/autotest_common.sh@838 -- # xtrace_disable
00:23:54.317   17:07:46	-- common/autotest_common.sh@10 -- # set +x
00:23:54.317  [2024-11-19 17:07:47.018341] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:23:54.317  [2024-11-19 17:07:47.018902] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid141333 ]
00:23:54.317  I/O size of 3145728 is greater than zero copy threshold (65536).
00:23:54.317  Zero copy mechanism will not be used.
00:23:54.608  [2024-11-19 17:07:47.176310] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:23:54.608  [2024-11-19 17:07:47.232356] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:23:54.608  [2024-11-19 17:07:47.281587] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:23:55.175   17:07:47	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:23:55.175   17:07:47	-- common/autotest_common.sh@862 -- # return 0
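waitforlisten at @545 blocks until the freshly forked bdevperf answers on the UNIX socket; the @858 check afterwards verifies the retry budget was not exhausted. A rough equivalent, assuming polling via the stock rpc_get_methods RPC (an assumption; the real helper also rechecks that the pid is still alive between attempts):

sock=/var/tmp/spdk-raid.sock
for ((i = 0; i < 100; i++)); do
    # succeed as soon as the RPC server accepts a request on the socket
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" rpc_get_methods &> /dev/null && break
    sleep 0.1
done
(( i < 100 )) || exit 1    # never came up within the retry budget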
00:23:55.175   17:07:47	-- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}"
00:23:55.175   17:07:47	-- bdev/bdev_raid.sh@549 -- # '[' false = true ']'
00:23:55.175   17:07:47	-- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1
00:23:55.432  BaseBdev1
00:23:55.432   17:07:48	-- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}"
00:23:55.432   17:07:48	-- bdev/bdev_raid.sh@549 -- # '[' false = true ']'
00:23:55.432   17:07:48	-- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2
00:23:55.689  BaseBdev2
00:23:55.689   17:07:48	-- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}"
00:23:55.689   17:07:48	-- bdev/bdev_raid.sh@549 -- # '[' false = true ']'
00:23:55.689   17:07:48	-- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3
00:23:55.947  BaseBdev3
00:23:55.947   17:07:48	-- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}"
00:23:55.947   17:07:48	-- bdev/bdev_raid.sh@549 -- # '[' false = true ']'
00:23:55.947   17:07:48	-- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4
00:23:56.205  BaseBdev4
00:23:56.205   17:07:49	-- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc
00:23:56.462  spare_malloc
00:23:56.462   17:07:49	-- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000
00:23:56.721  spare_delay
00:23:56.721   17:07:49	-- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare
00:23:56.979  [2024-11-19 17:07:49.669609] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay
00:23:56.979  [2024-11-19 17:07:49.670005] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:23:56.979  [2024-11-19 17:07:49.670094] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880
00:23:56.979  [2024-11-19 17:07:49.670331] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:23:56.979  [2024-11-19 17:07:49.673172] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:23:56.979  [2024-11-19 17:07:49.673401] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare
00:23:56.979  spare
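The three RPCs above assemble the rebuild target as a small stack: a 32 MiB malloc bdev (65536 blocks of 512 B, matching the data_size fields seen elsewhere), wrapped in a delay bdev, wrapped in a passthru bdev that pins the stable name 'spare'. The delay flags are read avg/p99 (-r/-t) and write avg/p99 (-w/-n) latencies; treating the units as microseconds follows SPDK's RPC documentation and is an assumption here, so this run injects latency on writes only:

rpc='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock'
$rpc bdev_malloc_create 32 512 -b spare_malloc                  # backing store
$rpc bdev_delay_create -b spare_malloc -d spare_delay \
     -r 0 -t 0 -w 100000 -n 100000                              # write-side delay only
$rpc bdev_passthru_create -b spare_delay -p spare               # stable top-level name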
00:23:56.979   17:07:49	-- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1
00:23:57.238  [2024-11-19 17:07:49.877887] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:23:57.238  [2024-11-19 17:07:49.880510] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:23:57.238  [2024-11-19 17:07:49.880759] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:23:57.238  [2024-11-19 17:07:49.880834] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed
00:23:57.238  [2024-11-19 17:07:49.881173] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007e80
00:23:57.238  [2024-11-19 17:07:49.881275] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512
00:23:57.238  [2024-11-19 17:07:49.881485] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000022c0
00:23:57.238  [2024-11-19 17:07:49.882417] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007e80
00:23:57.238  [2024-11-19 17:07:49.882548] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000007e80
00:23:57.238  [2024-11-19 17:07:49.882918] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
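bdev_raid_create at @563 assembles the array: -z 64 is the strip size in KiB, -r the RAID level, -b the space-separated base bdev list, -n the raid bdev name. The 'blockcnt 196608' debug line is the expected capacity, since raid5f spends one member's worth of space on parity:

$rpc bdev_raid_create -z 64 -r raid5f \
     -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1
# capacity check: (4 - 1) data members * 65536 blocks = 196608 blocks
# 196608 * 512 B = 100663296 B = 96 MiB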
00:23:57.238   17:07:49	-- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4
00:23:57.238   17:07:49	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:23:57.238   17:07:49	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:23:57.238   17:07:49	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f
00:23:57.238   17:07:49	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:23:57.238   17:07:49	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:23:57.238   17:07:49	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:23:57.238   17:07:49	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:23:57.238   17:07:49	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:23:57.238   17:07:49	-- bdev/bdev_raid.sh@125 -- # local tmp
00:23:57.238    17:07:49	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:23:57.238    17:07:49	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:23:57.496   17:07:50	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:23:57.496    "name": "raid_bdev1",
00:23:57.496    "uuid": "7288dc6c-e232-49b7-a3be-ee70369c35df",
00:23:57.496    "strip_size_kb": 64,
00:23:57.496    "state": "online",
00:23:57.496    "raid_level": "raid5f",
00:23:57.496    "superblock": false,
00:23:57.496    "num_base_bdevs": 4,
00:23:57.496    "num_base_bdevs_discovered": 4,
00:23:57.496    "num_base_bdevs_operational": 4,
00:23:57.496    "base_bdevs_list": [
00:23:57.496      {
00:23:57.496        "name": "BaseBdev1",
00:23:57.496        "uuid": "5a01cbb6-c5f2-414a-87a1-c8bb59a220bf",
00:23:57.496        "is_configured": true,
00:23:57.496        "data_offset": 0,
00:23:57.496        "data_size": 65536
00:23:57.496      },
00:23:57.496      {
00:23:57.496        "name": "BaseBdev2",
00:23:57.496        "uuid": "3832362e-a629-4cd0-bad3-afda0dd1ec88",
00:23:57.496        "is_configured": true,
00:23:57.496        "data_offset": 0,
00:23:57.496        "data_size": 65536
00:23:57.496      },
00:23:57.496      {
00:23:57.496        "name": "BaseBdev3",
00:23:57.496        "uuid": "d4a552fd-0944-4323-91f1-fd729c659287",
00:23:57.496        "is_configured": true,
00:23:57.496        "data_offset": 0,
00:23:57.496        "data_size": 65536
00:23:57.496      },
00:23:57.496      {
00:23:57.496        "name": "BaseBdev4",
00:23:57.496        "uuid": "7c980182-a73a-4053-ae2d-89406a613080",
00:23:57.496        "is_configured": true,
00:23:57.496        "data_offset": 0,
00:23:57.496        "data_size": 65536
00:23:57.496      }
00:23:57.496    ]
00:23:57.496  }'
00:23:57.496   17:07:50	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:23:57.496   17:07:50	-- common/autotest_common.sh@10 -- # set +x
00:23:58.061    17:07:50	-- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1
00:23:58.061    17:07:50	-- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks'
00:23:58.320  [2024-11-19 17:07:50.939231] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:23:58.320   17:07:50	-- bdev/bdev_raid.sh@567 -- # raid_bdev_size=196608
00:23:58.320    17:07:50	-- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:23:58.320    17:07:50	-- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset'
00:23:58.578   17:07:51	-- bdev/bdev_raid.sh@570 -- # data_offset=0
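The two queries above capture the geometry needed for the I/O phase: num_blocks from the generic bdev view (196608, matching the capacity math) and the first base bdev's data_offset (0 here, since this run builds without a superblock). In script form:

raid_bdev_size=$($rpc bdev_get_bdevs -b raid_bdev1 | jq -r '.[].num_blocks')
data_offset=$($rpc bdev_raid_get_bdevs all | jq -r '.[].base_bdevs_list[0].data_offset')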
00:23:58.578   17:07:51	-- bdev/bdev_raid.sh@572 -- # '[' false = true ']'
00:23:58.578   17:07:51	-- bdev/bdev_raid.sh@576 -- # local write_unit_size
00:23:58.578   17:07:51	-- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0
00:23:58.578   17:07:51	-- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock
00:23:58.578   17:07:51	-- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1')
00:23:58.578   17:07:51	-- bdev/nbd_common.sh@10 -- # local bdev_list
00:23:58.578   17:07:51	-- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0')
00:23:58.578   17:07:51	-- bdev/nbd_common.sh@11 -- # local nbd_list
00:23:58.578   17:07:51	-- bdev/nbd_common.sh@12 -- # local i
00:23:58.578   17:07:51	-- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:23:58.578   17:07:51	-- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:23:58.578   17:07:51	-- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0
00:23:58.578  [2024-11-19 17:07:51.367276] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460
00:23:58.578  /dev/nbd0
00:23:58.578    17:07:51	-- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:23:58.578   17:07:51	-- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:23:58.578   17:07:51	-- common/autotest_common.sh@866 -- # local nbd_name=nbd0
00:23:58.578   17:07:51	-- common/autotest_common.sh@867 -- # local i
00:23:58.578   17:07:51	-- common/autotest_common.sh@869 -- # (( i = 1 ))
00:23:58.578   17:07:51	-- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:23:58.578   17:07:51	-- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions
00:23:58.578   17:07:51	-- common/autotest_common.sh@871 -- # break
00:23:58.578   17:07:51	-- common/autotest_common.sh@882 -- # (( i = 1 ))
00:23:58.578   17:07:51	-- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:23:58.578   17:07:51	-- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:23:58.578  1+0 records in
00:23:58.578  1+0 records out
00:23:58.578  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000257045 s, 15.9 MB/s
00:23:58.578    17:07:51	-- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:23:58.837   17:07:51	-- common/autotest_common.sh@884 -- # size=4096
00:23:58.837   17:07:51	-- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:23:58.837   17:07:51	-- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:23:58.837   17:07:51	-- common/autotest_common.sh@887 -- # return 0
00:23:58.837   17:07:51	-- bdev/nbd_common.sh@14 -- # (( i++ ))
00:23:58.837   17:07:51	-- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:23:58.837   17:07:51	-- bdev/bdev_raid.sh@580 -- # '[' raid5f = raid5f ']'
00:23:58.837   17:07:51	-- bdev/bdev_raid.sh@581 -- # write_unit_size=384
00:23:58.837   17:07:51	-- bdev/bdev_raid.sh@582 -- # echo 192
00:23:58.837   17:07:51	-- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=512 oflag=direct
00:23:59.409  512+0 records in
00:23:59.410  512+0 records out
00:23:59.410  100663296 bytes (101 MB, 96 MiB) copied, 0.561296 s, 179 MB/s
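The dd geometry above falls out of the stripe math (reconstructed from the traced numbers, not quoted from the script): a 64 KiB strip is 128 blocks of 512 B, raid5f on 4 members has 3 data strips per stripe, so a full-stripe write unit is 384 blocks; the echoed 192 is that unit in KiB, bs=196608 is it in bytes, and count=512 covers the whole 96 MiB array exactly:

strip_size_kb=64; blocklen=512; data_disks=3
write_unit_size=$(( strip_size_kb * 1024 / blocklen * data_disks ))   # 384 blocks
echo $(( write_unit_size * blocklen ))         # 196608 B per dd write (192 KiB)
echo $(( write_unit_size * blocklen * 512 ))   # 100663296 B total, as dd reports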
00:23:59.410   17:07:52	-- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0
00:23:59.410   17:07:52	-- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock
00:23:59.410   17:07:52	-- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:23:59.410   17:07:52	-- bdev/nbd_common.sh@50 -- # local nbd_list
00:23:59.410   17:07:52	-- bdev/nbd_common.sh@51 -- # local i
00:23:59.410   17:07:52	-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:23:59.410   17:07:52	-- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0
00:23:59.410    17:07:52	-- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:23:59.410  [2024-11-19 17:07:52.258339] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:23:59.410   17:07:52	-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:23:59.671   17:07:52	-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:23:59.671   17:07:52	-- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:23:59.671   17:07:52	-- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:23:59.671   17:07:52	-- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:23:59.671   17:07:52	-- bdev/nbd_common.sh@41 -- # break
00:23:59.671   17:07:52	-- bdev/nbd_common.sh@45 -- # return 0
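Both ends of the nbd lifecycle gate on /proc/partitions: waitfornbd (used at attach, @17) loops until the device node shows up and then proves it readable with the direct-I/O dd seen earlier, while waitfornbd_exit above loops until the entry disappears after nbd_stop_disk. A condensed sketch of the exit side, assuming the same 20-try budget the trace shows:

waitfornbd_exit() {
    local nbd_name=$1
    for ((i = 1; i <= 20; i++)); do
        # done once the kernel has dropped the partition entry
        grep -q -w "$nbd_name" /proc/partitions || return 0
        sleep 0.1
    done
    return 1
}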
00:23:59.671   17:07:52	-- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1
00:23:59.671  [2024-11-19 17:07:52.510000] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:23:59.930   17:07:52	-- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3
00:23:59.930   17:07:52	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:23:59.930   17:07:52	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:23:59.930   17:07:52	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f
00:23:59.930   17:07:52	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:23:59.930   17:07:52	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:23:59.930   17:07:52	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:23:59.930   17:07:52	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:23:59.930   17:07:52	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:23:59.930   17:07:52	-- bdev/bdev_raid.sh@125 -- # local tmp
00:23:59.930    17:07:52	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:23:59.930    17:07:52	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:24:00.189   17:07:52	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:24:00.189    "name": "raid_bdev1",
00:24:00.189    "uuid": "7288dc6c-e232-49b7-a3be-ee70369c35df",
00:24:00.189    "strip_size_kb": 64,
00:24:00.189    "state": "online",
00:24:00.189    "raid_level": "raid5f",
00:24:00.189    "superblock": false,
00:24:00.189    "num_base_bdevs": 4,
00:24:00.189    "num_base_bdevs_discovered": 3,
00:24:00.189    "num_base_bdevs_operational": 3,
00:24:00.189    "base_bdevs_list": [
00:24:00.189      {
00:24:00.189        "name": null,
00:24:00.189        "uuid": "00000000-0000-0000-0000-000000000000",
00:24:00.189        "is_configured": false,
00:24:00.189        "data_offset": 0,
00:24:00.189        "data_size": 65536
00:24:00.189      },
00:24:00.189      {
00:24:00.189        "name": "BaseBdev2",
00:24:00.189        "uuid": "3832362e-a629-4cd0-bad3-afda0dd1ec88",
00:24:00.189        "is_configured": true,
00:24:00.189        "data_offset": 0,
00:24:00.189        "data_size": 65536
00:24:00.189      },
00:24:00.189      {
00:24:00.189        "name": "BaseBdev3",
00:24:00.189        "uuid": "d4a552fd-0944-4323-91f1-fd729c659287",
00:24:00.189        "is_configured": true,
00:24:00.189        "data_offset": 0,
00:24:00.189        "data_size": 65536
00:24:00.189      },
00:24:00.189      {
00:24:00.189        "name": "BaseBdev4",
00:24:00.189        "uuid": "7c980182-a73a-4053-ae2d-89406a613080",
00:24:00.189        "is_configured": true,
00:24:00.189        "data_offset": 0,
00:24:00.189        "data_size": 65536
00:24:00.189      }
00:24:00.189    ]
00:24:00.189  }'
00:24:00.189   17:07:52	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:24:00.189   17:07:52	-- common/autotest_common.sh@10 -- # set +x
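The point of the state check above: after bdev_raid_remove_base_bdev BaseBdev1, both discovered and operational counts drop to 3 while state stays online, because raid5f tolerates exactly one missing member; the vacated slot remains as a null placeholder with the all-zero UUID. The same assertion as a one-liner (illustrative; jq -e turns the boolean into an exit status):

$rpc bdev_raid_get_bdevs all | jq -e '.[] | select(.name == "raid_bdev1")
    | .state == "online" and .num_base_bdevs_discovered == 3' > /dev/null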
00:24:00.758   17:07:53	-- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare
00:24:01.017  [2024-11-19 17:07:53.626309] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare
00:24:01.017  [2024-11-19 17:07:53.626373] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:24:01.017  [2024-11-19 17:07:53.630240] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000027a60
00:24:01.017  [2024-11-19 17:07:53.633516] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:24:01.017   17:07:53	-- bdev/bdev_raid.sh@598 -- # sleep 1
00:24:01.952   17:07:54	-- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:24:01.952   17:07:54	-- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:24:01.952   17:07:54	-- bdev/bdev_raid.sh@184 -- # local process_type=rebuild
00:24:01.952   17:07:54	-- bdev/bdev_raid.sh@185 -- # local target=spare
00:24:01.952   17:07:54	-- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:24:01.952    17:07:54	-- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:24:01.952    17:07:54	-- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:24:02.211   17:07:54	-- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:24:02.211    "name": "raid_bdev1",
00:24:02.211    "uuid": "7288dc6c-e232-49b7-a3be-ee70369c35df",
00:24:02.211    "strip_size_kb": 64,
00:24:02.211    "state": "online",
00:24:02.211    "raid_level": "raid5f",
00:24:02.211    "superblock": false,
00:24:02.211    "num_base_bdevs": 4,
00:24:02.211    "num_base_bdevs_discovered": 4,
00:24:02.211    "num_base_bdevs_operational": 4,
00:24:02.211    "process": {
00:24:02.211      "type": "rebuild",
00:24:02.211      "target": "spare",
00:24:02.211      "progress": {
00:24:02.211        "blocks": 23040,
00:24:02.211        "percent": 11
00:24:02.211      }
00:24:02.211    },
00:24:02.211    "base_bdevs_list": [
00:24:02.211      {
00:24:02.212        "name": "spare",
00:24:02.212        "uuid": "8e6e6650-2f8e-507a-b372-8d732f471ab8",
00:24:02.212        "is_configured": true,
00:24:02.212        "data_offset": 0,
00:24:02.212        "data_size": 65536
00:24:02.212      },
00:24:02.212      {
00:24:02.212        "name": "BaseBdev2",
00:24:02.212        "uuid": "3832362e-a629-4cd0-bad3-afda0dd1ec88",
00:24:02.212        "is_configured": true,
00:24:02.212        "data_offset": 0,
00:24:02.212        "data_size": 65536
00:24:02.212      },
00:24:02.212      {
00:24:02.212        "name": "BaseBdev3",
00:24:02.212        "uuid": "d4a552fd-0944-4323-91f1-fd729c659287",
00:24:02.212        "is_configured": true,
00:24:02.212        "data_offset": 0,
00:24:02.212        "data_size": 65536
00:24:02.212      },
00:24:02.212      {
00:24:02.212        "name": "BaseBdev4",
00:24:02.212        "uuid": "7c980182-a73a-4053-ae2d-89406a613080",
00:24:02.212        "is_configured": true,
00:24:02.212        "data_offset": 0,
00:24:02.212        "data_size": 65536
00:24:02.212      }
00:24:02.212    ]
00:24:02.212  }'
00:24:02.212    17:07:54	-- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:24:02.212   17:07:54	-- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:24:02.212    17:07:54	-- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:24:02.212   17:07:54	-- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]]
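verify_raid_bdev_process leans on jq's // alternative operator: '.process.type // "none"' evaluates to the literal string none whenever the raid bdev carries no background process, so the same two probes serve both mid-rebuild (rebuild/spare, as matched above) and after completion (none/none). Extracted pattern:

process_type=$(jq -r '.process.type // "none"' <<< "$raid_bdev_info")
target=$(jq -r '.process.target // "none"' <<< "$raid_bdev_info")
[[ $process_type == rebuild && $target == spare ]]   # mid-rebuild expectation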
00:24:02.212   17:07:54	-- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare
00:24:02.470  [2024-11-19 17:07:55.203880] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:24:02.470  [2024-11-19 17:07:55.246233] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device
00:24:02.470  [2024-11-19 17:07:55.246390] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:24:02.470   17:07:55	-- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3
00:24:02.470   17:07:55	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:24:02.470   17:07:55	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:24:02.470   17:07:55	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f
00:24:02.470   17:07:55	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:24:02.470   17:07:55	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:24:02.470   17:07:55	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:24:02.470   17:07:55	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:24:02.470   17:07:55	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:24:02.470   17:07:55	-- bdev/bdev_raid.sh@125 -- # local tmp
00:24:02.470    17:07:55	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:24:02.470    17:07:55	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:24:02.729   17:07:55	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:24:02.729    "name": "raid_bdev1",
00:24:02.729    "uuid": "7288dc6c-e232-49b7-a3be-ee70369c35df",
00:24:02.729    "strip_size_kb": 64,
00:24:02.729    "state": "online",
00:24:02.729    "raid_level": "raid5f",
00:24:02.729    "superblock": false,
00:24:02.729    "num_base_bdevs": 4,
00:24:02.729    "num_base_bdevs_discovered": 3,
00:24:02.729    "num_base_bdevs_operational": 3,
00:24:02.729    "base_bdevs_list": [
00:24:02.729      {
00:24:02.729        "name": null,
00:24:02.729        "uuid": "00000000-0000-0000-0000-000000000000",
00:24:02.729        "is_configured": false,
00:24:02.729        "data_offset": 0,
00:24:02.729        "data_size": 65536
00:24:02.729      },
00:24:02.729      {
00:24:02.729        "name": "BaseBdev2",
00:24:02.729        "uuid": "3832362e-a629-4cd0-bad3-afda0dd1ec88",
00:24:02.729        "is_configured": true,
00:24:02.729        "data_offset": 0,
00:24:02.729        "data_size": 65536
00:24:02.729      },
00:24:02.729      {
00:24:02.729        "name": "BaseBdev3",
00:24:02.729        "uuid": "d4a552fd-0944-4323-91f1-fd729c659287",
00:24:02.729        "is_configured": true,
00:24:02.729        "data_offset": 0,
00:24:02.729        "data_size": 65536
00:24:02.729      },
00:24:02.729      {
00:24:02.729        "name": "BaseBdev4",
00:24:02.729        "uuid": "7c980182-a73a-4053-ae2d-89406a613080",
00:24:02.729        "is_configured": true,
00:24:02.729        "data_offset": 0,
00:24:02.729        "data_size": 65536
00:24:02.729      }
00:24:02.729    ]
00:24:02.729  }'
00:24:02.729   17:07:55	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:24:02.729   17:07:55	-- common/autotest_common.sh@10 -- # set +x
00:24:03.295   17:07:56	-- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none
00:24:03.295   17:07:56	-- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:24:03.295   17:07:56	-- bdev/bdev_raid.sh@184 -- # local process_type=none
00:24:03.295   17:07:56	-- bdev/bdev_raid.sh@185 -- # local target=none
00:24:03.295   17:07:56	-- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:24:03.295    17:07:56	-- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:24:03.295    17:07:56	-- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:24:03.554   17:07:56	-- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:24:03.554    "name": "raid_bdev1",
00:24:03.554    "uuid": "7288dc6c-e232-49b7-a3be-ee70369c35df",
00:24:03.554    "strip_size_kb": 64,
00:24:03.554    "state": "online",
00:24:03.554    "raid_level": "raid5f",
00:24:03.554    "superblock": false,
00:24:03.554    "num_base_bdevs": 4,
00:24:03.554    "num_base_bdevs_discovered": 3,
00:24:03.554    "num_base_bdevs_operational": 3,
00:24:03.554    "base_bdevs_list": [
00:24:03.554      {
00:24:03.554        "name": null,
00:24:03.554        "uuid": "00000000-0000-0000-0000-000000000000",
00:24:03.554        "is_configured": false,
00:24:03.554        "data_offset": 0,
00:24:03.554        "data_size": 65536
00:24:03.554      },
00:24:03.554      {
00:24:03.554        "name": "BaseBdev2",
00:24:03.554        "uuid": "3832362e-a629-4cd0-bad3-afda0dd1ec88",
00:24:03.554        "is_configured": true,
00:24:03.554        "data_offset": 0,
00:24:03.554        "data_size": 65536
00:24:03.554      },
00:24:03.554      {
00:24:03.554        "name": "BaseBdev3",
00:24:03.554        "uuid": "d4a552fd-0944-4323-91f1-fd729c659287",
00:24:03.554        "is_configured": true,
00:24:03.554        "data_offset": 0,
00:24:03.554        "data_size": 65536
00:24:03.554      },
00:24:03.554      {
00:24:03.554        "name": "BaseBdev4",
00:24:03.554        "uuid": "7c980182-a73a-4053-ae2d-89406a613080",
00:24:03.554        "is_configured": true,
00:24:03.554        "data_offset": 0,
00:24:03.554        "data_size": 65536
00:24:03.554      }
00:24:03.554    ]
00:24:03.554  }'
00:24:03.554    17:07:56	-- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:24:03.812   17:07:56	-- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]]
00:24:03.812    17:07:56	-- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:24:03.812   17:07:56	-- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]]
00:24:03.812   17:07:56	-- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare
00:24:04.071  [2024-11-19 17:07:56.669518] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare
00:24:04.071  [2024-11-19 17:07:56.669605] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:24:04.071  [2024-11-19 17:07:56.673181] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000027c00
00:24:04.071  [2024-11-19 17:07:56.675813] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:24:04.071   17:07:56	-- bdev/bdev_raid.sh@614 -- # sleep 1
00:24:05.006   17:07:57	-- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:24:05.006   17:07:57	-- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:24:05.006   17:07:57	-- bdev/bdev_raid.sh@184 -- # local process_type=rebuild
00:24:05.006   17:07:57	-- bdev/bdev_raid.sh@185 -- # local target=spare
00:24:05.006   17:07:57	-- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:24:05.006    17:07:57	-- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:24:05.006    17:07:57	-- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:24:05.264   17:07:57	-- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:24:05.264    "name": "raid_bdev1",
00:24:05.264    "uuid": "7288dc6c-e232-49b7-a3be-ee70369c35df",
00:24:05.264    "strip_size_kb": 64,
00:24:05.264    "state": "online",
00:24:05.264    "raid_level": "raid5f",
00:24:05.264    "superblock": false,
00:24:05.264    "num_base_bdevs": 4,
00:24:05.264    "num_base_bdevs_discovered": 4,
00:24:05.264    "num_base_bdevs_operational": 4,
00:24:05.264    "process": {
00:24:05.264      "type": "rebuild",
00:24:05.264      "target": "spare",
00:24:05.264      "progress": {
00:24:05.264        "blocks": 23040,
00:24:05.264        "percent": 11
00:24:05.264      }
00:24:05.264    },
00:24:05.264    "base_bdevs_list": [
00:24:05.264      {
00:24:05.264        "name": "spare",
00:24:05.264        "uuid": "8e6e6650-2f8e-507a-b372-8d732f471ab8",
00:24:05.264        "is_configured": true,
00:24:05.264        "data_offset": 0,
00:24:05.264        "data_size": 65536
00:24:05.264      },
00:24:05.264      {
00:24:05.264        "name": "BaseBdev2",
00:24:05.264        "uuid": "3832362e-a629-4cd0-bad3-afda0dd1ec88",
00:24:05.264        "is_configured": true,
00:24:05.264        "data_offset": 0,
00:24:05.264        "data_size": 65536
00:24:05.264      },
00:24:05.264      {
00:24:05.264        "name": "BaseBdev3",
00:24:05.264        "uuid": "d4a552fd-0944-4323-91f1-fd729c659287",
00:24:05.264        "is_configured": true,
00:24:05.264        "data_offset": 0,
00:24:05.264        "data_size": 65536
00:24:05.264      },
00:24:05.264      {
00:24:05.264        "name": "BaseBdev4",
00:24:05.264        "uuid": "7c980182-a73a-4053-ae2d-89406a613080",
00:24:05.264        "is_configured": true,
00:24:05.264        "data_offset": 0,
00:24:05.264        "data_size": 65536
00:24:05.264      }
00:24:05.264    ]
00:24:05.264  }'
00:24:05.264    17:07:57	-- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:24:05.264   17:07:57	-- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:24:05.264    17:07:57	-- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:24:05.264   17:07:58	-- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]]
00:24:05.264   17:07:58	-- bdev/bdev_raid.sh@617 -- # '[' false = true ']'
00:24:05.264   17:07:58	-- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=4
00:24:05.264   17:07:58	-- bdev/bdev_raid.sh@644 -- # '[' raid5f = raid1 ']'
00:24:05.264   17:07:58	-- bdev/bdev_raid.sh@657 -- # local timeout=679
00:24:05.264   17:07:58	-- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout ))
00:24:05.264   17:07:58	-- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:24:05.264   17:07:58	-- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:24:05.264   17:07:58	-- bdev/bdev_raid.sh@184 -- # local process_type=rebuild
00:24:05.264   17:07:58	-- bdev/bdev_raid.sh@185 -- # local target=spare
00:24:05.264   17:07:58	-- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:24:05.264    17:07:58	-- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:24:05.264    17:07:58	-- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:24:05.522   17:07:58	-- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:24:05.522    "name": "raid_bdev1",
00:24:05.522    "uuid": "7288dc6c-e232-49b7-a3be-ee70369c35df",
00:24:05.522    "strip_size_kb": 64,
00:24:05.522    "state": "online",
00:24:05.522    "raid_level": "raid5f",
00:24:05.522    "superblock": false,
00:24:05.522    "num_base_bdevs": 4,
00:24:05.522    "num_base_bdevs_discovered": 4,
00:24:05.522    "num_base_bdevs_operational": 4,
00:24:05.522    "process": {
00:24:05.522      "type": "rebuild",
00:24:05.522      "target": "spare",
00:24:05.522      "progress": {
00:24:05.522        "blocks": 28800,
00:24:05.522        "percent": 14
00:24:05.522      }
00:24:05.522    },
00:24:05.522    "base_bdevs_list": [
00:24:05.522      {
00:24:05.522        "name": "spare",
00:24:05.522        "uuid": "8e6e6650-2f8e-507a-b372-8d732f471ab8",
00:24:05.522        "is_configured": true,
00:24:05.522        "data_offset": 0,
00:24:05.522        "data_size": 65536
00:24:05.522      },
00:24:05.522      {
00:24:05.522        "name": "BaseBdev2",
00:24:05.522        "uuid": "3832362e-a629-4cd0-bad3-afda0dd1ec88",
00:24:05.522        "is_configured": true,
00:24:05.522        "data_offset": 0,
00:24:05.522        "data_size": 65536
00:24:05.522      },
00:24:05.522      {
00:24:05.522        "name": "BaseBdev3",
00:24:05.522        "uuid": "d4a552fd-0944-4323-91f1-fd729c659287",
00:24:05.522        "is_configured": true,
00:24:05.522        "data_offset": 0,
00:24:05.522        "data_size": 65536
00:24:05.522      },
00:24:05.522      {
00:24:05.522        "name": "BaseBdev4",
00:24:05.522        "uuid": "7c980182-a73a-4053-ae2d-89406a613080",
00:24:05.522        "is_configured": true,
00:24:05.522        "data_offset": 0,
00:24:05.522        "data_size": 65536
00:24:05.522      }
00:24:05.522    ]
00:24:05.522  }'
00:24:05.522    17:07:58	-- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:24:05.522   17:07:58	-- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:24:05.522    17:07:58	-- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:24:05.781   17:07:58	-- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]]
00:24:05.781   17:07:58	-- bdev/bdev_raid.sh@662 -- # sleep 1
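The repeated dumps that follow are one polling loop: bash's built-in SECONDS counts seconds since the shell started, and the @657/@658 pair sets an absolute deadline (679 in this run) and re-verifies the rebuild once per second until it finishes or time runs out. Shape of the loop, with the deadline arithmetic assumed rather than quoted:

timeout=$(( SECONDS + 60 ))    # the trace's 679 is a precomputed deadline like this
while (( SECONDS < timeout )); do
    # re-read raid_bdev_info; leave the loop once .process is gone ("none"/"none")
    sleep 1
done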
00:24:06.782   17:07:59	-- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout ))
00:24:06.782   17:07:59	-- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:24:06.782   17:07:59	-- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:24:06.782   17:07:59	-- bdev/bdev_raid.sh@184 -- # local process_type=rebuild
00:24:06.782   17:07:59	-- bdev/bdev_raid.sh@185 -- # local target=spare
00:24:06.782   17:07:59	-- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:24:06.782    17:07:59	-- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:24:06.782    17:07:59	-- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:24:07.042   17:07:59	-- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:24:07.042    "name": "raid_bdev1",
00:24:07.042    "uuid": "7288dc6c-e232-49b7-a3be-ee70369c35df",
00:24:07.042    "strip_size_kb": 64,
00:24:07.042    "state": "online",
00:24:07.042    "raid_level": "raid5f",
00:24:07.042    "superblock": false,
00:24:07.042    "num_base_bdevs": 4,
00:24:07.042    "num_base_bdevs_discovered": 4,
00:24:07.042    "num_base_bdevs_operational": 4,
00:24:07.042    "process": {
00:24:07.042      "type": "rebuild",
00:24:07.042      "target": "spare",
00:24:07.042      "progress": {
00:24:07.042        "blocks": 55680,
00:24:07.042        "percent": 28
00:24:07.042      }
00:24:07.042    },
00:24:07.042    "base_bdevs_list": [
00:24:07.042      {
00:24:07.042        "name": "spare",
00:24:07.042        "uuid": "8e6e6650-2f8e-507a-b372-8d732f471ab8",
00:24:07.042        "is_configured": true,
00:24:07.042        "data_offset": 0,
00:24:07.042        "data_size": 65536
00:24:07.042      },
00:24:07.042      {
00:24:07.042        "name": "BaseBdev2",
00:24:07.042        "uuid": "3832362e-a629-4cd0-bad3-afda0dd1ec88",
00:24:07.042        "is_configured": true,
00:24:07.042        "data_offset": 0,
00:24:07.042        "data_size": 65536
00:24:07.042      },
00:24:07.042      {
00:24:07.042        "name": "BaseBdev3",
00:24:07.042        "uuid": "d4a552fd-0944-4323-91f1-fd729c659287",
00:24:07.043        "is_configured": true,
00:24:07.043        "data_offset": 0,
00:24:07.043        "data_size": 65536
00:24:07.043      },
00:24:07.043      {
00:24:07.043        "name": "BaseBdev4",
00:24:07.043        "uuid": "7c980182-a73a-4053-ae2d-89406a613080",
00:24:07.043        "is_configured": true,
00:24:07.043        "data_offset": 0,
00:24:07.043        "data_size": 65536
00:24:07.043      }
00:24:07.043    ]
00:24:07.043  }'
00:24:07.043    17:07:59	-- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:24:07.043   17:07:59	-- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:24:07.043    17:07:59	-- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:24:07.043   17:07:59	-- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]]
00:24:07.043   17:07:59	-- bdev/bdev_raid.sh@662 -- # sleep 1
00:24:07.978   17:08:00	-- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout ))
00:24:07.978   17:08:00	-- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:24:07.978   17:08:00	-- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:24:07.978   17:08:00	-- bdev/bdev_raid.sh@184 -- # local process_type=rebuild
00:24:07.978   17:08:00	-- bdev/bdev_raid.sh@185 -- # local target=spare
00:24:07.978   17:08:00	-- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:24:07.978    17:08:00	-- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:24:07.978    17:08:00	-- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:24:08.236   17:08:00	-- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:24:08.236    "name": "raid_bdev1",
00:24:08.236    "uuid": "7288dc6c-e232-49b7-a3be-ee70369c35df",
00:24:08.236    "strip_size_kb": 64,
00:24:08.236    "state": "online",
00:24:08.236    "raid_level": "raid5f",
00:24:08.236    "superblock": false,
00:24:08.236    "num_base_bdevs": 4,
00:24:08.236    "num_base_bdevs_discovered": 4,
00:24:08.236    "num_base_bdevs_operational": 4,
00:24:08.236    "process": {
00:24:08.236      "type": "rebuild",
00:24:08.236      "target": "spare",
00:24:08.236      "progress": {
00:24:08.236        "blocks": 80640,
00:24:08.236        "percent": 41
00:24:08.236      }
00:24:08.236    },
00:24:08.236    "base_bdevs_list": [
00:24:08.236      {
00:24:08.236        "name": "spare",
00:24:08.236        "uuid": "8e6e6650-2f8e-507a-b372-8d732f471ab8",
00:24:08.236        "is_configured": true,
00:24:08.236        "data_offset": 0,
00:24:08.236        "data_size": 65536
00:24:08.236      },
00:24:08.236      {
00:24:08.236        "name": "BaseBdev2",
00:24:08.236        "uuid": "3832362e-a629-4cd0-bad3-afda0dd1ec88",
00:24:08.236        "is_configured": true,
00:24:08.236        "data_offset": 0,
00:24:08.236        "data_size": 65536
00:24:08.236      },
00:24:08.236      {
00:24:08.236        "name": "BaseBdev3",
00:24:08.236        "uuid": "d4a552fd-0944-4323-91f1-fd729c659287",
00:24:08.236        "is_configured": true,
00:24:08.236        "data_offset": 0,
00:24:08.236        "data_size": 65536
00:24:08.236      },
00:24:08.236      {
00:24:08.236        "name": "BaseBdev4",
00:24:08.236        "uuid": "7c980182-a73a-4053-ae2d-89406a613080",
00:24:08.236        "is_configured": true,
00:24:08.236        "data_offset": 0,
00:24:08.236        "data_size": 65536
00:24:08.236      }
00:24:08.236    ]
00:24:08.236  }'
00:24:08.236    17:08:00	-- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:24:08.236   17:08:01	-- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:24:08.236    17:08:01	-- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:24:08.236   17:08:01	-- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]]
00:24:08.236   17:08:01	-- bdev/bdev_raid.sh@662 -- # sleep 1
00:24:09.610   17:08:02	-- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout ))
00:24:09.610   17:08:02	-- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:24:09.610   17:08:02	-- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:24:09.610   17:08:02	-- bdev/bdev_raid.sh@184 -- # local process_type=rebuild
00:24:09.610   17:08:02	-- bdev/bdev_raid.sh@185 -- # local target=spare
00:24:09.610   17:08:02	-- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:24:09.610    17:08:02	-- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:24:09.610    17:08:02	-- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:24:09.610   17:08:02	-- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:24:09.610    "name": "raid_bdev1",
00:24:09.610    "uuid": "7288dc6c-e232-49b7-a3be-ee70369c35df",
00:24:09.610    "strip_size_kb": 64,
00:24:09.610    "state": "online",
00:24:09.610    "raid_level": "raid5f",
00:24:09.610    "superblock": false,
00:24:09.610    "num_base_bdevs": 4,
00:24:09.610    "num_base_bdevs_discovered": 4,
00:24:09.610    "num_base_bdevs_operational": 4,
00:24:09.610    "process": {
00:24:09.610      "type": "rebuild",
00:24:09.610      "target": "spare",
00:24:09.610      "progress": {
00:24:09.610        "blocks": 107520,
00:24:09.610        "percent": 54
00:24:09.610      }
00:24:09.610    },
00:24:09.610    "base_bdevs_list": [
00:24:09.610      {
00:24:09.610        "name": "spare",
00:24:09.610        "uuid": "8e6e6650-2f8e-507a-b372-8d732f471ab8",
00:24:09.610        "is_configured": true,
00:24:09.610        "data_offset": 0,
00:24:09.610        "data_size": 65536
00:24:09.610      },
00:24:09.610      {
00:24:09.610        "name": "BaseBdev2",
00:24:09.610        "uuid": "3832362e-a629-4cd0-bad3-afda0dd1ec88",
00:24:09.610        "is_configured": true,
00:24:09.610        "data_offset": 0,
00:24:09.610        "data_size": 65536
00:24:09.610      },
00:24:09.610      {
00:24:09.610        "name": "BaseBdev3",
00:24:09.611        "uuid": "d4a552fd-0944-4323-91f1-fd729c659287",
00:24:09.611        "is_configured": true,
00:24:09.611        "data_offset": 0,
00:24:09.611        "data_size": 65536
00:24:09.611      },
00:24:09.611      {
00:24:09.611        "name": "BaseBdev4",
00:24:09.611        "uuid": "7c980182-a73a-4053-ae2d-89406a613080",
00:24:09.611        "is_configured": true,
00:24:09.611        "data_offset": 0,
00:24:09.611        "data_size": 65536
00:24:09.611      }
00:24:09.611    ]
00:24:09.611  }'
00:24:09.611    17:08:02	-- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:24:09.611   17:08:02	-- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:24:09.611    17:08:02	-- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:24:09.870   17:08:02	-- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]]
00:24:09.870   17:08:02	-- bdev/bdev_raid.sh@662 -- # sleep 1
00:24:10.805   17:08:03	-- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout ))
00:24:10.805   17:08:03	-- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:24:10.805   17:08:03	-- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:24:10.805   17:08:03	-- bdev/bdev_raid.sh@184 -- # local process_type=rebuild
00:24:10.805   17:08:03	-- bdev/bdev_raid.sh@185 -- # local target=spare
00:24:10.805   17:08:03	-- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:24:10.805    17:08:03	-- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:24:10.805    17:08:03	-- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:24:11.064   17:08:03	-- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:24:11.064    "name": "raid_bdev1",
00:24:11.064    "uuid": "7288dc6c-e232-49b7-a3be-ee70369c35df",
00:24:11.064    "strip_size_kb": 64,
00:24:11.064    "state": "online",
00:24:11.064    "raid_level": "raid5f",
00:24:11.064    "superblock": false,
00:24:11.064    "num_base_bdevs": 4,
00:24:11.064    "num_base_bdevs_discovered": 4,
00:24:11.064    "num_base_bdevs_operational": 4,
00:24:11.064    "process": {
00:24:11.064      "type": "rebuild",
00:24:11.064      "target": "spare",
00:24:11.064      "progress": {
00:24:11.064        "blocks": 132480,
00:24:11.064        "percent": 67
00:24:11.064      }
00:24:11.064    },
00:24:11.064    "base_bdevs_list": [
00:24:11.064      {
00:24:11.064        "name": "spare",
00:24:11.064        "uuid": "8e6e6650-2f8e-507a-b372-8d732f471ab8",
00:24:11.064        "is_configured": true,
00:24:11.064        "data_offset": 0,
00:24:11.064        "data_size": 65536
00:24:11.064      },
00:24:11.064      {
00:24:11.064        "name": "BaseBdev2",
00:24:11.064        "uuid": "3832362e-a629-4cd0-bad3-afda0dd1ec88",
00:24:11.064        "is_configured": true,
00:24:11.064        "data_offset": 0,
00:24:11.064        "data_size": 65536
00:24:11.064      },
00:24:11.064      {
00:24:11.064        "name": "BaseBdev3",
00:24:11.064        "uuid": "d4a552fd-0944-4323-91f1-fd729c659287",
00:24:11.064        "is_configured": true,
00:24:11.064        "data_offset": 0,
00:24:11.064        "data_size": 65536
00:24:11.065      },
00:24:11.065      {
00:24:11.065        "name": "BaseBdev4",
00:24:11.065        "uuid": "7c980182-a73a-4053-ae2d-89406a613080",
00:24:11.065        "is_configured": true,
00:24:11.065        "data_offset": 0,
00:24:11.065        "data_size": 65536
00:24:11.065      }
00:24:11.065    ]
00:24:11.065  }'
00:24:11.065    17:08:03	-- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:24:11.065   17:08:03	-- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:24:11.065    17:08:03	-- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:24:11.065   17:08:03	-- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]]
00:24:11.065   17:08:03	-- bdev/bdev_raid.sh@662 -- # sleep 1
00:24:12.073   17:08:04	-- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout ))
00:24:12.073   17:08:04	-- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:24:12.073   17:08:04	-- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:24:12.073   17:08:04	-- bdev/bdev_raid.sh@184 -- # local process_type=rebuild
00:24:12.073   17:08:04	-- bdev/bdev_raid.sh@185 -- # local target=spare
00:24:12.073   17:08:04	-- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:24:12.073    17:08:04	-- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:24:12.073    17:08:04	-- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:24:12.331   17:08:05	-- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:24:12.331    "name": "raid_bdev1",
00:24:12.331    "uuid": "7288dc6c-e232-49b7-a3be-ee70369c35df",
00:24:12.331    "strip_size_kb": 64,
00:24:12.331    "state": "online",
00:24:12.331    "raid_level": "raid5f",
00:24:12.331    "superblock": false,
00:24:12.331    "num_base_bdevs": 4,
00:24:12.331    "num_base_bdevs_discovered": 4,
00:24:12.331    "num_base_bdevs_operational": 4,
00:24:12.331    "process": {
00:24:12.331      "type": "rebuild",
00:24:12.331      "target": "spare",
00:24:12.331      "progress": {
00:24:12.331        "blocks": 157440,
00:24:12.331        "percent": 80
00:24:12.331      }
00:24:12.331    },
00:24:12.331    "base_bdevs_list": [
00:24:12.331      {
00:24:12.331        "name": "spare",
00:24:12.331        "uuid": "8e6e6650-2f8e-507a-b372-8d732f471ab8",
00:24:12.331        "is_configured": true,
00:24:12.331        "data_offset": 0,
00:24:12.331        "data_size": 65536
00:24:12.331      },
00:24:12.331      {
00:24:12.331        "name": "BaseBdev2",
00:24:12.331        "uuid": "3832362e-a629-4cd0-bad3-afda0dd1ec88",
00:24:12.331        "is_configured": true,
00:24:12.331        "data_offset": 0,
00:24:12.331        "data_size": 65536
00:24:12.331      },
00:24:12.331      {
00:24:12.331        "name": "BaseBdev3",
00:24:12.331        "uuid": "d4a552fd-0944-4323-91f1-fd729c659287",
00:24:12.331        "is_configured": true,
00:24:12.331        "data_offset": 0,
00:24:12.331        "data_size": 65536
00:24:12.331      },
00:24:12.331      {
00:24:12.331        "name": "BaseBdev4",
00:24:12.331        "uuid": "7c980182-a73a-4053-ae2d-89406a613080",
00:24:12.331        "is_configured": true,
00:24:12.331        "data_offset": 0,
00:24:12.331        "data_size": 65536
00:24:12.331      }
00:24:12.331    ]
00:24:12.331  }'
00:24:12.331    17:08:05	-- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:24:12.331   17:08:05	-- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:24:12.331    17:08:05	-- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:24:12.331   17:08:05	-- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]]
00:24:12.331   17:08:05	-- bdev/bdev_raid.sh@662 -- # sleep 1
00:24:13.699   17:08:06	-- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout ))
00:24:13.699   17:08:06	-- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:24:13.699   17:08:06	-- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:24:13.699   17:08:06	-- bdev/bdev_raid.sh@184 -- # local process_type=rebuild
00:24:13.699   17:08:06	-- bdev/bdev_raid.sh@185 -- # local target=spare
00:24:13.699   17:08:06	-- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:24:13.699    17:08:06	-- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:24:13.700    17:08:06	-- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:24:13.700   17:08:06	-- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:24:13.700    "name": "raid_bdev1",
00:24:13.700    "uuid": "7288dc6c-e232-49b7-a3be-ee70369c35df",
00:24:13.700    "strip_size_kb": 64,
00:24:13.700    "state": "online",
00:24:13.700    "raid_level": "raid5f",
00:24:13.700    "superblock": false,
00:24:13.700    "num_base_bdevs": 4,
00:24:13.700    "num_base_bdevs_discovered": 4,
00:24:13.700    "num_base_bdevs_operational": 4,
00:24:13.700    "process": {
00:24:13.700      "type": "rebuild",
00:24:13.700      "target": "spare",
00:24:13.700      "progress": {
00:24:13.700        "blocks": 182400,
00:24:13.700        "percent": 92
00:24:13.700      }
00:24:13.700    },
00:24:13.700    "base_bdevs_list": [
00:24:13.700      {
00:24:13.700        "name": "spare",
00:24:13.700        "uuid": "8e6e6650-2f8e-507a-b372-8d732f471ab8",
00:24:13.700        "is_configured": true,
00:24:13.700        "data_offset": 0,
00:24:13.700        "data_size": 65536
00:24:13.700      },
00:24:13.700      {
00:24:13.700        "name": "BaseBdev2",
00:24:13.700        "uuid": "3832362e-a629-4cd0-bad3-afda0dd1ec88",
00:24:13.700        "is_configured": true,
00:24:13.700        "data_offset": 0,
00:24:13.700        "data_size": 65536
00:24:13.700      },
00:24:13.700      {
00:24:13.700        "name": "BaseBdev3",
00:24:13.700        "uuid": "d4a552fd-0944-4323-91f1-fd729c659287",
00:24:13.700        "is_configured": true,
00:24:13.700        "data_offset": 0,
00:24:13.700        "data_size": 65536
00:24:13.700      },
00:24:13.700      {
00:24:13.700        "name": "BaseBdev4",
00:24:13.700        "uuid": "7c980182-a73a-4053-ae2d-89406a613080",
00:24:13.700        "is_configured": true,
00:24:13.700        "data_offset": 0,
00:24:13.700        "data_size": 65536
00:24:13.700      }
00:24:13.700    ]
00:24:13.700  }'
00:24:13.700    17:08:06	-- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:24:13.700   17:08:06	-- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:24:13.700    17:08:06	-- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:24:13.700   17:08:06	-- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]]
00:24:13.700   17:08:06	-- bdev/bdev_raid.sh@662 -- # sleep 1
00:24:14.265  [2024-11-19 17:08:07.054624] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1
00:24:14.265  [2024-11-19 17:08:07.054733] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1
00:24:14.265  [2024-11-19 17:08:07.054840] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
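Once the process thread logs 'Finished rebuild' above, the next bdev_raid_get_bdevs dump simply omits the process object, so the rebuild comparison at @190 reads back none, the loop breaks at @660, and the closing verify_raid_bdev_process raid_bdev1 none none confirms the array is quiescent with all four members configured. As a check:

# post-rebuild expectation: no background process object left on the raid bdev
$rpc bdev_raid_get_bdevs all | jq -e '.[] | select(.name == "raid_bdev1")
    | has("process") | not' > /dev/null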
00:24:14.830   17:08:07	-- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout ))
00:24:14.830   17:08:07	-- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:24:14.830   17:08:07	-- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:24:14.830   17:08:07	-- bdev/bdev_raid.sh@184 -- # local process_type=rebuild
00:24:14.830   17:08:07	-- bdev/bdev_raid.sh@185 -- # local target=spare
00:24:14.830   17:08:07	-- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:24:14.830    17:08:07	-- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:24:14.830    17:08:07	-- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:24:15.087   17:08:07	-- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:24:15.087    "name": "raid_bdev1",
00:24:15.087    "uuid": "7288dc6c-e232-49b7-a3be-ee70369c35df",
00:24:15.087    "strip_size_kb": 64,
00:24:15.087    "state": "online",
00:24:15.087    "raid_level": "raid5f",
00:24:15.087    "superblock": false,
00:24:15.087    "num_base_bdevs": 4,
00:24:15.087    "num_base_bdevs_discovered": 4,
00:24:15.087    "num_base_bdevs_operational": 4,
00:24:15.087    "base_bdevs_list": [
00:24:15.087      {
00:24:15.087        "name": "spare",
00:24:15.087        "uuid": "8e6e6650-2f8e-507a-b372-8d732f471ab8",
00:24:15.087        "is_configured": true,
00:24:15.087        "data_offset": 0,
00:24:15.087        "data_size": 65536
00:24:15.087      },
00:24:15.087      {
00:24:15.087        "name": "BaseBdev2",
00:24:15.087        "uuid": "3832362e-a629-4cd0-bad3-afda0dd1ec88",
00:24:15.087        "is_configured": true,
00:24:15.087        "data_offset": 0,
00:24:15.087        "data_size": 65536
00:24:15.087      },
00:24:15.087      {
00:24:15.087        "name": "BaseBdev3",
00:24:15.087        "uuid": "d4a552fd-0944-4323-91f1-fd729c659287",
00:24:15.087        "is_configured": true,
00:24:15.087        "data_offset": 0,
00:24:15.087        "data_size": 65536
00:24:15.087      },
00:24:15.087      {
00:24:15.087        "name": "BaseBdev4",
00:24:15.087        "uuid": "7c980182-a73a-4053-ae2d-89406a613080",
00:24:15.087        "is_configured": true,
00:24:15.087        "data_offset": 0,
00:24:15.087        "data_size": 65536
00:24:15.087      }
00:24:15.087    ]
00:24:15.087  }'
00:24:15.087    17:08:07	-- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:24:15.087   17:08:07	-- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]]
00:24:15.087    17:08:07	-- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:24:15.087   17:08:07	-- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]]
00:24:15.087   17:08:07	-- bdev/bdev_raid.sh@660 -- # break
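[annotation] The trace above is the harness's wait loop: it polls bdev_raid_get_bdevs once per second and breaks as soon as the JSON stops reporting an in-flight rebuild ("none" instead of "rebuild"). A minimal sketch of that loop, reconstructed from the traced commands (the 60 s bound is an assumption; the trace only shows a local named "timeout"):

    # Poll until raid_bdev1 stops reporting a rebuild process, or time out.
    sock=/var/tmp/spdk-raid.sock
    timeout=$((SECONDS + 60))   # assumed bound
    while (( SECONDS < timeout )); do
        info=$(scripts/rpc.py -s "$sock" bdev_raid_get_bdevs all |
               jq -r '.[] | select(.name == "raid_bdev1")')
        [[ $(jq -r '.process.type // "none"' <<<"$info") == rebuild ]] || break
        sleep 1
    done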
00:24:15.087   17:08:07	-- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none
00:24:15.087   17:08:07	-- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:24:15.087   17:08:07	-- bdev/bdev_raid.sh@184 -- # local process_type=none
00:24:15.087   17:08:07	-- bdev/bdev_raid.sh@185 -- # local target=none
00:24:15.087   17:08:07	-- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:24:15.087    17:08:07	-- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:24:15.087    17:08:07	-- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:24:15.345   17:08:08	-- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:24:15.345    "name": "raid_bdev1",
00:24:15.345    "uuid": "7288dc6c-e232-49b7-a3be-ee70369c35df",
00:24:15.345    "strip_size_kb": 64,
00:24:15.345    "state": "online",
00:24:15.345    "raid_level": "raid5f",
00:24:15.345    "superblock": false,
00:24:15.345    "num_base_bdevs": 4,
00:24:15.345    "num_base_bdevs_discovered": 4,
00:24:15.345    "num_base_bdevs_operational": 4,
00:24:15.345    "base_bdevs_list": [
00:24:15.345      {
00:24:15.345        "name": "spare",
00:24:15.345        "uuid": "8e6e6650-2f8e-507a-b372-8d732f471ab8",
00:24:15.345        "is_configured": true,
00:24:15.345        "data_offset": 0,
00:24:15.345        "data_size": 65536
00:24:15.345      },
00:24:15.345      {
00:24:15.345        "name": "BaseBdev2",
00:24:15.345        "uuid": "3832362e-a629-4cd0-bad3-afda0dd1ec88",
00:24:15.345        "is_configured": true,
00:24:15.345        "data_offset": 0,
00:24:15.345        "data_size": 65536
00:24:15.345      },
00:24:15.345      {
00:24:15.345        "name": "BaseBdev3",
00:24:15.345        "uuid": "d4a552fd-0944-4323-91f1-fd729c659287",
00:24:15.345        "is_configured": true,
00:24:15.345        "data_offset": 0,
00:24:15.345        "data_size": 65536
00:24:15.345      },
00:24:15.345      {
00:24:15.345        "name": "BaseBdev4",
00:24:15.345        "uuid": "7c980182-a73a-4053-ae2d-89406a613080",
00:24:15.345        "is_configured": true,
00:24:15.345        "data_offset": 0,
00:24:15.345        "data_size": 65536
00:24:15.345      }
00:24:15.345    ]
00:24:15.345  }'
00:24:15.345    17:08:08	-- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:24:15.345   17:08:08	-- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]]
00:24:15.345    17:08:08	-- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:24:15.345   17:08:08	-- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]]
00:24:15.345   17:08:08	-- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4
00:24:15.345   17:08:08	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:24:15.345   17:08:08	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:24:15.345   17:08:08	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f
00:24:15.345   17:08:08	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:24:15.345   17:08:08	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:24:15.345   17:08:08	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:24:15.345   17:08:08	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:24:15.345   17:08:08	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:24:15.345   17:08:08	-- bdev/bdev_raid.sh@125 -- # local tmp
00:24:15.345    17:08:08	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:24:15.345    17:08:08	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:24:15.602   17:08:08	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:24:15.602    "name": "raid_bdev1",
00:24:15.602    "uuid": "7288dc6c-e232-49b7-a3be-ee70369c35df",
00:24:15.602    "strip_size_kb": 64,
00:24:15.602    "state": "online",
00:24:15.602    "raid_level": "raid5f",
00:24:15.602    "superblock": false,
00:24:15.602    "num_base_bdevs": 4,
00:24:15.602    "num_base_bdevs_discovered": 4,
00:24:15.602    "num_base_bdevs_operational": 4,
00:24:15.602    "base_bdevs_list": [
00:24:15.602      {
00:24:15.602        "name": "spare",
00:24:15.602        "uuid": "8e6e6650-2f8e-507a-b372-8d732f471ab8",
00:24:15.602        "is_configured": true,
00:24:15.602        "data_offset": 0,
00:24:15.602        "data_size": 65536
00:24:15.602      },
00:24:15.602      {
00:24:15.602        "name": "BaseBdev2",
00:24:15.602        "uuid": "3832362e-a629-4cd0-bad3-afda0dd1ec88",
00:24:15.602        "is_configured": true,
00:24:15.602        "data_offset": 0,
00:24:15.602        "data_size": 65536
00:24:15.602      },
00:24:15.602      {
00:24:15.602        "name": "BaseBdev3",
00:24:15.602        "uuid": "d4a552fd-0944-4323-91f1-fd729c659287",
00:24:15.602        "is_configured": true,
00:24:15.602        "data_offset": 0,
00:24:15.602        "data_size": 65536
00:24:15.602      },
00:24:15.602      {
00:24:15.602        "name": "BaseBdev4",
00:24:15.602        "uuid": "7c980182-a73a-4053-ae2d-89406a613080",
00:24:15.602        "is_configured": true,
00:24:15.602        "data_offset": 0,
00:24:15.602        "data_size": 65536
00:24:15.602      }
00:24:15.602    ]
00:24:15.602  }'
00:24:15.602   17:08:08	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:24:15.602   17:08:08	-- common/autotest_common.sh@10 -- # set +x
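[annotation] verify_raid_bdev_state, whose locals are traced above, fetches one bdev's JSON and compares a handful of fields against the expected values. A sketch reconstructed from those locals and the dumped JSON field names (the exact assertions in bdev_raid.sh may differ):

    # Reconstructed check, assuming plain string comparisons on jq output.
    verify_state() {
        local info=$1 expected_state=$2 raid_level=$3 strip_size=$4 ops=$5
        [[ $(jq -r .state         <<<"$info") == "$expected_state" ]] &&
        [[ $(jq -r .raid_level    <<<"$info") == "$raid_level" ]] &&
        [[ $(jq -r .strip_size_kb <<<"$info") == "$strip_size" ]] &&
        [[ $(jq -r .num_base_bdevs_operational <<<"$info") == "$ops" ]]
    }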
00:24:16.167   17:08:08	-- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1
00:24:16.425  [2024-11-19 17:08:09.153262] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:24:16.425  [2024-11-19 17:08:09.153308] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:24:16.425  [2024-11-19 17:08:09.153423] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:24:16.425  [2024-11-19 17:08:09.153519] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:24:16.425  [2024-11-19 17:08:09.153530] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007e80 name raid_bdev1, state offline
00:24:16.425    17:08:09	-- bdev/bdev_raid.sh@671 -- # jq length
00:24:16.425    17:08:09	-- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:24:16.683   17:08:09	-- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]]
00:24:16.683   17:08:09	-- bdev/bdev_raid.sh@673 -- # '[' false = true ']'
00:24:16.683   17:08:09	-- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1'
00:24:16.683   17:08:09	-- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock
00:24:16.683   17:08:09	-- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare')
00:24:16.683   17:08:09	-- bdev/nbd_common.sh@10 -- # local bdev_list
00:24:16.683   17:08:09	-- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:24:16.683   17:08:09	-- bdev/nbd_common.sh@11 -- # local nbd_list
00:24:16.683   17:08:09	-- bdev/nbd_common.sh@12 -- # local i
00:24:16.683   17:08:09	-- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:24:16.683   17:08:09	-- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:24:16.683   17:08:09	-- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0
00:24:16.941  /dev/nbd0
00:24:16.941    17:08:09	-- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:24:16.941   17:08:09	-- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:24:16.941   17:08:09	-- common/autotest_common.sh@866 -- # local nbd_name=nbd0
00:24:16.941   17:08:09	-- common/autotest_common.sh@867 -- # local i
00:24:16.941   17:08:09	-- common/autotest_common.sh@869 -- # (( i = 1 ))
00:24:16.941   17:08:09	-- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:24:16.941   17:08:09	-- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions
00:24:16.941   17:08:09	-- common/autotest_common.sh@871 -- # break
00:24:16.941   17:08:09	-- common/autotest_common.sh@882 -- # (( i = 1 ))
00:24:16.941   17:08:09	-- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:24:16.941   17:08:09	-- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:24:16.941  1+0 records in
00:24:16.941  1+0 records out
00:24:16.941  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000237958 s, 17.2 MB/s
00:24:16.941    17:08:09	-- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:24:16.941   17:08:09	-- common/autotest_common.sh@884 -- # size=4096
00:24:16.941   17:08:09	-- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:24:16.941   17:08:09	-- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:24:16.941   17:08:09	-- common/autotest_common.sh@887 -- # return 0
00:24:16.941   17:08:09	-- bdev/nbd_common.sh@14 -- # (( i++ ))
00:24:16.941   17:08:09	-- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:24:16.941   17:08:09	-- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1
00:24:17.199  /dev/nbd1
00:24:17.199    17:08:09	-- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:24:17.199   17:08:10	-- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:24:17.199   17:08:10	-- common/autotest_common.sh@866 -- # local nbd_name=nbd1
00:24:17.199   17:08:10	-- common/autotest_common.sh@867 -- # local i
00:24:17.199   17:08:10	-- common/autotest_common.sh@869 -- # (( i = 1 ))
00:24:17.199   17:08:10	-- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:24:17.199   17:08:10	-- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions
00:24:17.199   17:08:10	-- common/autotest_common.sh@871 -- # break
00:24:17.199   17:08:10	-- common/autotest_common.sh@882 -- # (( i = 1 ))
00:24:17.199   17:08:10	-- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:24:17.199   17:08:10	-- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:24:17.199  1+0 records in
00:24:17.199  1+0 records out
00:24:17.199  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000514882 s, 8.0 MB/s
00:24:17.199    17:08:10	-- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:24:17.199   17:08:10	-- common/autotest_common.sh@884 -- # size=4096
00:24:17.199   17:08:10	-- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:24:17.199   17:08:10	-- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:24:17.199   17:08:10	-- common/autotest_common.sh@887 -- # return 0
00:24:17.199   17:08:10	-- bdev/nbd_common.sh@14 -- # (( i++ ))
00:24:17.199   17:08:10	-- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:24:17.199   17:08:10	-- bdev/bdev_raid.sh@688 -- # cmp -i 0 /dev/nbd0 /dev/nbd1
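[annotation] This is the data-integrity check for the rebuild: the original BaseBdev1 and the rebuilt spare are both exported as NBD block devices and byte-compared. cmp producing no output and exiting 0, as here, means the spare is an exact copy. The sequence, with the RPCs exactly as they appear in the trace:

    # Export both bdevs over NBD and compare their contents byte for byte.
    scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0
    scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare     /dev/nbd1
    cmp -i 0 /dev/nbd0 /dev/nbd1    # silent exit 0 => devices identical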
00:24:17.457   17:08:10	-- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1'
00:24:17.457   17:08:10	-- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock
00:24:17.457   17:08:10	-- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:24:17.457   17:08:10	-- bdev/nbd_common.sh@50 -- # local nbd_list
00:24:17.457   17:08:10	-- bdev/nbd_common.sh@51 -- # local i
00:24:17.457   17:08:10	-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:24:17.457   17:08:10	-- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0
00:24:17.715    17:08:10	-- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:24:17.715   17:08:10	-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:24:17.715   17:08:10	-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:24:17.715   17:08:10	-- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:24:17.715   17:08:10	-- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:24:17.715   17:08:10	-- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:24:17.715   17:08:10	-- bdev/nbd_common.sh@41 -- # break
00:24:17.715   17:08:10	-- bdev/nbd_common.sh@45 -- # return 0
00:24:17.715   17:08:10	-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:24:17.715   17:08:10	-- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1
00:24:17.715    17:08:10	-- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:24:17.715   17:08:10	-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:24:17.715   17:08:10	-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:24:17.715   17:08:10	-- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:24:17.715   17:08:10	-- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:24:17.715   17:08:10	-- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:24:17.715   17:08:10	-- bdev/nbd_common.sh@41 -- # break
00:24:17.715   17:08:10	-- bdev/nbd_common.sh@45 -- # return 0
00:24:17.715   17:08:10	-- bdev/bdev_raid.sh@692 -- # '[' false = true ']'
00:24:17.715   17:08:10	-- bdev/bdev_raid.sh@709 -- # killprocess 141333
00:24:17.715   17:08:10	-- common/autotest_common.sh@936 -- # '[' -z 141333 ']'
00:24:17.715   17:08:10	-- common/autotest_common.sh@940 -- # kill -0 141333
00:24:17.715    17:08:10	-- common/autotest_common.sh@941 -- # uname
00:24:17.716   17:08:10	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:24:17.716    17:08:10	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 141333
00:24:17.974  killing process with pid 141333
00:24:17.974  Received shutdown signal, test time was about 60.000000 seconds
00:24:17.974  
00:24:17.974                                                                                                  Latency(us)
00:24:17.974  
[2024-11-19T17:08:10.838Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:24:17.974  
[2024-11-19T17:08:10.838Z]  ===================================================================================================================
00:24:17.974  
[2024-11-19T17:08:10.838Z]  Total                       :                  0.00       0.00       0.00     0.00       0.00 18446744073709551616.00       0.00
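[annotation] The all-zero Total row with min = 18446744073709551616.00 is bdevperf's shutdown summary for a window in which no I/O completed: the min-latency accumulator presumably still holds its initial UINT64_MAX (18446744073709551615), and converting that to a double rounds it up to exactly 2^64 before printing. The rounding itself is easy to reproduce:

    # UINT64_MAX is not representable as a double; strtod rounds it up to 2^64.
    printf '%.2f\n' 18446744073709551615    # prints 18446744073709551616.00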
00:24:17.974   17:08:10	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:24:17.974   17:08:10	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:24:17.974   17:08:10	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 141333'
00:24:17.974   17:08:10	-- common/autotest_common.sh@955 -- # kill 141333
00:24:17.974   17:08:10	-- common/autotest_common.sh@960 -- # wait 141333
00:24:17.974  [2024-11-19 17:08:10.579482] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:24:17.974  [2024-11-19 17:08:10.633908] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:24:18.232   17:08:10	-- bdev/bdev_raid.sh@711 -- # return 0
00:24:18.232  
00:24:18.232  real	0m23.948s
00:24:18.232  user	0m34.732s
00:24:18.232  sys	0m3.544s
00:24:18.232   17:08:10	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:24:18.232   17:08:10	-- common/autotest_common.sh@10 -- # set +x
00:24:18.232  ************************************
00:24:18.232  END TEST raid5f_rebuild_test
00:24:18.232  ************************************
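[annotation] The next test is launched through the harness's run_test wrapper: the first argument names the test for the START/END banners, the rest is the command to run. So the line below executes raid_rebuild_test with raid5f, 4 base bdevs, superblock=true, background I/O=false, which matches the four locals traced a few lines further down. Signature as reconstructed from those locals (body elided):

    raid_rebuild_test() {
        local raid_level=$1        # raid5f
        local num_base_bdevs=$2    # 4
        local superblock=$3        # true
        local background_io=$4     # false
    }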
00:24:18.232   17:08:10	-- bdev/bdev_raid.sh@749 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 4 true false
00:24:18.232   17:08:10	-- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']'
00:24:18.232   17:08:10	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:24:18.232   17:08:10	-- common/autotest_common.sh@10 -- # set +x
00:24:18.232  ************************************
00:24:18.232  START TEST raid5f_rebuild_test_sb
00:24:18.232  ************************************
00:24:18.232   17:08:10	-- common/autotest_common.sh@1114 -- # raid_rebuild_test raid5f 4 true false
00:24:18.232   17:08:10	-- bdev/bdev_raid.sh@517 -- # local raid_level=raid5f
00:24:18.232   17:08:10	-- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=4
00:24:18.232   17:08:10	-- bdev/bdev_raid.sh@519 -- # local superblock=true
00:24:18.232   17:08:10	-- bdev/bdev_raid.sh@520 -- # local background_io=false
00:24:18.232    17:08:10	-- bdev/bdev_raid.sh@521 -- # (( i = 1 ))
00:24:18.232    17:08:10	-- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs ))
00:24:18.232    17:08:10	-- bdev/bdev_raid.sh@521 -- # echo BaseBdev1
00:24:18.232    17:08:10	-- bdev/bdev_raid.sh@521 -- # (( i++ ))
00:24:18.232    17:08:10	-- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs ))
00:24:18.232    17:08:10	-- bdev/bdev_raid.sh@521 -- # echo BaseBdev2
00:24:18.232    17:08:10	-- bdev/bdev_raid.sh@521 -- # (( i++ ))
00:24:18.232    17:08:10	-- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs ))
00:24:18.232    17:08:10	-- bdev/bdev_raid.sh@521 -- # echo BaseBdev3
00:24:18.232    17:08:10	-- bdev/bdev_raid.sh@521 -- # (( i++ ))
00:24:18.232    17:08:10	-- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs ))
00:24:18.232    17:08:10	-- bdev/bdev_raid.sh@521 -- # echo BaseBdev4
00:24:18.232    17:08:10	-- bdev/bdev_raid.sh@521 -- # (( i++ ))
00:24:18.232    17:08:10	-- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs ))
00:24:18.232   17:08:10	-- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4')
00:24:18.232   17:08:10	-- bdev/bdev_raid.sh@521 -- # local base_bdevs
00:24:18.232   17:08:10	-- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1
00:24:18.232   17:08:10	-- bdev/bdev_raid.sh@523 -- # local strip_size
00:24:18.232   17:08:10	-- bdev/bdev_raid.sh@524 -- # local create_arg
00:24:18.232   17:08:10	-- bdev/bdev_raid.sh@525 -- # local raid_bdev_size
00:24:18.232   17:08:10	-- bdev/bdev_raid.sh@526 -- # local data_offset
00:24:18.232   17:08:10	-- bdev/bdev_raid.sh@528 -- # '[' raid5f '!=' raid1 ']'
00:24:18.232   17:08:10	-- bdev/bdev_raid.sh@529 -- # '[' false = true ']'
00:24:18.232   17:08:10	-- bdev/bdev_raid.sh@533 -- # strip_size=64
00:24:18.232   17:08:10	-- bdev/bdev_raid.sh@534 -- # create_arg+=' -z 64'
00:24:18.232   17:08:10	-- bdev/bdev_raid.sh@539 -- # '[' true = true ']'
00:24:18.232   17:08:10	-- bdev/bdev_raid.sh@540 -- # create_arg+=' -s'
00:24:18.232   17:08:10	-- bdev/bdev_raid.sh@544 -- # raid_pid=141936
00:24:18.232   17:08:10	-- bdev/bdev_raid.sh@545 -- # waitforlisten 141936 /var/tmp/spdk-raid.sock
00:24:18.232   17:08:10	-- common/autotest_common.sh@829 -- # '[' -z 141936 ']'
00:24:18.232   17:08:10	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock
00:24:18.232   17:08:10	-- common/autotest_common.sh@834 -- # local max_retries=100
00:24:18.233   17:08:10	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...'
00:24:18.233   17:08:10	-- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid
00:24:18.233  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...
00:24:18.233   17:08:10	-- common/autotest_common.sh@838 -- # xtrace_disable
00:24:18.233   17:08:10	-- common/autotest_common.sh@10 -- # set +x
00:24:18.233  I/O size of 3145728 is greater than zero copy threshold (65536).
00:24:18.233  Zero copy mechanism will not be used.
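[annotation] The bdevperf invocation above drives a 60-second, 50/50 random read/write load against raid_bdev1 with 3 MiB I/Os at queue depth 2; 3 MiB = 3 x 1048576 = 3145728 bytes, which exceeds the 65536-byte zero-copy threshold, hence the notice and the fallback to bounce buffers. A hedged reading of the flags (from bdevperf usage as I know it; -U left unannotated):

    # -r: RPC socket to serve        -T: target bdev       -t: runtime (s)
    # -w randrw -M 50: random mixed I/O, 50% reads
    # -o 3M: I/O size (3145728 B)    -q 2: queue depth
    # -z: start idle, wait for an RPC to kick off the run (as I read bdevperf)
    # -L bdev_raid: enable debug logging for that component
    build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 \
        -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid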
00:24:18.233  [2024-11-19 17:08:11.026425] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:24:18.233  [2024-11-19 17:08:11.026604] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid141936 ]
00:24:18.490  [2024-11-19 17:08:11.171877] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:24:18.490  [2024-11-19 17:08:11.222432] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:24:18.490  [2024-11-19 17:08:11.266382] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:24:19.423   17:08:11	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:24:19.423   17:08:11	-- common/autotest_common.sh@862 -- # return 0
00:24:19.423   17:08:11	-- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}"
00:24:19.423   17:08:11	-- bdev/bdev_raid.sh@549 -- # '[' true = true ']'
00:24:19.423   17:08:11	-- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc
00:24:19.423  BaseBdev1_malloc
00:24:19.423   17:08:12	-- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1
00:24:19.681  [2024-11-19 17:08:12.368219] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc
00:24:19.681  [2024-11-19 17:08:12.368332] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:24:19.681  [2024-11-19 17:08:12.368381] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000005a80
00:24:19.681  [2024-11-19 17:08:12.368438] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:24:19.681  [2024-11-19 17:08:12.371238] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:24:19.681  [2024-11-19 17:08:12.371312] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:24:19.681  BaseBdev1
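[annotation] Each base bdev is a 32 MiB RAM-backed malloc disk with 512-byte blocks (32 MiB / 512 B = 65536 blocks), wrapped in a passthru bdev, presumably to give the RAID a distinct, claimable device name; the same two RPCs repeat for BaseBdev2-4 below. Straight from the trace:

    # 32 MiB malloc bdev, 512 B blocks => 65536 blocks per base bdev.
    scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc
    scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1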
00:24:19.681   17:08:12	-- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}"
00:24:19.681   17:08:12	-- bdev/bdev_raid.sh@549 -- # '[' true = true ']'
00:24:19.681   17:08:12	-- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc
00:24:19.939  BaseBdev2_malloc
00:24:19.939   17:08:12	-- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2
00:24:19.939  [2024-11-19 17:08:12.785509] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc
00:24:19.939  [2024-11-19 17:08:12.785630] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:24:19.939  [2024-11-19 17:08:12.785669] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680
00:24:19.939  [2024-11-19 17:08:12.785711] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:24:19.939  [2024-11-19 17:08:12.788188] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:24:19.939  [2024-11-19 17:08:12.788245] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2
00:24:19.939  BaseBdev2
00:24:20.197   17:08:12	-- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}"
00:24:20.197   17:08:12	-- bdev/bdev_raid.sh@549 -- # '[' true = true ']'
00:24:20.197   17:08:12	-- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc
00:24:20.197  BaseBdev3_malloc
00:24:20.197   17:08:13	-- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3
00:24:20.456  [2024-11-19 17:08:13.206224] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc
00:24:20.456  [2024-11-19 17:08:13.206310] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:24:20.456  [2024-11-19 17:08:13.206349] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
00:24:20.456  [2024-11-19 17:08:13.206395] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:24:20.456  [2024-11-19 17:08:13.208873] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:24:20.456  [2024-11-19 17:08:13.208933] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3
00:24:20.456  BaseBdev3
00:24:20.456   17:08:13	-- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}"
00:24:20.456   17:08:13	-- bdev/bdev_raid.sh@549 -- # '[' true = true ']'
00:24:20.456   17:08:13	-- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc
00:24:20.715  BaseBdev4_malloc
00:24:20.715   17:08:13	-- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4
00:24:20.973  [2024-11-19 17:08:13.624647] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc
00:24:20.973  [2024-11-19 17:08:13.624757] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:24:20.973  [2024-11-19 17:08:13.624810] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80
00:24:20.973  [2024-11-19 17:08:13.624852] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:24:20.973  [2024-11-19 17:08:13.627348] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:24:20.973  [2024-11-19 17:08:13.627425] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4
00:24:20.973  BaseBdev4
00:24:20.973   17:08:13	-- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc
00:24:21.232  spare_malloc
00:24:21.232   17:08:13	-- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000
00:24:21.232  spare_delay
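[annotation] The spare is a malloc bdev stacked under a delay bdev. As I read the bdev_delay_create RPC, the four numbers are average/p99 read latency and average/p99 write latency in microseconds, so reads pass through untouched while every write to the spare eats ~100 ms, presumably to keep the rebuild slow enough for the polling loop to observe it mid-flight:

    # spare = malloc disk + 100 ms write delay (reads undelayed).
    scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc
    scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay \
        -r 0 -t 0 -w 100000 -n 100000   # avg/p99 read, avg/p99 write in us (assumed flag meaning)
    scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare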
00:24:21.232   17:08:14	-- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare
00:24:21.490  [2024-11-19 17:08:14.270010] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay
00:24:21.490  [2024-11-19 17:08:14.270111] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:24:21.490  [2024-11-19 17:08:14.270145] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080
00:24:21.490  [2024-11-19 17:08:14.270185] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:24:21.490  [2024-11-19 17:08:14.272816] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:24:21.490  [2024-11-19 17:08:14.272879] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare
00:24:21.490  spare
00:24:21.490   17:08:14	-- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1
00:24:21.749  [2024-11-19 17:08:14.466143] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:24:21.749  [2024-11-19 17:08:14.468385] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:24:21.749  [2024-11-19 17:08:14.468458] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:24:21.749  [2024-11-19 17:08:14.468499] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed
00:24:21.749  [2024-11-19 17:08:14.468695] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009680
00:24:21.749  [2024-11-19 17:08:14.468715] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512
00:24:21.749  [2024-11-19 17:08:14.468900] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600
00:24:21.749  [2024-11-19 17:08:14.469700] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009680
00:24:21.749  [2024-11-19 17:08:14.469723] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009680
00:24:21.749  [2024-11-19 17:08:14.469861] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:24:21.749   17:08:14	-- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4
00:24:21.749   17:08:14	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:24:21.749   17:08:14	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:24:21.749   17:08:14	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f
00:24:21.749   17:08:14	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:24:21.749   17:08:14	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:24:21.749   17:08:14	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:24:21.749   17:08:14	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:24:21.749   17:08:14	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:24:21.749   17:08:14	-- bdev/bdev_raid.sh@125 -- # local tmp
00:24:21.749    17:08:14	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:24:21.749    17:08:14	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:24:22.007   17:08:14	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:24:22.007    "name": "raid_bdev1",
00:24:22.007    "uuid": "dd2872a9-6d17-4991-9ff7-64e80303d618",
00:24:22.007    "strip_size_kb": 64,
00:24:22.007    "state": "online",
00:24:22.007    "raid_level": "raid5f",
00:24:22.007    "superblock": true,
00:24:22.007    "num_base_bdevs": 4,
00:24:22.007    "num_base_bdevs_discovered": 4,
00:24:22.007    "num_base_bdevs_operational": 4,
00:24:22.007    "base_bdevs_list": [
00:24:22.007      {
00:24:22.007        "name": "BaseBdev1",
00:24:22.007        "uuid": "717f7e2c-b676-5fdb-bf87-d8742ff8502d",
00:24:22.007        "is_configured": true,
00:24:22.007        "data_offset": 2048,
00:24:22.007        "data_size": 63488
00:24:22.007      },
00:24:22.007      {
00:24:22.007        "name": "BaseBdev2",
00:24:22.007        "uuid": "a57adbf0-bd6b-5305-9d5c-1455f0a4789a",
00:24:22.007        "is_configured": true,
00:24:22.007        "data_offset": 2048,
00:24:22.007        "data_size": 63488
00:24:22.007      },
00:24:22.007      {
00:24:22.007        "name": "BaseBdev3",
00:24:22.007        "uuid": "3c1eb8e7-ce4e-5c31-bd67-9af9f4b607e6",
00:24:22.007        "is_configured": true,
00:24:22.007        "data_offset": 2048,
00:24:22.007        "data_size": 63488
00:24:22.007      },
00:24:22.007      {
00:24:22.007        "name": "BaseBdev4",
00:24:22.007        "uuid": "cb293b39-f1fc-545d-b8ce-f76ca314dd22",
00:24:22.008        "is_configured": true,
00:24:22.008        "data_offset": 2048,
00:24:22.008        "data_size": 63488
00:24:22.008      }
00:24:22.008    ]
00:24:22.008  }'
00:24:22.008   17:08:14	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:24:22.008   17:08:14	-- common/autotest_common.sh@10 -- # set +x
00:24:22.579    17:08:15	-- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1
00:24:22.579    17:08:15	-- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks'
00:24:22.845  [2024-11-19 17:08:15.628276] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:24:22.845   17:08:15	-- bdev/bdev_raid.sh@567 -- # raid_bdev_size=190464
00:24:22.845    17:08:15	-- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:24:22.845    17:08:15	-- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset'
00:24:23.103   17:08:15	-- bdev/bdev_raid.sh@570 -- # data_offset=2048
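[annotation] The two numbers just extracted are consistent with a 4-disk raid5f built with -s: the superblock reserves 2048 blocks (2048 x 512 B = 1 MiB) at the start of each 65536-block base bdev, leaving data_size 63488, and with one parity strip per stripe the array exposes (4 - 1) x 63488 = 190464 blocks, matching the blockcnt logged at creation time. Quick check:

    echo $(( 65536 - 2048 ))      # 63488  data blocks per base bdev
    echo $(( (4 - 1) * 63488 ))   # 190464 raid_bdev_size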
00:24:23.103   17:08:15	-- bdev/bdev_raid.sh@572 -- # '[' false = true ']'
00:24:23.103   17:08:15	-- bdev/bdev_raid.sh@576 -- # local write_unit_size
00:24:23.103   17:08:15	-- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0
00:24:23.103   17:08:15	-- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock
00:24:23.103   17:08:15	-- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1')
00:24:23.103   17:08:15	-- bdev/nbd_common.sh@10 -- # local bdev_list
00:24:23.103   17:08:15	-- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0')
00:24:23.103   17:08:15	-- bdev/nbd_common.sh@11 -- # local nbd_list
00:24:23.103   17:08:15	-- bdev/nbd_common.sh@12 -- # local i
00:24:23.103   17:08:15	-- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:24:23.103   17:08:15	-- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:24:23.103   17:08:15	-- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0
00:24:23.361  [2024-11-19 17:08:16.160282] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000027a0
00:24:23.361  /dev/nbd0
00:24:23.361    17:08:16	-- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:24:23.361   17:08:16	-- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:24:23.361   17:08:16	-- common/autotest_common.sh@866 -- # local nbd_name=nbd0
00:24:23.361   17:08:16	-- common/autotest_common.sh@867 -- # local i
00:24:23.361   17:08:16	-- common/autotest_common.sh@869 -- # (( i = 1 ))
00:24:23.361   17:08:16	-- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:24:23.361   17:08:16	-- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions
00:24:23.361   17:08:16	-- common/autotest_common.sh@871 -- # break
00:24:23.361   17:08:16	-- common/autotest_common.sh@882 -- # (( i = 1 ))
00:24:23.620   17:08:16	-- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:24:23.620   17:08:16	-- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:24:23.620  1+0 records in
00:24:23.620  1+0 records out
00:24:23.620  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000262629 s, 15.6 MB/s
00:24:23.620    17:08:16	-- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:24:23.620   17:08:16	-- common/autotest_common.sh@884 -- # size=4096
00:24:23.620   17:08:16	-- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:24:23.620   17:08:16	-- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:24:23.620   17:08:16	-- common/autotest_common.sh@887 -- # return 0
00:24:23.620   17:08:16	-- bdev/nbd_common.sh@14 -- # (( i++ ))
00:24:23.620   17:08:16	-- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:24:23.620   17:08:16	-- bdev/bdev_raid.sh@580 -- # '[' raid5f = raid5f ']'
00:24:23.620   17:08:16	-- bdev/bdev_raid.sh@581 -- # write_unit_size=384
00:24:23.620   17:08:16	-- bdev/bdev_raid.sh@582 -- # echo 192
00:24:23.620   17:08:16	-- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=496 oflag=direct
00:24:23.878  496+0 records in
00:24:23.878  496+0 records out
00:24:23.878  97517568 bytes (98 MB, 93 MiB) copied, 0.477603 s, 204 MB/s
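[annotation] For raid5f the test rounds writes to full stripes: a 64 KiB strip x (4 - 1) data strips = 192 KiB = 196608 bytes = 384 blocks of 512 B, which is exactly the write_unit_size=384 and echo 192 above, and the dd block size. 496 such stripe writes cover the whole array, since 496 x 196608 = 97517568 bytes = 190464 x 512. Quick check:

    echo $(( 64 * 1024 * 3 ))   # 196608 bytes per full stripe (dd bs)
    echo $(( 196608 / 512 ))    # 384 blocks = write_unit_size
    echo $(( 190464 / 384 ))    # 496 full-stripe writes (dd count)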
00:24:23.878   17:08:16	-- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0
00:24:23.878   17:08:16	-- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock
00:24:23.878   17:08:16	-- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:24:23.878   17:08:16	-- bdev/nbd_common.sh@50 -- # local nbd_list
00:24:23.878   17:08:16	-- bdev/nbd_common.sh@51 -- # local i
00:24:23.878   17:08:16	-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:24:23.878   17:08:16	-- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0
00:24:24.443    17:08:16	-- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:24:24.443  [2024-11-19 17:08:16.995157] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:24:24.443   17:08:16	-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:24:24.443   17:08:16	-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:24:24.443   17:08:16	-- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:24:24.443   17:08:16	-- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:24:24.443   17:08:17	-- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:24:24.443   17:08:17	-- bdev/nbd_common.sh@41 -- # break
00:24:24.443   17:08:17	-- bdev/nbd_common.sh@45 -- # return 0
00:24:24.443   17:08:17	-- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1
00:24:24.443  [2024-11-19 17:08:17.194605] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:24:24.443   17:08:17	-- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3
00:24:24.443   17:08:17	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:24:24.443   17:08:17	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:24:24.443   17:08:17	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f
00:24:24.443   17:08:17	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:24:24.443   17:08:17	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:24:24.443   17:08:17	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:24:24.443   17:08:17	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:24:24.443   17:08:17	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:24:24.443   17:08:17	-- bdev/bdev_raid.sh@125 -- # local tmp
00:24:24.443    17:08:17	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:24:24.443    17:08:17	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:24:24.701   17:08:17	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:24:24.701    "name": "raid_bdev1",
00:24:24.701    "uuid": "dd2872a9-6d17-4991-9ff7-64e80303d618",
00:24:24.701    "strip_size_kb": 64,
00:24:24.701    "state": "online",
00:24:24.701    "raid_level": "raid5f",
00:24:24.701    "superblock": true,
00:24:24.701    "num_base_bdevs": 4,
00:24:24.702    "num_base_bdevs_discovered": 3,
00:24:24.702    "num_base_bdevs_operational": 3,
00:24:24.702    "base_bdevs_list": [
00:24:24.702      {
00:24:24.702        "name": null,
00:24:24.702        "uuid": "00000000-0000-0000-0000-000000000000",
00:24:24.702        "is_configured": false,
00:24:24.702        "data_offset": 2048,
00:24:24.702        "data_size": 63488
00:24:24.702      },
00:24:24.702      {
00:24:24.702        "name": "BaseBdev2",
00:24:24.702        "uuid": "a57adbf0-bd6b-5305-9d5c-1455f0a4789a",
00:24:24.702        "is_configured": true,
00:24:24.702        "data_offset": 2048,
00:24:24.702        "data_size": 63488
00:24:24.702      },
00:24:24.702      {
00:24:24.702        "name": "BaseBdev3",
00:24:24.702        "uuid": "3c1eb8e7-ce4e-5c31-bd67-9af9f4b607e6",
00:24:24.702        "is_configured": true,
00:24:24.702        "data_offset": 2048,
00:24:24.702        "data_size": 63488
00:24:24.702      },
00:24:24.702      {
00:24:24.702        "name": "BaseBdev4",
00:24:24.702        "uuid": "cb293b39-f1fc-545d-b8ce-f76ca314dd22",
00:24:24.702        "is_configured": true,
00:24:24.702        "data_offset": 2048,
00:24:24.702        "data_size": 63488
00:24:24.702      }
00:24:24.702    ]
00:24:24.702  }'
00:24:24.702   17:08:17	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:24:24.702   17:08:17	-- common/autotest_common.sh@10 -- # set +x
00:24:25.268   17:08:18	-- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare
00:24:25.526  [2024-11-19 17:08:18.358922] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare
00:24:25.526  [2024-11-19 17:08:18.358996] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:24:25.526  [2024-11-19 17:08:18.362802] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000270a0
00:24:25.526  [2024-11-19 17:08:18.366168] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:24:25.526   17:08:18	-- bdev/bdev_raid.sh@598 -- # sleep 1
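[annotation] This is the degrade-and-repair step: BaseBdev1 was removed at sh@591, the array verified as degraded (3 of 4 operational), and attaching the spare here kicks off the rebuild NOTICE'd above; the sleep hands control back before the first poll. As traced:

    # Attach the spare into the slot vacated by BaseBdev1; rebuild starts at once.
    scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare
    sleep 1    # give the rebuild a beat before polling verify_raid_bdev_process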
00:24:26.900   17:08:19	-- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:24:26.900   17:08:19	-- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:24:26.900   17:08:19	-- bdev/bdev_raid.sh@184 -- # local process_type=rebuild
00:24:26.900   17:08:19	-- bdev/bdev_raid.sh@185 -- # local target=spare
00:24:26.900   17:08:19	-- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:24:26.900    17:08:19	-- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:24:26.900    17:08:19	-- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:24:26.900   17:08:19	-- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:24:26.900    "name": "raid_bdev1",
00:24:26.900    "uuid": "dd2872a9-6d17-4991-9ff7-64e80303d618",
00:24:26.900    "strip_size_kb": 64,
00:24:26.900    "state": "online",
00:24:26.900    "raid_level": "raid5f",
00:24:26.900    "superblock": true,
00:24:26.900    "num_base_bdevs": 4,
00:24:26.900    "num_base_bdevs_discovered": 4,
00:24:26.900    "num_base_bdevs_operational": 4,
00:24:26.900    "process": {
00:24:26.900      "type": "rebuild",
00:24:26.900      "target": "spare",
00:24:26.900      "progress": {
00:24:26.900        "blocks": 23040,
00:24:26.900        "percent": 12
00:24:26.900      }
00:24:26.900    },
00:24:26.900    "base_bdevs_list": [
00:24:26.900      {
00:24:26.900        "name": "spare",
00:24:26.900        "uuid": "df888a7f-b975-5380-b504-d8717d1a0743",
00:24:26.900        "is_configured": true,
00:24:26.900        "data_offset": 2048,
00:24:26.900        "data_size": 63488
00:24:26.900      },
00:24:26.900      {
00:24:26.900        "name": "BaseBdev2",
00:24:26.900        "uuid": "a57adbf0-bd6b-5305-9d5c-1455f0a4789a",
00:24:26.900        "is_configured": true,
00:24:26.900        "data_offset": 2048,
00:24:26.900        "data_size": 63488
00:24:26.900      },
00:24:26.900      {
00:24:26.900        "name": "BaseBdev3",
00:24:26.900        "uuid": "3c1eb8e7-ce4e-5c31-bd67-9af9f4b607e6",
00:24:26.900        "is_configured": true,
00:24:26.900        "data_offset": 2048,
00:24:26.900        "data_size": 63488
00:24:26.900      },
00:24:26.900      {
00:24:26.900        "name": "BaseBdev4",
00:24:26.900        "uuid": "cb293b39-f1fc-545d-b8ce-f76ca314dd22",
00:24:26.900        "is_configured": true,
00:24:26.900        "data_offset": 2048,
00:24:26.900        "data_size": 63488
00:24:26.900      }
00:24:26.900    ]
00:24:26.900  }'
00:24:26.900    17:08:19	-- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:24:26.900   17:08:19	-- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:24:26.900    17:08:19	-- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:24:26.900   17:08:19	-- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]]
00:24:26.900   17:08:19	-- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare
00:24:27.206  [2024-11-19 17:08:19.980968] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:24:27.465  [2024-11-19 17:08:20.081263] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device
00:24:27.465  [2024-11-19 17:08:20.081380] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:24:27.465   17:08:20	-- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3
00:24:27.465   17:08:20	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:24:27.465   17:08:20	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:24:27.465   17:08:20	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f
00:24:27.465   17:08:20	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:24:27.465   17:08:20	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:24:27.465   17:08:20	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:24:27.465   17:08:20	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:24:27.465   17:08:20	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:24:27.465   17:08:20	-- bdev/bdev_raid.sh@125 -- # local tmp
00:24:27.465    17:08:20	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:24:27.465    17:08:20	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:24:27.465   17:08:20	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:24:27.465    "name": "raid_bdev1",
00:24:27.465    "uuid": "dd2872a9-6d17-4991-9ff7-64e80303d618",
00:24:27.465    "strip_size_kb": 64,
00:24:27.465    "state": "online",
00:24:27.465    "raid_level": "raid5f",
00:24:27.465    "superblock": true,
00:24:27.465    "num_base_bdevs": 4,
00:24:27.465    "num_base_bdevs_discovered": 3,
00:24:27.465    "num_base_bdevs_operational": 3,
00:24:27.465    "base_bdevs_list": [
00:24:27.465      {
00:24:27.465        "name": null,
00:24:27.465        "uuid": "00000000-0000-0000-0000-000000000000",
00:24:27.465        "is_configured": false,
00:24:27.465        "data_offset": 2048,
00:24:27.465        "data_size": 63488
00:24:27.465      },
00:24:27.465      {
00:24:27.465        "name": "BaseBdev2",
00:24:27.465        "uuid": "a57adbf0-bd6b-5305-9d5c-1455f0a4789a",
00:24:27.465        "is_configured": true,
00:24:27.465        "data_offset": 2048,
00:24:27.465        "data_size": 63488
00:24:27.465      },
00:24:27.465      {
00:24:27.465        "name": "BaseBdev3",
00:24:27.465        "uuid": "3c1eb8e7-ce4e-5c31-bd67-9af9f4b607e6",
00:24:27.465        "is_configured": true,
00:24:27.465        "data_offset": 2048,
00:24:27.465        "data_size": 63488
00:24:27.465      },
00:24:27.465      {
00:24:27.465        "name": "BaseBdev4",
00:24:27.465        "uuid": "cb293b39-f1fc-545d-b8ce-f76ca314dd22",
00:24:27.465        "is_configured": true,
00:24:27.465        "data_offset": 2048,
00:24:27.465        "data_size": 63488
00:24:27.465      }
00:24:27.465    ]
00:24:27.465  }'
00:24:27.465   17:08:20	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:24:27.465   17:08:20	-- common/autotest_common.sh@10 -- # set +x
00:24:28.400   17:08:20	-- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none
00:24:28.400   17:08:20	-- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:24:28.400   17:08:20	-- bdev/bdev_raid.sh@184 -- # local process_type=none
00:24:28.400   17:08:20	-- bdev/bdev_raid.sh@185 -- # local target=none
00:24:28.400   17:08:20	-- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:24:28.400    17:08:20	-- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:24:28.400    17:08:20	-- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:24:28.400   17:08:21	-- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:24:28.400    "name": "raid_bdev1",
00:24:28.400    "uuid": "dd2872a9-6d17-4991-9ff7-64e80303d618",
00:24:28.400    "strip_size_kb": 64,
00:24:28.400    "state": "online",
00:24:28.400    "raid_level": "raid5f",
00:24:28.400    "superblock": true,
00:24:28.400    "num_base_bdevs": 4,
00:24:28.400    "num_base_bdevs_discovered": 3,
00:24:28.400    "num_base_bdevs_operational": 3,
00:24:28.400    "base_bdevs_list": [
00:24:28.400      {
00:24:28.400        "name": null,
00:24:28.400        "uuid": "00000000-0000-0000-0000-000000000000",
00:24:28.400        "is_configured": false,
00:24:28.400        "data_offset": 2048,
00:24:28.400        "data_size": 63488
00:24:28.400      },
00:24:28.400      {
00:24:28.400        "name": "BaseBdev2",
00:24:28.400        "uuid": "a57adbf0-bd6b-5305-9d5c-1455f0a4789a",
00:24:28.400        "is_configured": true,
00:24:28.400        "data_offset": 2048,
00:24:28.400        "data_size": 63488
00:24:28.400      },
00:24:28.400      {
00:24:28.400        "name": "BaseBdev3",
00:24:28.400        "uuid": "3c1eb8e7-ce4e-5c31-bd67-9af9f4b607e6",
00:24:28.400        "is_configured": true,
00:24:28.400        "data_offset": 2048,
00:24:28.400        "data_size": 63488
00:24:28.400      },
00:24:28.400      {
00:24:28.400        "name": "BaseBdev4",
00:24:28.400        "uuid": "cb293b39-f1fc-545d-b8ce-f76ca314dd22",
00:24:28.400        "is_configured": true,
00:24:28.400        "data_offset": 2048,
00:24:28.400        "data_size": 63488
00:24:28.400      }
00:24:28.400    ]
00:24:28.400  }'
00:24:28.400    17:08:21	-- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:24:28.400   17:08:21	-- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]]
00:24:28.400    17:08:21	-- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:24:28.658   17:08:21	-- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]]
00:24:28.658   17:08:21	-- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare
00:24:28.916  [2024-11-19 17:08:21.612249] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare
00:24:28.917  [2024-11-19 17:08:21.612305] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:24:28.917  [2024-11-19 17:08:21.615844] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000027240
00:24:28.917  [2024-11-19 17:08:21.618589] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:24:28.917   17:08:21	-- bdev/bdev_raid.sh@614 -- # sleep 1
00:24:29.851   17:08:22	-- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:24:29.852   17:08:22	-- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:24:29.852   17:08:22	-- bdev/bdev_raid.sh@184 -- # local process_type=rebuild
00:24:29.852   17:08:22	-- bdev/bdev_raid.sh@185 -- # local target=spare
00:24:29.852   17:08:22	-- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:24:29.852    17:08:22	-- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:24:29.852    17:08:22	-- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:24:30.110   17:08:22	-- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:24:30.110    "name": "raid_bdev1",
00:24:30.110    "uuid": "dd2872a9-6d17-4991-9ff7-64e80303d618",
00:24:30.110    "strip_size_kb": 64,
00:24:30.110    "state": "online",
00:24:30.110    "raid_level": "raid5f",
00:24:30.110    "superblock": true,
00:24:30.110    "num_base_bdevs": 4,
00:24:30.110    "num_base_bdevs_discovered": 4,
00:24:30.110    "num_base_bdevs_operational": 4,
00:24:30.110    "process": {
00:24:30.110      "type": "rebuild",
00:24:30.110      "target": "spare",
00:24:30.110      "progress": {
00:24:30.110        "blocks": 23040,
00:24:30.110        "percent": 12
00:24:30.110      }
00:24:30.110    },
00:24:30.110    "base_bdevs_list": [
00:24:30.110      {
00:24:30.110        "name": "spare",
00:24:30.110        "uuid": "df888a7f-b975-5380-b504-d8717d1a0743",
00:24:30.110        "is_configured": true,
00:24:30.110        "data_offset": 2048,
00:24:30.110        "data_size": 63488
00:24:30.110      },
00:24:30.110      {
00:24:30.110        "name": "BaseBdev2",
00:24:30.110        "uuid": "a57adbf0-bd6b-5305-9d5c-1455f0a4789a",
00:24:30.110        "is_configured": true,
00:24:30.110        "data_offset": 2048,
00:24:30.110        "data_size": 63488
00:24:30.110      },
00:24:30.110      {
00:24:30.110        "name": "BaseBdev3",
00:24:30.110        "uuid": "3c1eb8e7-ce4e-5c31-bd67-9af9f4b607e6",
00:24:30.110        "is_configured": true,
00:24:30.110        "data_offset": 2048,
00:24:30.110        "data_size": 63488
00:24:30.110      },
00:24:30.110      {
00:24:30.110        "name": "BaseBdev4",
00:24:30.110        "uuid": "cb293b39-f1fc-545d-b8ce-f76ca314dd22",
00:24:30.110        "is_configured": true,
00:24:30.110        "data_offset": 2048,
00:24:30.110        "data_size": 63488
00:24:30.110      }
00:24:30.110    ]
00:24:30.110  }'
00:24:30.110    17:08:22	-- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:24:30.110   17:08:22	-- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:24:30.110    17:08:22	-- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:24:30.110   17:08:22	-- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]]
00:24:30.110   17:08:22	-- bdev/bdev_raid.sh@617 -- # '[' true = true ']'
00:24:30.110   17:08:22	-- bdev/bdev_raid.sh@617 -- # '[' = false ']'
00:24:30.110  /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 617: [: =: unary operator expected
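[annotation] The "[: =: unary operator expected" error is a real bug the trace caught at bdev_raid.sh line 617: the left operand of the '[' test expanded to nothing, so the shell saw '[' = false ']'. Since the failed test returns nonzero, the script happens to fall through to the path it wanted anyway, but the robust forms quote the expansion or use the [[ ]] keyword, which does not word-split (the variable name below is hypothetical; the trace does not show which one was empty):

    # Hypothetical variable; the failing test at line 617 compared it to "false".
    some_flag=""
    [ "$some_flag" = false ]   # quoted: empty string != "false", no error
    [[ $some_flag = false ]]   # [[ ]] keyword: no word splitting, no error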
00:24:30.110   17:08:22	-- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=4
00:24:30.110   17:08:22	-- bdev/bdev_raid.sh@644 -- # '[' raid5f = raid1 ']'
00:24:30.110   17:08:22	-- bdev/bdev_raid.sh@657 -- # local timeout=703
00:24:30.110   17:08:22	-- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout ))
00:24:30.110   17:08:22	-- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:24:30.110   17:08:22	-- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:24:30.110   17:08:22	-- bdev/bdev_raid.sh@184 -- # local process_type=rebuild
00:24:30.110   17:08:22	-- bdev/bdev_raid.sh@185 -- # local target=spare
00:24:30.110   17:08:22	-- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:24:30.110    17:08:22	-- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:24:30.110    17:08:22	-- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:24:30.369   17:08:23	-- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:24:30.369    "name": "raid_bdev1",
00:24:30.369    "uuid": "dd2872a9-6d17-4991-9ff7-64e80303d618",
00:24:30.369    "strip_size_kb": 64,
00:24:30.369    "state": "online",
00:24:30.369    "raid_level": "raid5f",
00:24:30.369    "superblock": true,
00:24:30.369    "num_base_bdevs": 4,
00:24:30.369    "num_base_bdevs_discovered": 4,
00:24:30.369    "num_base_bdevs_operational": 4,
00:24:30.369    "process": {
00:24:30.369      "type": "rebuild",
00:24:30.369      "target": "spare",
00:24:30.369      "progress": {
00:24:30.369        "blocks": 28800,
00:24:30.369        "percent": 15
00:24:30.369      }
00:24:30.369    },
00:24:30.369    "base_bdevs_list": [
00:24:30.369      {
00:24:30.369        "name": "spare",
00:24:30.369        "uuid": "df888a7f-b975-5380-b504-d8717d1a0743",
00:24:30.369        "is_configured": true,
00:24:30.369        "data_offset": 2048,
00:24:30.369        "data_size": 63488
00:24:30.369      },
00:24:30.369      {
00:24:30.369        "name": "BaseBdev2",
00:24:30.369        "uuid": "a57adbf0-bd6b-5305-9d5c-1455f0a4789a",
00:24:30.369        "is_configured": true,
00:24:30.369        "data_offset": 2048,
00:24:30.369        "data_size": 63488
00:24:30.369      },
00:24:30.369      {
00:24:30.369        "name": "BaseBdev3",
00:24:30.369        "uuid": "3c1eb8e7-ce4e-5c31-bd67-9af9f4b607e6",
00:24:30.369        "is_configured": true,
00:24:30.369        "data_offset": 2048,
00:24:30.369        "data_size": 63488
00:24:30.369      },
00:24:30.369      {
00:24:30.369        "name": "BaseBdev4",
00:24:30.369        "uuid": "cb293b39-f1fc-545d-b8ce-f76ca314dd22",
00:24:30.369        "is_configured": true,
00:24:30.369        "data_offset": 2048,
00:24:30.369        "data_size": 63488
00:24:30.369      }
00:24:30.369    ]
00:24:30.369  }'
00:24:30.369    17:08:23	-- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:24:30.628   17:08:23	-- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:24:30.628    17:08:23	-- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:24:30.628   17:08:23	-- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]]
00:24:30.628   17:08:23	-- bdev/bdev_raid.sh@662 -- # sleep 1
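Lines 657-662 of bdev_raid.sh, as traced above, form a bounded wait loop around that verify call. Reconstructed shape (a sketch; bash's built-in $SECONDS runtime counter and the deadline of 703 are taken from the trace):

    timeout=703                            # traced as: local timeout=703
    while (( SECONDS < timeout )); do
        # stop polling as soon as the rebuild/spare process is no longer reported
        verify_raid_bdev_process raid_bdev1 rebuild spare || break
        sleep 1
    done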
00:24:31.562   17:08:24	-- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout ))
00:24:31.562   17:08:24	-- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:24:31.562   17:08:24	-- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:24:31.562   17:08:24	-- bdev/bdev_raid.sh@184 -- # local process_type=rebuild
00:24:31.562   17:08:24	-- bdev/bdev_raid.sh@185 -- # local target=spare
00:24:31.562   17:08:24	-- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:24:31.562    17:08:24	-- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:24:31.562    17:08:24	-- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:24:31.820   17:08:24	-- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:24:31.820    "name": "raid_bdev1",
00:24:31.820    "uuid": "dd2872a9-6d17-4991-9ff7-64e80303d618",
00:24:31.820    "strip_size_kb": 64,
00:24:31.820    "state": "online",
00:24:31.820    "raid_level": "raid5f",
00:24:31.820    "superblock": true,
00:24:31.820    "num_base_bdevs": 4,
00:24:31.820    "num_base_bdevs_discovered": 4,
00:24:31.820    "num_base_bdevs_operational": 4,
00:24:31.820    "process": {
00:24:31.820      "type": "rebuild",
00:24:31.820      "target": "spare",
00:24:31.820      "progress": {
00:24:31.820        "blocks": 53760,
00:24:31.820        "percent": 28
00:24:31.820      }
00:24:31.820    },
00:24:31.820    "base_bdevs_list": [
00:24:31.820      {
00:24:31.820        "name": "spare",
00:24:31.820        "uuid": "df888a7f-b975-5380-b504-d8717d1a0743",
00:24:31.820        "is_configured": true,
00:24:31.820        "data_offset": 2048,
00:24:31.820        "data_size": 63488
00:24:31.820      },
00:24:31.820      {
00:24:31.820        "name": "BaseBdev2",
00:24:31.820        "uuid": "a57adbf0-bd6b-5305-9d5c-1455f0a4789a",
00:24:31.820        "is_configured": true,
00:24:31.820        "data_offset": 2048,
00:24:31.820        "data_size": 63488
00:24:31.820      },
00:24:31.820      {
00:24:31.820        "name": "BaseBdev3",
00:24:31.820        "uuid": "3c1eb8e7-ce4e-5c31-bd67-9af9f4b607e6",
00:24:31.820        "is_configured": true,
00:24:31.820        "data_offset": 2048,
00:24:31.820        "data_size": 63488
00:24:31.820      },
00:24:31.820      {
00:24:31.820        "name": "BaseBdev4",
00:24:31.820        "uuid": "cb293b39-f1fc-545d-b8ce-f76ca314dd22",
00:24:31.820        "is_configured": true,
00:24:31.820        "data_offset": 2048,
00:24:31.820        "data_size": 63488
00:24:31.820      }
00:24:31.820    ]
00:24:31.820  }'
00:24:31.820    17:08:24	-- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:24:31.820   17:08:24	-- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:24:31.820    17:08:24	-- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:24:31.820   17:08:24	-- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]]
00:24:31.820   17:08:24	-- bdev/bdev_raid.sh@662 -- # sleep 1
00:24:33.200   17:08:25	-- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout ))
00:24:33.200   17:08:25	-- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:24:33.200   17:08:25	-- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:24:33.200   17:08:25	-- bdev/bdev_raid.sh@184 -- # local process_type=rebuild
00:24:33.200   17:08:25	-- bdev/bdev_raid.sh@185 -- # local target=spare
00:24:33.200   17:08:25	-- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:24:33.200    17:08:25	-- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:24:33.200    17:08:25	-- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:24:33.200   17:08:25	-- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:24:33.200    "name": "raid_bdev1",
00:24:33.200    "uuid": "dd2872a9-6d17-4991-9ff7-64e80303d618",
00:24:33.200    "strip_size_kb": 64,
00:24:33.200    "state": "online",
00:24:33.200    "raid_level": "raid5f",
00:24:33.200    "superblock": true,
00:24:33.200    "num_base_bdevs": 4,
00:24:33.200    "num_base_bdevs_discovered": 4,
00:24:33.200    "num_base_bdevs_operational": 4,
00:24:33.200    "process": {
00:24:33.200      "type": "rebuild",
00:24:33.200      "target": "spare",
00:24:33.200      "progress": {
00:24:33.200        "blocks": 82560,
00:24:33.200        "percent": 43
00:24:33.200      }
00:24:33.200    },
00:24:33.200    "base_bdevs_list": [
00:24:33.200      {
00:24:33.200        "name": "spare",
00:24:33.200        "uuid": "df888a7f-b975-5380-b504-d8717d1a0743",
00:24:33.200        "is_configured": true,
00:24:33.200        "data_offset": 2048,
00:24:33.200        "data_size": 63488
00:24:33.200      },
00:24:33.200      {
00:24:33.200        "name": "BaseBdev2",
00:24:33.200        "uuid": "a57adbf0-bd6b-5305-9d5c-1455f0a4789a",
00:24:33.200        "is_configured": true,
00:24:33.200        "data_offset": 2048,
00:24:33.200        "data_size": 63488
00:24:33.200      },
00:24:33.200      {
00:24:33.200        "name": "BaseBdev3",
00:24:33.200        "uuid": "3c1eb8e7-ce4e-5c31-bd67-9af9f4b607e6",
00:24:33.200        "is_configured": true,
00:24:33.200        "data_offset": 2048,
00:24:33.200        "data_size": 63488
00:24:33.200      },
00:24:33.200      {
00:24:33.200        "name": "BaseBdev4",
00:24:33.200        "uuid": "cb293b39-f1fc-545d-b8ce-f76ca314dd22",
00:24:33.200        "is_configured": true,
00:24:33.200        "data_offset": 2048,
00:24:33.200        "data_size": 63488
00:24:33.200      }
00:24:33.200    ]
00:24:33.200  }'
00:24:33.200    17:08:25	-- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:24:33.200   17:08:26	-- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:24:33.200    17:08:26	-- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:24:33.458   17:08:26	-- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]]
00:24:33.458   17:08:26	-- bdev/bdev_raid.sh@662 -- # sleep 1
00:24:34.391   17:08:27	-- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout ))
00:24:34.391   17:08:27	-- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:24:34.391   17:08:27	-- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:24:34.391   17:08:27	-- bdev/bdev_raid.sh@184 -- # local process_type=rebuild
00:24:34.391   17:08:27	-- bdev/bdev_raid.sh@185 -- # local target=spare
00:24:34.391   17:08:27	-- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:24:34.391    17:08:27	-- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:24:34.391    17:08:27	-- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:24:34.649   17:08:27	-- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:24:34.649    "name": "raid_bdev1",
00:24:34.649    "uuid": "dd2872a9-6d17-4991-9ff7-64e80303d618",
00:24:34.649    "strip_size_kb": 64,
00:24:34.649    "state": "online",
00:24:34.649    "raid_level": "raid5f",
00:24:34.649    "superblock": true,
00:24:34.649    "num_base_bdevs": 4,
00:24:34.649    "num_base_bdevs_discovered": 4,
00:24:34.649    "num_base_bdevs_operational": 4,
00:24:34.649    "process": {
00:24:34.649      "type": "rebuild",
00:24:34.649      "target": "spare",
00:24:34.649      "progress": {
00:24:34.649        "blocks": 107520,
00:24:34.649        "percent": 56
00:24:34.649      }
00:24:34.649    },
00:24:34.649    "base_bdevs_list": [
00:24:34.649      {
00:24:34.649        "name": "spare",
00:24:34.649        "uuid": "df888a7f-b975-5380-b504-d8717d1a0743",
00:24:34.649        "is_configured": true,
00:24:34.649        "data_offset": 2048,
00:24:34.649        "data_size": 63488
00:24:34.649      },
00:24:34.649      {
00:24:34.649        "name": "BaseBdev2",
00:24:34.649        "uuid": "a57adbf0-bd6b-5305-9d5c-1455f0a4789a",
00:24:34.649        "is_configured": true,
00:24:34.649        "data_offset": 2048,
00:24:34.649        "data_size": 63488
00:24:34.649      },
00:24:34.649      {
00:24:34.649        "name": "BaseBdev3",
00:24:34.649        "uuid": "3c1eb8e7-ce4e-5c31-bd67-9af9f4b607e6",
00:24:34.649        "is_configured": true,
00:24:34.649        "data_offset": 2048,
00:24:34.649        "data_size": 63488
00:24:34.649      },
00:24:34.649      {
00:24:34.649        "name": "BaseBdev4",
00:24:34.649        "uuid": "cb293b39-f1fc-545d-b8ce-f76ca314dd22",
00:24:34.649        "is_configured": true,
00:24:34.649        "data_offset": 2048,
00:24:34.649        "data_size": 63488
00:24:34.649      }
00:24:34.649    ]
00:24:34.649  }'
00:24:34.649    17:08:27	-- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:24:34.649   17:08:27	-- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:24:34.649    17:08:27	-- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:24:34.649   17:08:27	-- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]]
00:24:34.649   17:08:27	-- bdev/bdev_raid.sh@662 -- # sleep 1
00:24:36.023   17:08:28	-- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout ))
00:24:36.023   17:08:28	-- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:24:36.023   17:08:28	-- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:24:36.023   17:08:28	-- bdev/bdev_raid.sh@184 -- # local process_type=rebuild
00:24:36.023   17:08:28	-- bdev/bdev_raid.sh@185 -- # local target=spare
00:24:36.023   17:08:28	-- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:24:36.023    17:08:28	-- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:24:36.023    17:08:28	-- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:24:36.023   17:08:28	-- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:24:36.023    "name": "raid_bdev1",
00:24:36.023    "uuid": "dd2872a9-6d17-4991-9ff7-64e80303d618",
00:24:36.023    "strip_size_kb": 64,
00:24:36.023    "state": "online",
00:24:36.023    "raid_level": "raid5f",
00:24:36.023    "superblock": true,
00:24:36.023    "num_base_bdevs": 4,
00:24:36.023    "num_base_bdevs_discovered": 4,
00:24:36.023    "num_base_bdevs_operational": 4,
00:24:36.023    "process": {
00:24:36.023      "type": "rebuild",
00:24:36.023      "target": "spare",
00:24:36.023      "progress": {
00:24:36.023        "blocks": 134400,
00:24:36.023        "percent": 70
00:24:36.023      }
00:24:36.023    },
00:24:36.023    "base_bdevs_list": [
00:24:36.023      {
00:24:36.023        "name": "spare",
00:24:36.023        "uuid": "df888a7f-b975-5380-b504-d8717d1a0743",
00:24:36.023        "is_configured": true,
00:24:36.023        "data_offset": 2048,
00:24:36.023        "data_size": 63488
00:24:36.023      },
00:24:36.023      {
00:24:36.023        "name": "BaseBdev2",
00:24:36.023        "uuid": "a57adbf0-bd6b-5305-9d5c-1455f0a4789a",
00:24:36.023        "is_configured": true,
00:24:36.023        "data_offset": 2048,
00:24:36.023        "data_size": 63488
00:24:36.023      },
00:24:36.023      {
00:24:36.023        "name": "BaseBdev3",
00:24:36.023        "uuid": "3c1eb8e7-ce4e-5c31-bd67-9af9f4b607e6",
00:24:36.023        "is_configured": true,
00:24:36.023        "data_offset": 2048,
00:24:36.023        "data_size": 63488
00:24:36.023      },
00:24:36.023      {
00:24:36.023        "name": "BaseBdev4",
00:24:36.023        "uuid": "cb293b39-f1fc-545d-b8ce-f76ca314dd22",
00:24:36.023        "is_configured": true,
00:24:36.023        "data_offset": 2048,
00:24:36.023        "data_size": 63488
00:24:36.023      }
00:24:36.023    ]
00:24:36.023  }'
00:24:36.023    17:08:28	-- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:24:36.023   17:08:28	-- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:24:36.023    17:08:28	-- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:24:36.281   17:08:28	-- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]]
00:24:36.281   17:08:28	-- bdev/bdev_raid.sh@662 -- # sleep 1
00:24:37.212   17:08:29	-- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout ))
00:24:37.212   17:08:29	-- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:24:37.212   17:08:29	-- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:24:37.213   17:08:29	-- bdev/bdev_raid.sh@184 -- # local process_type=rebuild
00:24:37.213   17:08:29	-- bdev/bdev_raid.sh@185 -- # local target=spare
00:24:37.213   17:08:29	-- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:24:37.213    17:08:29	-- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:24:37.213    17:08:29	-- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:24:37.471   17:08:30	-- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:24:37.471    "name": "raid_bdev1",
00:24:37.471    "uuid": "dd2872a9-6d17-4991-9ff7-64e80303d618",
00:24:37.471    "strip_size_kb": 64,
00:24:37.471    "state": "online",
00:24:37.471    "raid_level": "raid5f",
00:24:37.471    "superblock": true,
00:24:37.471    "num_base_bdevs": 4,
00:24:37.471    "num_base_bdevs_discovered": 4,
00:24:37.471    "num_base_bdevs_operational": 4,
00:24:37.471    "process": {
00:24:37.471      "type": "rebuild",
00:24:37.471      "target": "spare",
00:24:37.471      "progress": {
00:24:37.471        "blocks": 161280,
00:24:37.471        "percent": 84
00:24:37.471      }
00:24:37.471    },
00:24:37.471    "base_bdevs_list": [
00:24:37.471      {
00:24:37.471        "name": "spare",
00:24:37.471        "uuid": "df888a7f-b975-5380-b504-d8717d1a0743",
00:24:37.471        "is_configured": true,
00:24:37.471        "data_offset": 2048,
00:24:37.471        "data_size": 63488
00:24:37.471      },
00:24:37.471      {
00:24:37.471        "name": "BaseBdev2",
00:24:37.471        "uuid": "a57adbf0-bd6b-5305-9d5c-1455f0a4789a",
00:24:37.471        "is_configured": true,
00:24:37.471        "data_offset": 2048,
00:24:37.471        "data_size": 63488
00:24:37.471      },
00:24:37.471      {
00:24:37.471        "name": "BaseBdev3",
00:24:37.471        "uuid": "3c1eb8e7-ce4e-5c31-bd67-9af9f4b607e6",
00:24:37.471        "is_configured": true,
00:24:37.471        "data_offset": 2048,
00:24:37.471        "data_size": 63488
00:24:37.471      },
00:24:37.471      {
00:24:37.471        "name": "BaseBdev4",
00:24:37.471        "uuid": "cb293b39-f1fc-545d-b8ce-f76ca314dd22",
00:24:37.471        "is_configured": true,
00:24:37.471        "data_offset": 2048,
00:24:37.471        "data_size": 63488
00:24:37.471      }
00:24:37.471    ]
00:24:37.471  }'
00:24:37.471    17:08:30	-- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:24:37.471   17:08:30	-- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:24:37.471    17:08:30	-- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:24:37.471   17:08:30	-- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]]
00:24:37.471   17:08:30	-- bdev/bdev_raid.sh@662 -- # sleep 1
00:24:38.847   17:08:31	-- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout ))
00:24:38.847   17:08:31	-- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:24:38.847   17:08:31	-- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:24:38.847   17:08:31	-- bdev/bdev_raid.sh@184 -- # local process_type=rebuild
00:24:38.847   17:08:31	-- bdev/bdev_raid.sh@185 -- # local target=spare
00:24:38.847   17:08:31	-- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:24:38.847    17:08:31	-- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:24:38.847    17:08:31	-- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:24:39.106   17:08:31	-- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:24:39.106    "name": "raid_bdev1",
00:24:39.106    "uuid": "dd2872a9-6d17-4991-9ff7-64e80303d618",
00:24:39.106    "strip_size_kb": 64,
00:24:39.106    "state": "online",
00:24:39.106    "raid_level": "raid5f",
00:24:39.106    "superblock": true,
00:24:39.106    "num_base_bdevs": 4,
00:24:39.106    "num_base_bdevs_discovered": 4,
00:24:39.106    "num_base_bdevs_operational": 4,
00:24:39.106    "process": {
00:24:39.106      "type": "rebuild",
00:24:39.106      "target": "spare",
00:24:39.106      "progress": {
00:24:39.106        "blocks": 190080,
00:24:39.106        "percent": 99
00:24:39.106      }
00:24:39.106    },
00:24:39.106    "base_bdevs_list": [
00:24:39.106      {
00:24:39.106        "name": "spare",
00:24:39.106        "uuid": "df888a7f-b975-5380-b504-d8717d1a0743",
00:24:39.106        "is_configured": true,
00:24:39.106        "data_offset": 2048,
00:24:39.106        "data_size": 63488
00:24:39.106      },
00:24:39.106      {
00:24:39.106        "name": "BaseBdev2",
00:24:39.106        "uuid": "a57adbf0-bd6b-5305-9d5c-1455f0a4789a",
00:24:39.106        "is_configured": true,
00:24:39.106        "data_offset": 2048,
00:24:39.106        "data_size": 63488
00:24:39.106      },
00:24:39.106      {
00:24:39.106        "name": "BaseBdev3",
00:24:39.106        "uuid": "3c1eb8e7-ce4e-5c31-bd67-9af9f4b607e6",
00:24:39.106        "is_configured": true,
00:24:39.106        "data_offset": 2048,
00:24:39.106        "data_size": 63488
00:24:39.106      },
00:24:39.106      {
00:24:39.106        "name": "BaseBdev4",
00:24:39.106        "uuid": "cb293b39-f1fc-545d-b8ce-f76ca314dd22",
00:24:39.106        "is_configured": true,
00:24:39.106        "data_offset": 2048,
00:24:39.106        "data_size": 63488
00:24:39.106      }
00:24:39.106    ]
00:24:39.106  }'
00:24:39.106    17:08:31	-- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:24:39.106  [2024-11-19 17:08:31.705973] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1
00:24:39.106  [2024-11-19 17:08:31.706075] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1
00:24:39.106  [2024-11-19 17:08:31.706240] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:24:39.106   17:08:31	-- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:24:39.106    17:08:31	-- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:24:39.106   17:08:31	-- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]]
00:24:39.106   17:08:31	-- bdev/bdev_raid.sh@662 -- # sleep 1
00:24:40.043   17:08:32	-- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout ))
00:24:40.043   17:08:32	-- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:24:40.043   17:08:32	-- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:24:40.043   17:08:32	-- bdev/bdev_raid.sh@184 -- # local process_type=rebuild
00:24:40.043   17:08:32	-- bdev/bdev_raid.sh@185 -- # local target=spare
00:24:40.043   17:08:32	-- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:24:40.043    17:08:32	-- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:24:40.043    17:08:32	-- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:24:40.301   17:08:33	-- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:24:40.301    "name": "raid_bdev1",
00:24:40.301    "uuid": "dd2872a9-6d17-4991-9ff7-64e80303d618",
00:24:40.301    "strip_size_kb": 64,
00:24:40.301    "state": "online",
00:24:40.301    "raid_level": "raid5f",
00:24:40.301    "superblock": true,
00:24:40.301    "num_base_bdevs": 4,
00:24:40.301    "num_base_bdevs_discovered": 4,
00:24:40.301    "num_base_bdevs_operational": 4,
00:24:40.301    "base_bdevs_list": [
00:24:40.301      {
00:24:40.301        "name": "spare",
00:24:40.301        "uuid": "df888a7f-b975-5380-b504-d8717d1a0743",
00:24:40.301        "is_configured": true,
00:24:40.301        "data_offset": 2048,
00:24:40.301        "data_size": 63488
00:24:40.301      },
00:24:40.301      {
00:24:40.301        "name": "BaseBdev2",
00:24:40.301        "uuid": "a57adbf0-bd6b-5305-9d5c-1455f0a4789a",
00:24:40.301        "is_configured": true,
00:24:40.301        "data_offset": 2048,
00:24:40.301        "data_size": 63488
00:24:40.301      },
00:24:40.301      {
00:24:40.301        "name": "BaseBdev3",
00:24:40.301        "uuid": "3c1eb8e7-ce4e-5c31-bd67-9af9f4b607e6",
00:24:40.301        "is_configured": true,
00:24:40.301        "data_offset": 2048,
00:24:40.301        "data_size": 63488
00:24:40.301      },
00:24:40.301      {
00:24:40.301        "name": "BaseBdev4",
00:24:40.301        "uuid": "cb293b39-f1fc-545d-b8ce-f76ca314dd22",
00:24:40.301        "is_configured": true,
00:24:40.302        "data_offset": 2048,
00:24:40.302        "data_size": 63488
00:24:40.302      }
00:24:40.302    ]
00:24:40.302  }'
00:24:40.302    17:08:33	-- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:24:40.302   17:08:33	-- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]]
00:24:40.302    17:08:33	-- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:24:40.560   17:08:33	-- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]]
00:24:40.560   17:08:33	-- bdev/bdev_raid.sh@660 -- # break
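Once "Finished rebuild on raid bdev raid_bdev1" is logged (17:08:31 above), the ".process" object drops out of the RPC output, so both jq filters fall back to "none", the rebuild/spare comparisons fail, and the break at line 660 leaves the wait loop. The terminal state can be tested directly with the same filter (sketch):

    [[ $(jq -r '.process.type // "none"' <<< "$raid_bdev_info") == none ]] && echo "rebuild complete"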
00:24:40.560   17:08:33	-- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none
00:24:40.560   17:08:33	-- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:24:40.560   17:08:33	-- bdev/bdev_raid.sh@184 -- # local process_type=none
00:24:40.560   17:08:33	-- bdev/bdev_raid.sh@185 -- # local target=none
00:24:40.560   17:08:33	-- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:24:40.560    17:08:33	-- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:24:40.560    17:08:33	-- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:24:40.819   17:08:33	-- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:24:40.819    "name": "raid_bdev1",
00:24:40.819    "uuid": "dd2872a9-6d17-4991-9ff7-64e80303d618",
00:24:40.819    "strip_size_kb": 64,
00:24:40.819    "state": "online",
00:24:40.819    "raid_level": "raid5f",
00:24:40.819    "superblock": true,
00:24:40.819    "num_base_bdevs": 4,
00:24:40.819    "num_base_bdevs_discovered": 4,
00:24:40.819    "num_base_bdevs_operational": 4,
00:24:40.819    "base_bdevs_list": [
00:24:40.819      {
00:24:40.819        "name": "spare",
00:24:40.819        "uuid": "df888a7f-b975-5380-b504-d8717d1a0743",
00:24:40.819        "is_configured": true,
00:24:40.819        "data_offset": 2048,
00:24:40.819        "data_size": 63488
00:24:40.819      },
00:24:40.819      {
00:24:40.819        "name": "BaseBdev2",
00:24:40.819        "uuid": "a57adbf0-bd6b-5305-9d5c-1455f0a4789a",
00:24:40.819        "is_configured": true,
00:24:40.819        "data_offset": 2048,
00:24:40.819        "data_size": 63488
00:24:40.819      },
00:24:40.819      {
00:24:40.819        "name": "BaseBdev3",
00:24:40.819        "uuid": "3c1eb8e7-ce4e-5c31-bd67-9af9f4b607e6",
00:24:40.819        "is_configured": true,
00:24:40.819        "data_offset": 2048,
00:24:40.819        "data_size": 63488
00:24:40.819      },
00:24:40.819      {
00:24:40.819        "name": "BaseBdev4",
00:24:40.819        "uuid": "cb293b39-f1fc-545d-b8ce-f76ca314dd22",
00:24:40.819        "is_configured": true,
00:24:40.819        "data_offset": 2048,
00:24:40.819        "data_size": 63488
00:24:40.819      }
00:24:40.819    ]
00:24:40.819  }'
00:24:40.819    17:08:33	-- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:24:40.819   17:08:33	-- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]]
00:24:40.819    17:08:33	-- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:24:40.819   17:08:33	-- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]]
00:24:40.819   17:08:33	-- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4
00:24:40.819   17:08:33	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:24:40.819   17:08:33	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:24:40.819   17:08:33	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f
00:24:40.819   17:08:33	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:24:40.819   17:08:33	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:24:40.819   17:08:33	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:24:40.819   17:08:33	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:24:40.819   17:08:33	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:24:40.819   17:08:33	-- bdev/bdev_raid.sh@125 -- # local tmp
00:24:40.819    17:08:33	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:24:40.819    17:08:33	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:24:41.078   17:08:33	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:24:41.078    "name": "raid_bdev1",
00:24:41.078    "uuid": "dd2872a9-6d17-4991-9ff7-64e80303d618",
00:24:41.078    "strip_size_kb": 64,
00:24:41.078    "state": "online",
00:24:41.078    "raid_level": "raid5f",
00:24:41.078    "superblock": true,
00:24:41.078    "num_base_bdevs": 4,
00:24:41.078    "num_base_bdevs_discovered": 4,
00:24:41.078    "num_base_bdevs_operational": 4,
00:24:41.078    "base_bdevs_list": [
00:24:41.078      {
00:24:41.078        "name": "spare",
00:24:41.078        "uuid": "df888a7f-b975-5380-b504-d8717d1a0743",
00:24:41.078        "is_configured": true,
00:24:41.078        "data_offset": 2048,
00:24:41.078        "data_size": 63488
00:24:41.078      },
00:24:41.078      {
00:24:41.078        "name": "BaseBdev2",
00:24:41.078        "uuid": "a57adbf0-bd6b-5305-9d5c-1455f0a4789a",
00:24:41.078        "is_configured": true,
00:24:41.078        "data_offset": 2048,
00:24:41.078        "data_size": 63488
00:24:41.078      },
00:24:41.078      {
00:24:41.078        "name": "BaseBdev3",
00:24:41.078        "uuid": "3c1eb8e7-ce4e-5c31-bd67-9af9f4b607e6",
00:24:41.078        "is_configured": true,
00:24:41.078        "data_offset": 2048,
00:24:41.078        "data_size": 63488
00:24:41.078      },
00:24:41.078      {
00:24:41.078        "name": "BaseBdev4",
00:24:41.078        "uuid": "cb293b39-f1fc-545d-b8ce-f76ca314dd22",
00:24:41.078        "is_configured": true,
00:24:41.078        "data_offset": 2048,
00:24:41.078        "data_size": 63488
00:24:41.078      }
00:24:41.078    ]
00:24:41.078  }'
00:24:41.078   17:08:33	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:24:41.078   17:08:33	-- common/autotest_common.sh@10 -- # set +x
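verify_raid_bdev_state runs its field checks with xtrace disabled, so only the locals initialized above appear in this trace. Given those locals, the hidden comparisons are roughly the following (a sketch; the exact jq expressions in bdev_raid.sh are not shown in this log):

    [[ $(jq -r '.state'      <<< "$raid_bdev_info") == "$expected_state" ]]  # online
    [[ $(jq -r '.raid_level' <<< "$raid_bdev_info") == "$raid_level" ]]      # raid5f
    (( $(jq -r '.strip_size_kb' <<< "$raid_bdev_info") == strip_size ))      # 64
    (( $(jq -r '.num_base_bdevs_operational' <<< "$raid_bdev_info") == num_base_bdevs_operational ))  # 4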
00:24:41.644   17:08:34	-- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1
00:24:41.902  [2024-11-19 17:08:34.704603] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:24:41.902  [2024-11-19 17:08:34.704644] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:24:41.902  [2024-11-19 17:08:34.704753] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:24:41.902  [2024-11-19 17:08:34.704872] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:24:41.902  [2024-11-19 17:08:34.704885] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009680 name raid_bdev1, state offline
00:24:41.902    17:08:34	-- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:24:41.902    17:08:34	-- bdev/bdev_raid.sh@671 -- # jq length
00:24:42.467   17:08:35	-- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]]
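Deletion is verified by counting what the RPC returns afterwards: jq's length over the now-empty top-level array must be 0. Condensed sketch of lines 670-671 as traced:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1
    remaining=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        bdev_raid_get_bdevs all | jq length)
    [[ $remaining == 0 ]]   # no raid bdevs left after the delete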
00:24:42.467   17:08:35	-- bdev/bdev_raid.sh@673 -- # '[' false = true ']'
00:24:42.467   17:08:35	-- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1'
00:24:42.467   17:08:35	-- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock
00:24:42.467   17:08:35	-- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare')
00:24:42.467   17:08:35	-- bdev/nbd_common.sh@10 -- # local bdev_list
00:24:42.467   17:08:35	-- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:24:42.467   17:08:35	-- bdev/nbd_common.sh@11 -- # local nbd_list
00:24:42.467   17:08:35	-- bdev/nbd_common.sh@12 -- # local i
00:24:42.467   17:08:35	-- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:24:42.467   17:08:35	-- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:24:42.467   17:08:35	-- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0
00:24:42.749  /dev/nbd0
00:24:42.749    17:08:35	-- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:24:42.749   17:08:35	-- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:24:42.749   17:08:35	-- common/autotest_common.sh@866 -- # local nbd_name=nbd0
00:24:42.749   17:08:35	-- common/autotest_common.sh@867 -- # local i
00:24:42.749   17:08:35	-- common/autotest_common.sh@869 -- # (( i = 1 ))
00:24:42.749   17:08:35	-- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:24:42.749   17:08:35	-- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions
00:24:42.749   17:08:35	-- common/autotest_common.sh@871 -- # break
00:24:42.749   17:08:35	-- common/autotest_common.sh@882 -- # (( i = 1 ))
00:24:42.749   17:08:35	-- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:24:42.749   17:08:35	-- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:24:42.749  1+0 records in
00:24:42.749  1+0 records out
00:24:42.749  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000392073 s, 10.4 MB/s
00:24:42.749    17:08:35	-- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:24:42.749   17:08:35	-- common/autotest_common.sh@884 -- # size=4096
00:24:42.749   17:08:35	-- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:24:42.749   17:08:35	-- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:24:42.749   17:08:35	-- common/autotest_common.sh@887 -- # return 0
00:24:42.749   17:08:35	-- bdev/nbd_common.sh@14 -- # (( i++ ))
00:24:42.749   17:08:35	-- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:24:42.749   17:08:35	-- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1
00:24:43.030  /dev/nbd1
00:24:43.030    17:08:35	-- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:24:43.030   17:08:35	-- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:24:43.030   17:08:35	-- common/autotest_common.sh@866 -- # local nbd_name=nbd1
00:24:43.030   17:08:35	-- common/autotest_common.sh@867 -- # local i
00:24:43.030   17:08:35	-- common/autotest_common.sh@869 -- # (( i = 1 ))
00:24:43.030   17:08:35	-- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:24:43.030   17:08:35	-- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions
00:24:43.030   17:08:35	-- common/autotest_common.sh@871 -- # break
00:24:43.030   17:08:35	-- common/autotest_common.sh@882 -- # (( i = 1 ))
00:24:43.030   17:08:35	-- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:24:43.030   17:08:35	-- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:24:43.030  1+0 records in
00:24:43.030  1+0 records out
00:24:43.030  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000615185 s, 6.7 MB/s
00:24:43.030    17:08:35	-- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:24:43.030   17:08:35	-- common/autotest_common.sh@884 -- # size=4096
00:24:43.030   17:08:35	-- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:24:43.030   17:08:35	-- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:24:43.030   17:08:35	-- common/autotest_common.sh@887 -- # return 0
00:24:43.030   17:08:35	-- bdev/nbd_common.sh@14 -- # (( i++ ))
00:24:43.030   17:08:35	-- bdev/nbd_common.sh@14 -- # (( i < 2 ))
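Both devices went through the same readiness probe: waitfornbd polls /proc/partitions until the kernel exposes the device, then proves it answers I/O with a single 4 KiB O_DIRECT read. Reconstructed shape from the traced lines (a sketch; the retry delay is assumed, the scratch file is the nbdtest path seen above):

    waitfornbd() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1   # assumed back-off; the actual delay is not visible in this trace
        done
        # one direct-I/O block read; a non-empty copy means the device is live
        dd if=/dev/$nbd_name of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest \
            bs=4096 count=1 iflag=direct
        local size=$(stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest)
        rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
        [ "$size" != 0 ]    # traced as: '[' 4096 '!=' 0 ']'
    }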
00:24:43.030   17:08:35	-- bdev/bdev_raid.sh@688 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1
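The cmp above skips the first 1048576 bytes of both devices: each member reports data_offset 2048 in 512-byte blocks (see the "blockcnt 190464, blocklen 512" DEBUG line later in this trace), and that region holds per-device superblock metadata that legitimately differs between BaseBdev1 and the rebuilt spare. Only the user-data area must match byte for byte:

    offset_bytes=$(( 2048 * 512 ))               # data_offset in blocks x block length = 1 MiB
    cmp -i "$offset_bytes" /dev/nbd0 /dev/nbd1   # compare everything past the metadata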
00:24:43.030   17:08:35	-- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1'
00:24:43.030   17:08:35	-- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock
00:24:43.030   17:08:35	-- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:24:43.030   17:08:35	-- bdev/nbd_common.sh@50 -- # local nbd_list
00:24:43.030   17:08:35	-- bdev/nbd_common.sh@51 -- # local i
00:24:43.030   17:08:35	-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:24:43.030   17:08:35	-- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0
00:24:43.289    17:08:36	-- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:24:43.548   17:08:36	-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:24:43.548   17:08:36	-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:24:43.548   17:08:36	-- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:24:43.548   17:08:36	-- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:24:43.548   17:08:36	-- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:24:43.548   17:08:36	-- bdev/nbd_common.sh@41 -- # break
00:24:43.548   17:08:36	-- bdev/nbd_common.sh@45 -- # return 0
00:24:43.548   17:08:36	-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:24:43.548   17:08:36	-- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1
00:24:43.807    17:08:36	-- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:24:43.807   17:08:36	-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:24:43.807   17:08:36	-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:24:43.807   17:08:36	-- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:24:43.807   17:08:36	-- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:24:43.807   17:08:36	-- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:24:43.807   17:08:36	-- bdev/nbd_common.sh@41 -- # break
00:24:43.807   17:08:36	-- bdev/nbd_common.sh@45 -- # return 0
00:24:43.807   17:08:36	-- bdev/bdev_raid.sh@692 -- # '[' true = true ']'
00:24:43.807   17:08:36	-- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}"
00:24:43.807   17:08:36	-- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev1 ']'
00:24:43.807   17:08:36	-- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1
00:24:43.807   17:08:36	-- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1
00:24:44.067  [2024-11-19 17:08:36.895122] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc
00:24:44.067  [2024-11-19 17:08:36.895222] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:24:44.067  [2024-11-19 17:08:36.895269] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580
00:24:44.067  [2024-11-19 17:08:36.895291] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:24:44.067  [2024-11-19 17:08:36.897942] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:24:44.067  [2024-11-19 17:08:36.898002] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:24:44.067  [2024-11-19 17:08:36.898093] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev1
00:24:44.067  [2024-11-19 17:08:36.898182] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:24:44.067  BaseBdev1
00:24:44.067   17:08:36	-- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}"
00:24:44.067   17:08:36	-- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev2 ']'
00:24:44.067   17:08:36	-- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev2
00:24:44.634   17:08:37	-- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2
00:24:44.634  [2024-11-19 17:08:37.435249] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc
00:24:44.635  [2024-11-19 17:08:37.435352] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:24:44.635  [2024-11-19 17:08:37.435393] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80
00:24:44.635  [2024-11-19 17:08:37.435415] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:24:44.635  [2024-11-19 17:08:37.435838] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:24:44.635  [2024-11-19 17:08:37.435896] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2
00:24:44.635  [2024-11-19 17:08:37.435985] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev2
00:24:44.635  [2024-11-19 17:08:37.436004] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev2 (3) greater than existing raid bdev raid_bdev1 (1)
00:24:44.635  [2024-11-19 17:08:37.436011] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:24:44.635  [2024-11-19 17:08:37.436047] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ab80 name raid_bdev1, state configuring
00:24:44.635  [2024-11-19 17:08:37.436102] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:24:44.635  BaseBdev2
00:24:44.635   17:08:37	-- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}"
00:24:44.635   17:08:37	-- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev3 ']'
00:24:44.635   17:08:37	-- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev3
00:24:44.893   17:08:37	-- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3
00:24:45.152  [2024-11-19 17:08:37.899332] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc
00:24:45.152  [2024-11-19 17:08:37.899442] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:24:45.152  [2024-11-19 17:08:37.899476] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480
00:24:45.152  [2024-11-19 17:08:37.899501] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:24:45.152  [2024-11-19 17:08:37.899940] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:24:45.152  [2024-11-19 17:08:37.900002] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3
00:24:45.152  [2024-11-19 17:08:37.900086] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev3
00:24:45.152  [2024-11-19 17:08:37.900110] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:24:45.152  BaseBdev3
00:24:45.152   17:08:37	-- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}"
00:24:45.152   17:08:37	-- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev4 ']'
00:24:45.152   17:08:37	-- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev4
00:24:45.411   17:08:38	-- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4
00:24:45.671  [2024-11-19 17:08:38.298964] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc
00:24:45.671  [2024-11-19 17:08:38.299089] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:24:45.671  [2024-11-19 17:08:38.299128] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780
00:24:45.671  [2024-11-19 17:08:38.299159] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:24:45.671  [2024-11-19 17:08:38.299601] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:24:45.671  [2024-11-19 17:08:38.299663] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4
00:24:45.671  [2024-11-19 17:08:38.299748] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev4
00:24:45.671  [2024-11-19 17:08:38.299772] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed
00:24:45.671  BaseBdev4
00:24:45.671   17:08:38	-- bdev/bdev_raid.sh@701 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare
00:24:45.930   17:08:38	-- bdev/bdev_raid.sh@702 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare
00:24:46.188  [2024-11-19 17:08:38.807051] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay
00:24:46.188  [2024-11-19 17:08:38.807156] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:24:46.188  [2024-11-19 17:08:38.807207] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80
00:24:46.188  [2024-11-19 17:08:38.807237] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:24:46.188  [2024-11-19 17:08:38.807693] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:24:46.188  [2024-11-19 17:08:38.807754] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare
00:24:46.188  [2024-11-19 17:08:38.807853] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev spare
00:24:46.188  [2024-11-19 17:08:38.807885] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:24:46.188  spare
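The whole teardown/recreate pass above follows one pattern per member: delete the passthru, recreate it over its backing bdev, and let the passthru's examine path rediscover the raid superblock ("raid superblock found on bdev ..." followed by "... is claimed"). Condensed sketch of bdev_raid.sh lines 694-702 as traced (base_bdevs holds BaseBdev1..BaseBdev4 in this run):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    for bdev in "${base_bdevs[@]}"; do
        $rpc -s /var/tmp/spdk-raid.sock bdev_passthru_delete "$bdev"
        $rpc -s /var/tmp/spdk-raid.sock bdev_passthru_create -b "${bdev}_malloc" -p "$bdev"
    done
    # the spare sits on a delay bdev rather than a plain malloc backing
    $rpc -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare
    $rpc -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare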
00:24:46.188   17:08:38	-- bdev/bdev_raid.sh@704 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4
00:24:46.188   17:08:38	-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:24:46.188   17:08:38	-- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:24:46.188   17:08:38	-- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f
00:24:46.188   17:08:38	-- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:24:46.188   17:08:38	-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:24:46.188   17:08:38	-- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:24:46.188   17:08:38	-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:24:46.188   17:08:38	-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:24:46.188   17:08:38	-- bdev/bdev_raid.sh@125 -- # local tmp
00:24:46.188    17:08:38	-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:24:46.188    17:08:38	-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:24:46.188  [2024-11-19 17:08:38.908019] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000b180
00:24:46.188  [2024-11-19 17:08:38.908064] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512
00:24:46.188  [2024-11-19 17:08:38.908261] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000045ea0
00:24:46.188  [2024-11-19 17:08:38.909180] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000b180
00:24:46.188  [2024-11-19 17:08:38.909206] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000b180
00:24:46.188  [2024-11-19 17:08:38.909403] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:24:46.449   17:08:39	-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:24:46.449    "name": "raid_bdev1",
00:24:46.449    "uuid": "dd2872a9-6d17-4991-9ff7-64e80303d618",
00:24:46.449    "strip_size_kb": 64,
00:24:46.449    "state": "online",
00:24:46.449    "raid_level": "raid5f",
00:24:46.449    "superblock": true,
00:24:46.449    "num_base_bdevs": 4,
00:24:46.449    "num_base_bdevs_discovered": 4,
00:24:46.449    "num_base_bdevs_operational": 4,
00:24:46.449    "base_bdevs_list": [
00:24:46.449      {
00:24:46.449        "name": "spare",
00:24:46.449        "uuid": "df888a7f-b975-5380-b504-d8717d1a0743",
00:24:46.449        "is_configured": true,
00:24:46.449        "data_offset": 2048,
00:24:46.449        "data_size": 63488
00:24:46.449      },
00:24:46.449      {
00:24:46.449        "name": "BaseBdev2",
00:24:46.449        "uuid": "a57adbf0-bd6b-5305-9d5c-1455f0a4789a",
00:24:46.449        "is_configured": true,
00:24:46.449        "data_offset": 2048,
00:24:46.449        "data_size": 63488
00:24:46.449      },
00:24:46.449      {
00:24:46.449        "name": "BaseBdev3",
00:24:46.449        "uuid": "3c1eb8e7-ce4e-5c31-bd67-9af9f4b607e6",
00:24:46.449        "is_configured": true,
00:24:46.449        "data_offset": 2048,
00:24:46.449        "data_size": 63488
00:24:46.449      },
00:24:46.449      {
00:24:46.449        "name": "BaseBdev4",
00:24:46.449        "uuid": "cb293b39-f1fc-545d-b8ce-f76ca314dd22",
00:24:46.449        "is_configured": true,
00:24:46.449        "data_offset": 2048,
00:24:46.449        "data_size": 63488
00:24:46.449      }
00:24:46.449    ]
00:24:46.449  }'
00:24:46.449   17:08:39	-- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:24:46.449   17:08:39	-- common/autotest_common.sh@10 -- # set +x
00:24:47.017   17:08:39	-- bdev/bdev_raid.sh@705 -- # verify_raid_bdev_process raid_bdev1 none none
00:24:47.017   17:08:39	-- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:24:47.017   17:08:39	-- bdev/bdev_raid.sh@184 -- # local process_type=none
00:24:47.017   17:08:39	-- bdev/bdev_raid.sh@185 -- # local target=none
00:24:47.017   17:08:39	-- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:24:47.017    17:08:39	-- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:24:47.017    17:08:39	-- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:24:47.017   17:08:39	-- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:24:47.017    "name": "raid_bdev1",
00:24:47.017    "uuid": "dd2872a9-6d17-4991-9ff7-64e80303d618",
00:24:47.017    "strip_size_kb": 64,
00:24:47.017    "state": "online",
00:24:47.017    "raid_level": "raid5f",
00:24:47.017    "superblock": true,
00:24:47.017    "num_base_bdevs": 4,
00:24:47.017    "num_base_bdevs_discovered": 4,
00:24:47.017    "num_base_bdevs_operational": 4,
00:24:47.017    "base_bdevs_list": [
00:24:47.017      {
00:24:47.017        "name": "spare",
00:24:47.017        "uuid": "df888a7f-b975-5380-b504-d8717d1a0743",
00:24:47.017        "is_configured": true,
00:24:47.017        "data_offset": 2048,
00:24:47.017        "data_size": 63488
00:24:47.017      },
00:24:47.017      {
00:24:47.017        "name": "BaseBdev2",
00:24:47.017        "uuid": "a57adbf0-bd6b-5305-9d5c-1455f0a4789a",
00:24:47.017        "is_configured": true,
00:24:47.017        "data_offset": 2048,
00:24:47.017        "data_size": 63488
00:24:47.017      },
00:24:47.017      {
00:24:47.017        "name": "BaseBdev3",
00:24:47.017        "uuid": "3c1eb8e7-ce4e-5c31-bd67-9af9f4b607e6",
00:24:47.017        "is_configured": true,
00:24:47.017        "data_offset": 2048,
00:24:47.017        "data_size": 63488
00:24:47.017      },
00:24:47.017      {
00:24:47.017        "name": "BaseBdev4",
00:24:47.017        "uuid": "cb293b39-f1fc-545d-b8ce-f76ca314dd22",
00:24:47.017        "is_configured": true,
00:24:47.017        "data_offset": 2048,
00:24:47.017        "data_size": 63488
00:24:47.017      }
00:24:47.017    ]
00:24:47.017  }'
00:24:47.017    17:08:39	-- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:24:47.277   17:08:39	-- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]]
00:24:47.277    17:08:39	-- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:24:47.277   17:08:39	-- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]]
00:24:47.277    17:08:39	-- bdev/bdev_raid.sh@706 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:24:47.277    17:08:39	-- bdev/bdev_raid.sh@706 -- # jq -r '.[].base_bdevs_list[0].name'
00:24:47.536   17:08:40	-- bdev/bdev_raid.sh@706 -- # [[ spare == \s\p\a\r\e ]]
00:24:47.536   17:08:40	-- bdev/bdev_raid.sh@709 -- # killprocess 141936
00:24:47.536   17:08:40	-- common/autotest_common.sh@936 -- # '[' -z 141936 ']'
00:24:47.536   17:08:40	-- common/autotest_common.sh@940 -- # kill -0 141936
00:24:47.536    17:08:40	-- common/autotest_common.sh@941 -- # uname
00:24:47.536   17:08:40	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:24:47.536    17:08:40	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 141936
00:24:47.536  killing process with pid 141936
00:24:47.536  Received shutdown signal, test time was about 60.000000 seconds
00:24:47.536  
00:24:47.536                                                                                                  Latency(us)
00:24:47.536  
[2024-11-19T17:08:40.400Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:24:47.536  
[2024-11-19T17:08:40.400Z]  ===================================================================================================================
00:24:47.536  
[2024-11-19T17:08:40.400Z]  Total                       :                  0.00       0.00       0.00     0.00       0.00 18446744073709551616.00       0.00
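The per-device table above is empty because bdevperf was shut down before completing any I/O: with zero samples every rate column is 0.00, and the 18446744073709551616.00 in the "min" column is 2^64, consistent with an unsigned 64-bit minimum-latency seed printed with nothing to lower it. Quick check of the constant:

    echo '2^64' | bc   # 18446744073709551616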
00:24:47.536   17:08:40	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:24:47.536   17:08:40	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:24:47.536   17:08:40	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 141936'
00:24:47.536   17:08:40	-- common/autotest_common.sh@955 -- # kill 141936
00:24:47.536  [2024-11-19 17:08:40.246945] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:24:47.536  [2024-11-19 17:08:40.247043] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:24:47.536  [2024-11-19 17:08:40.247127] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:24:47.536  [2024-11-19 17:08:40.247136] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000b180 name raid_bdev1, state offline
00:24:47.536   17:08:40	-- common/autotest_common.sh@960 -- # wait 141936
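killprocess, as traced above, is a guarded kill-and-reap: validate the pid argument, confirm the process still exists, refuse to signal a sudo wrapper directly, then kill and wait so the exit status is collected. Reconstructed shape from the traced lines only (a sketch; error handling elided, and the sudo branch is an assumed guard since the trace only shows the comparison):

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1                  # traced as: '[' -z 141936 ']'
        kill -0 "$pid" || return 0                 # already gone, nothing to reap
        local process_name=$(ps --no-headers -o comm= "$pid")   # reactor_0 here
        [ "$process_name" = sudo ] && return 1     # assumed; do not signal the wrapper
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"     # reap; bdevperf then prints its shutdown stats, as above
    }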
00:24:47.536  [2024-11-19 17:08:40.300161] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:24:47.796  ************************************
00:24:47.796  END TEST raid5f_rebuild_test_sb
00:24:47.796  ************************************
00:24:47.796   17:08:40	-- bdev/bdev_raid.sh@711 -- # return 0
00:24:47.796  
00:24:47.796  real	0m29.591s
00:24:47.796  user	0m45.267s
00:24:47.796  sys	0m4.287s
00:24:47.796   17:08:40	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:24:47.796   17:08:40	-- common/autotest_common.sh@10 -- # set +x
00:24:47.796   17:08:40	-- bdev/bdev_raid.sh@754 -- # rm -f /raidrandtest
00:24:47.796  ************************************
00:24:47.796  END TEST bdev_raid
00:24:47.796  ************************************
00:24:47.796  
00:24:47.796  real	11m31.177s
00:24:47.796  user	19m6.335s
00:24:47.796  sys	2m0.051s
00:24:47.796   17:08:40	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:24:47.796   17:08:40	-- common/autotest_common.sh@10 -- # set +x
00:24:48.055   17:08:40	-- spdk/autotest.sh@184 -- # run_test bdevperf_config /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test_config.sh
00:24:48.055   17:08:40	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:24:48.055   17:08:40	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:24:48.055   17:08:40	-- common/autotest_common.sh@10 -- # set +x
00:24:48.055  ************************************
00:24:48.055  START TEST bdevperf_config
00:24:48.055  ************************************
00:24:48.055   17:08:40	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test_config.sh
00:24:48.055  * Looking for test storage...
00:24:48.055  * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf
00:24:48.055    17:08:40	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:24:48.055     17:08:40	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:24:48.055     17:08:40	-- common/autotest_common.sh@1690 -- # lcov --version
00:24:48.055    17:08:40	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:24:48.055    17:08:40	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:24:48.055    17:08:40	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:24:48.055    17:08:40	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:24:48.055    17:08:40	-- scripts/common.sh@335 -- # IFS=.-:
00:24:48.055    17:08:40	-- scripts/common.sh@335 -- # read -ra ver1
00:24:48.055    17:08:40	-- scripts/common.sh@336 -- # IFS=.-:
00:24:48.055    17:08:40	-- scripts/common.sh@336 -- # read -ra ver2
00:24:48.055    17:08:40	-- scripts/common.sh@337 -- # local 'op=<'
00:24:48.055    17:08:40	-- scripts/common.sh@339 -- # ver1_l=2
00:24:48.055    17:08:40	-- scripts/common.sh@340 -- # ver2_l=1
00:24:48.055    17:08:40	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:24:48.055    17:08:40	-- scripts/common.sh@343 -- # case "$op" in
00:24:48.055    17:08:40	-- scripts/common.sh@344 -- # : 1
00:24:48.055    17:08:40	-- scripts/common.sh@363 -- # (( v = 0 ))
00:24:48.055    17:08:40	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:24:48.055     17:08:40	-- scripts/common.sh@364 -- # decimal 1
00:24:48.055     17:08:40	-- scripts/common.sh@352 -- # local d=1
00:24:48.055     17:08:40	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:24:48.055     17:08:40	-- scripts/common.sh@354 -- # echo 1
00:24:48.055    17:08:40	-- scripts/common.sh@364 -- # ver1[v]=1
00:24:48.055     17:08:40	-- scripts/common.sh@365 -- # decimal 2
00:24:48.055     17:08:40	-- scripts/common.sh@352 -- # local d=2
00:24:48.055     17:08:40	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:24:48.055     17:08:40	-- scripts/common.sh@354 -- # echo 2
00:24:48.055    17:08:40	-- scripts/common.sh@365 -- # ver2[v]=2
00:24:48.055    17:08:40	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:24:48.055    17:08:40	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:24:48.055    17:08:40	-- scripts/common.sh@367 -- # return 0
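The lt 1.15 2 trace above is scripts/common.sh's cmp_versions: both version strings are split on '.', '-' and ':' and compared component by component. A minimal sketch of the strictly-less-than case used here (missing components treated as 0 is an assumption; the traced script routes components through its decimal helper instead):

lt() {
    local ver1 ver2 v
    IFS='.-:' read -ra ver1 <<< "$1"
    IFS='.-:' read -ra ver2 <<< "$2"
    for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # first differing component decides
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
    done
    return 1   # equal versions are not strictly less-than
}

lt 1.15 2 && echo "lcov < 2: keep the --rc lcov_* option spelling below"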
00:24:48.055    17:08:40	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:24:48.055    17:08:40	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:24:48.055  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:24:48.055  		--rc genhtml_branch_coverage=1
00:24:48.055  		--rc genhtml_function_coverage=1
00:24:48.055  		--rc genhtml_legend=1
00:24:48.055  		--rc geninfo_all_blocks=1
00:24:48.055  		--rc geninfo_unexecuted_blocks=1
00:24:48.055  		
00:24:48.055  		'
00:24:48.055    17:08:40	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:24:48.056  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:24:48.056  		--rc genhtml_branch_coverage=1
00:24:48.056  		--rc genhtml_function_coverage=1
00:24:48.056  		--rc genhtml_legend=1
00:24:48.056  		--rc geninfo_all_blocks=1
00:24:48.056  		--rc geninfo_unexecuted_blocks=1
00:24:48.056  		
00:24:48.056  		'
00:24:48.056    17:08:40	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:24:48.056  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:24:48.056  		--rc genhtml_branch_coverage=1
00:24:48.056  		--rc genhtml_function_coverage=1
00:24:48.056  		--rc genhtml_legend=1
00:24:48.056  		--rc geninfo_all_blocks=1
00:24:48.056  		--rc geninfo_unexecuted_blocks=1
00:24:48.056  		
00:24:48.056  		'
00:24:48.056    17:08:40	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:24:48.056  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:24:48.056  		--rc genhtml_branch_coverage=1
00:24:48.056  		--rc genhtml_function_coverage=1
00:24:48.056  		--rc genhtml_legend=1
00:24:48.056  		--rc geninfo_all_blocks=1
00:24:48.056  		--rc geninfo_unexecuted_blocks=1
00:24:48.056  		
00:24:48.056  		'
00:24:48.056   17:08:40	-- bdevperf/test_config.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/common.sh
00:24:48.056    17:08:40	-- bdevperf/common.sh@5 -- # bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
00:24:48.056   17:08:40	-- bdevperf/test_config.sh@12 -- # jsonconf=/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json
00:24:48.056   17:08:40	-- bdevperf/test_config.sh@13 -- # testconf=/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf
00:24:48.056   17:08:40	-- bdevperf/test_config.sh@15 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:24:48.056   17:08:40	-- bdevperf/test_config.sh@17 -- # create_job global read Malloc0
00:24:48.056   17:08:40	-- bdevperf/common.sh@8 -- # local job_section=global
00:24:48.056   17:08:40	-- bdevperf/common.sh@9 -- # local rw=read
00:24:48.056   17:08:40	-- bdevperf/common.sh@10 -- # local filename=Malloc0
00:24:48.056   17:08:40	-- bdevperf/common.sh@12 -- # [[ global == \g\l\o\b\a\l ]]
00:24:48.056   17:08:40	-- bdevperf/common.sh@13 -- # cat
00:24:48.056   17:08:40	-- bdevperf/common.sh@18 -- # job='[global]'
00:24:48.056  
00:24:48.056   17:08:40	-- bdevperf/common.sh@19 -- # echo
00:24:48.056   17:08:40	-- bdevperf/common.sh@20 -- # cat
00:24:48.056   17:08:40	-- bdevperf/test_config.sh@18 -- # create_job job0
00:24:48.056   17:08:40	-- bdevperf/common.sh@8 -- # local job_section=job0
00:24:48.056   17:08:40	-- bdevperf/common.sh@9 -- # local rw=
00:24:48.056   17:08:40	-- bdevperf/common.sh@10 -- # local filename=
00:24:48.056   17:08:40	-- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]]
00:24:48.056   17:08:40	-- bdevperf/common.sh@18 -- # job='[job0]'
00:24:48.056   17:08:40	-- bdevperf/common.sh@19 -- # echo
00:24:48.056  
00:24:48.056   17:08:40	-- bdevperf/common.sh@20 -- # cat
00:24:48.056   17:08:40	-- bdevperf/test_config.sh@19 -- # create_job job1
00:24:48.056   17:08:40	-- bdevperf/common.sh@8 -- # local job_section=job1
00:24:48.056   17:08:40	-- bdevperf/common.sh@9 -- # local rw=
00:24:48.056   17:08:40	-- bdevperf/common.sh@10 -- # local filename=
00:24:48.056   17:08:40	-- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]]
00:24:48.056   17:08:40	-- bdevperf/common.sh@18 -- # job='[job1]'
00:24:48.056  
00:24:48.056   17:08:40	-- bdevperf/common.sh@19 -- # echo
00:24:48.056   17:08:40	-- bdevperf/common.sh@20 -- # cat
00:24:48.056   17:08:40	-- bdevperf/test_config.sh@20 -- # create_job job2
00:24:48.056   17:08:40	-- bdevperf/common.sh@8 -- # local job_section=job2
00:24:48.056   17:08:40	-- bdevperf/common.sh@9 -- # local rw=
00:24:48.056   17:08:40	-- bdevperf/common.sh@10 -- # local filename=
00:24:48.056   17:08:40	-- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]]
00:24:48.056   17:08:40	-- bdevperf/common.sh@18 -- # job='[job2]'
00:24:48.056  
00:24:48.056   17:08:40	-- bdevperf/common.sh@19 -- # echo
00:24:48.056   17:08:40	-- bdevperf/common.sh@20 -- # cat
00:24:48.056   17:08:40	-- bdevperf/test_config.sh@21 -- # create_job job3
00:24:48.056   17:08:40	-- bdevperf/common.sh@8 -- # local job_section=job3
00:24:48.056   17:08:40	-- bdevperf/common.sh@9 -- # local rw=
00:24:48.056   17:08:40	-- bdevperf/common.sh@10 -- # local filename=
00:24:48.056   17:08:40	-- bdevperf/common.sh@12 -- # [[ job3 == \g\l\o\b\a\l ]]
00:24:48.056   17:08:40	-- bdevperf/common.sh@18 -- # job='[job3]'
00:24:48.056   17:08:40	-- bdevperf/common.sh@19 -- # echo
00:24:48.056  
00:24:48.056   17:08:40	-- bdevperf/common.sh@20 -- # cat
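Each create_job call above (common.sh@8-@20) appends one INI-style section to test.conf: the global call also cats a block of shared defaults, while the jobN calls emit just their section headers plus any rw=/filename= overrides. A hypothetical test.conf these four calls could produce (the real global defaults only appear as 'cat' in the trace, so the keys below are illustrative):

cat > test.conf <<'EOF'
[global]
filename=Malloc0
rw=read

[job0]
[job1]
[job2]
[job3]
EOF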
00:24:48.315    17:08:40	-- bdevperf/test_config.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf
00:24:50.851   17:08:43	-- bdevperf/test_config.sh@22 -- # bdevperf_output='[2024-11-19 17:08:40.967878] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:24:50.852  [2024-11-19 17:08:40.968105] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid142710 ]
00:24:50.852  Using job config with 4 jobs
00:24:50.852  [2024-11-19 17:08:41.120872] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:24:50.852  [2024-11-19 17:08:41.179199] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:24:50.852  cpumask for '\''job0'\'' is too big
00:24:50.852  cpumask for '\''job1'\'' is too big
00:24:50.852  cpumask for '\''job2'\'' is too big
00:24:50.852  cpumask for '\''job3'\'' is too big
00:24:50.852  Running I/O for 2 seconds...
00:24:50.852  
00:24:50.852                                                                                                  Latency(us)
00:24:50.852  
[2024-11-19T17:08:43.716Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:24:50.852  
[2024-11-19T17:08:43.716Z]  Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024)
00:24:50.852  	 Malloc0             :       2.02   29854.14      29.15       0.00     0.00    8569.33    1365.33   11983.73
00:24:50.852  
[2024-11-19T17:08:43.716Z]  Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024)
00:24:50.852  	 Malloc0             :       2.02   29834.08      29.13       0.00     0.00    8560.24    1357.53   10673.01
00:24:50.852  
[2024-11-19T17:08:43.716Z]  Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024)
00:24:50.852  	 Malloc0             :       2.02   29814.03      29.12       0.00     0.00    8552.26    1365.33    9986.44
00:24:50.852  
[2024-11-19T17:08:43.716Z]  Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024)
00:24:50.852  	 Malloc0             :       2.02   29793.20      29.09       0.00     0.00    8545.13    1341.93    9986.44
00:24:50.852  
[2024-11-19T17:08:43.716Z]  ===================================================================================================================
00:24:50.852  
[2024-11-19T17:08:43.716Z]  Total                       :             119295.44     116.50       0.00     0.00    8556.74    1341.93   11983.73'
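A quick consistency check on the table: with 1024-byte IOs, MiB/s is IOPS * 1024 / 2^20, i.e. IOPS / 1024:

awk 'BEGIN { printf "%.2f\n", 29854.14 / 1024 }'    # 29.15 -> first Malloc0 row
awk 'BEGIN { printf "%.2f\n", 119295.44 / 1024 }'   # 116.50 -> the Total row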
00:24:50.852    17:08:43	-- bdevperf/common.sh@32 -- # grep -oE '[0-9]+'
00:24:50.852    17:08:43	-- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs'
00:24:50.852   17:08:43	-- bdevperf/test_config.sh@23 -- # [[ 4 == \4 ]]
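The two grep traces above are common.sh's get_num_jobs; reconstructed from the trace, the @23 assertion amounts to:

get_num_jobs() {
    echo "$1" | grep -oE 'Using job config with [0-9]+ jobs' | grep -oE '[0-9]+'
}

[[ $(get_num_jobs "$bdevperf_output") == "4" ]]   # test_config.sh@23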
00:24:50.852    17:08:43	-- bdevperf/test_config.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -C -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf
00:24:51.111  [2024-11-19 17:08:43.720238] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:24:51.111  [2024-11-19 17:08:43.720591] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid142752 ]
00:24:51.111  [2024-11-19 17:08:43.875679] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:24:51.111  [2024-11-19 17:08:43.957998] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:24:51.369  cpumask for 'job0' is too big
00:24:51.369  cpumask for 'job1' is too big
00:24:51.369  cpumask for 'job2' is too big
00:24:51.369  cpumask for 'job3' is too big
00:24:53.902   17:08:46	-- bdevperf/test_config.sh@25 -- # bdevperf_output='Using job config with 4 jobs
00:24:53.902  Running I/O for 2 seconds...
00:24:53.902  
00:24:53.902                                                                                                  Latency(us)
00:24:53.902  
[2024-11-19T17:08:46.766Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:24:53.902  
[2024-11-19T17:08:46.766Z]  Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024)
00:24:53.902  	 Malloc0             :       2.01   29159.31      28.48       0.00     0.00    8770.27    2293.76   19723.22
00:24:53.902  
[2024-11-19T17:08:46.766Z]  Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024)
00:24:53.902  	 Malloc0             :       2.02   29181.48      28.50       0.00     0.00    8739.19    2246.95   17476.27
00:24:53.902  
[2024-11-19T17:08:46.766Z]  Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024)
00:24:53.902  	 Malloc0             :       2.02   29162.56      28.48       0.00     0.00    8721.07    2200.14   15229.32
00:24:53.902  
[2024-11-19T17:08:46.766Z]  Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024)
00:24:53.902  	 Malloc0             :       2.02   29237.64      28.55       0.00     0.00    8676.17     823.10   13169.62
00:24:53.902  
[2024-11-19T17:08:46.766Z]  ===================================================================================================================
00:24:53.902  
[2024-11-19T17:08:46.766Z]  Total                       :             116740.99     114.00       0.00     0.00    8726.58     823.10   19723.22'
00:24:53.902   17:08:46	-- bdevperf/test_config.sh@27 -- # cleanup
00:24:53.902   17:08:46	-- bdevperf/common.sh@36 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf
00:24:53.902   17:08:46	-- bdevperf/test_config.sh@29 -- # create_job job0 write Malloc0
00:24:53.902   17:08:46	-- bdevperf/common.sh@8 -- # local job_section=job0
00:24:53.902   17:08:46	-- bdevperf/common.sh@9 -- # local rw=write
00:24:53.902   17:08:46	-- bdevperf/common.sh@10 -- # local filename=Malloc0
00:24:53.902   17:08:46	-- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]]
00:24:53.902   17:08:46	-- bdevperf/common.sh@18 -- # job='[job0]'
00:24:53.902   17:08:46	-- bdevperf/common.sh@19 -- # echo
00:24:53.902  
00:24:53.902   17:08:46	-- bdevperf/common.sh@20 -- # cat
00:24:53.902   17:08:46	-- bdevperf/test_config.sh@30 -- # create_job job1 write Malloc0
00:24:53.902   17:08:46	-- bdevperf/common.sh@8 -- # local job_section=job1
00:24:53.902   17:08:46	-- bdevperf/common.sh@9 -- # local rw=write
00:24:53.902   17:08:46	-- bdevperf/common.sh@10 -- # local filename=Malloc0
00:24:53.902   17:08:46	-- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]]
00:24:53.902   17:08:46	-- bdevperf/common.sh@18 -- # job='[job1]'
00:24:53.902  
00:24:53.902   17:08:46	-- bdevperf/common.sh@19 -- # echo
00:24:53.902   17:08:46	-- bdevperf/common.sh@20 -- # cat
00:24:53.902   17:08:46	-- bdevperf/test_config.sh@31 -- # create_job job2 write Malloc0
00:24:53.902   17:08:46	-- bdevperf/common.sh@8 -- # local job_section=job2
00:24:53.902   17:08:46	-- bdevperf/common.sh@9 -- # local rw=write
00:24:53.902   17:08:46	-- bdevperf/common.sh@10 -- # local filename=Malloc0
00:24:53.902   17:08:46	-- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]]
00:24:53.902   17:08:46	-- bdevperf/common.sh@18 -- # job='[job2]'
00:24:53.902  
00:24:53.902   17:08:46	-- bdevperf/common.sh@19 -- # echo
00:24:53.902   17:08:46	-- bdevperf/common.sh@20 -- # cat
00:24:53.902    17:08:46	-- bdevperf/test_config.sh@32 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf
00:24:56.506   17:08:49	-- bdevperf/test_config.sh@32 -- # bdevperf_output='[2024-11-19 17:08:46.503590] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:24:56.506  [2024-11-19 17:08:46.503804] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid142791 ]
00:24:56.506  Using job config with 3 jobs
00:24:56.506  [2024-11-19 17:08:46.655382] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:24:56.506  [2024-11-19 17:08:46.721498] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:24:56.506  cpumask for '\''job0'\'' is too big
00:24:56.506  cpumask for '\''job1'\'' is too big
00:24:56.506  cpumask for '\''job2'\'' is too big
00:24:56.506  Running I/O for 2 seconds...
00:24:56.506  
00:24:56.506                                                                                                  Latency(us)
00:24:56.506  
[2024-11-19T17:08:49.370Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:24:56.506  
[2024-11-19T17:08:49.370Z]  Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024)
00:24:56.506  	 Malloc0             :       2.01   40263.43      39.32       0.00     0.00    6351.37    1607.19    9986.44
00:24:56.506  
[2024-11-19T17:08:49.370Z]  Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024)
00:24:56.506  	 Malloc0             :       2.01   40231.69      39.29       0.00     0.00    6343.65    1575.98    8488.47
00:24:56.506  
[2024-11-19T17:08:49.370Z]  Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024)
00:24:56.506  	 Malloc0             :       2.01   40282.36      39.34       0.00     0.00    6323.58     803.60    6990.51
00:24:56.506  
[2024-11-19T17:08:49.370Z]  ===================================================================================================================
00:24:56.506  
[2024-11-19T17:08:49.370Z]  Total                       :             120777.49     117.95       0.00     0.00    6339.51     803.60    9986.44'
00:24:56.507    17:08:49	-- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs'
00:24:56.507    17:08:49	-- bdevperf/common.sh@32 -- # grep -oE '[0-9]+'
00:24:56.507   17:08:49	-- bdevperf/test_config.sh@33 -- # [[ 3 == \3 ]]
00:24:56.507   17:08:49	-- bdevperf/test_config.sh@35 -- # cleanup
00:24:56.507   17:08:49	-- bdevperf/common.sh@36 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf
00:24:56.507   17:08:49	-- bdevperf/test_config.sh@37 -- # create_job global rw Malloc0:Malloc1
00:24:56.507   17:08:49	-- bdevperf/common.sh@8 -- # local job_section=global
00:24:56.507   17:08:49	-- bdevperf/common.sh@9 -- # local rw=rw
00:24:56.507   17:08:49	-- bdevperf/common.sh@10 -- # local filename=Malloc0:Malloc1
00:24:56.507   17:08:49	-- bdevperf/common.sh@12 -- # [[ global == \g\l\o\b\a\l ]]
00:24:56.507   17:08:49	-- bdevperf/common.sh@13 -- # cat
00:24:56.507   17:08:49	-- bdevperf/common.sh@18 -- # job='[global]'
00:24:56.507  
00:24:56.507   17:08:49	-- bdevperf/common.sh@19 -- # echo
00:24:56.507   17:08:49	-- bdevperf/common.sh@20 -- # cat
00:24:56.507   17:08:49	-- bdevperf/test_config.sh@38 -- # create_job job0
00:24:56.507   17:08:49	-- bdevperf/common.sh@8 -- # local job_section=job0
00:24:56.507   17:08:49	-- bdevperf/common.sh@9 -- # local rw=
00:24:56.507   17:08:49	-- bdevperf/common.sh@10 -- # local filename=
00:24:56.507   17:08:49	-- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]]
00:24:56.507   17:08:49	-- bdevperf/common.sh@18 -- # job='[job0]'
00:24:56.507  
00:24:56.507   17:08:49	-- bdevperf/common.sh@19 -- # echo
00:24:56.507   17:08:49	-- bdevperf/common.sh@20 -- # cat
00:24:56.507   17:08:49	-- bdevperf/test_config.sh@39 -- # create_job job1
00:24:56.507   17:08:49	-- bdevperf/common.sh@8 -- # local job_section=job1
00:24:56.507   17:08:49	-- bdevperf/common.sh@9 -- # local rw=
00:24:56.507   17:08:49	-- bdevperf/common.sh@10 -- # local filename=
00:24:56.507   17:08:49	-- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]]
00:24:56.507   17:08:49	-- bdevperf/common.sh@18 -- # job='[job1]'
00:24:56.507  
00:24:56.507   17:08:49	-- bdevperf/common.sh@19 -- # echo
00:24:56.507   17:08:49	-- bdevperf/common.sh@20 -- # cat
00:24:56.507   17:08:49	-- bdevperf/test_config.sh@40 -- # create_job job2
00:24:56.507   17:08:49	-- bdevperf/common.sh@8 -- # local job_section=job2
00:24:56.507   17:08:49	-- bdevperf/common.sh@9 -- # local rw=
00:24:56.507   17:08:49	-- bdevperf/common.sh@10 -- # local filename=
00:24:56.507   17:08:49	-- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]]
00:24:56.507   17:08:49	-- bdevperf/common.sh@18 -- # job='[job2]'
00:24:56.507  
00:24:56.507   17:08:49	-- bdevperf/common.sh@19 -- # echo
00:24:56.507   17:08:49	-- bdevperf/common.sh@20 -- # cat
00:24:56.507   17:08:49	-- bdevperf/test_config.sh@41 -- # create_job job3
00:24:56.507   17:08:49	-- bdevperf/common.sh@8 -- # local job_section=job3
00:24:56.507   17:08:49	-- bdevperf/common.sh@9 -- # local rw=
00:24:56.507   17:08:49	-- bdevperf/common.sh@10 -- # local filename=
00:24:56.507   17:08:49	-- bdevperf/common.sh@12 -- # [[ job3 == \g\l\o\b\a\l ]]
00:24:56.507   17:08:49	-- bdevperf/common.sh@18 -- # job='[job3]'
00:24:56.507  
00:24:56.507   17:08:49	-- bdevperf/common.sh@19 -- # echo
00:24:56.507   17:08:49	-- bdevperf/common.sh@20 -- # cat
00:24:56.507    17:08:49	-- bdevperf/test_config.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf
00:24:59.795   17:08:52	-- bdevperf/test_config.sh@42 -- # bdevperf_output='[2024-11-19 17:08:49.313442] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:24:59.795  [2024-11-19 17:08:49.314223] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid142832 ]
00:24:59.795  Using job config with 4 jobs
00:24:59.795  [2024-11-19 17:08:49.468551] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:24:59.795  [2024-11-19 17:08:49.538830] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:24:59.795  cpumask for '\''job0'\'' is too big
00:24:59.795  cpumask for '\''job1'\'' is too big
00:24:59.795  cpumask for '\''job2'\'' is too big
00:24:59.795  cpumask for '\''job3'\'' is too big
00:24:59.795  Running I/O for 2 seconds...
00:24:59.795  
00:24:59.795                                                                                                  Latency(us)
00:24:59.795  
[2024-11-19T17:08:52.659Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:24:59.795  
[2024-11-19T17:08:52.659Z]  Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024)
00:24:59.795  	 Malloc0             :       2.03   15405.98      15.04       0.00     0.00   16604.26    3027.14   25964.74
00:24:59.795  
[2024-11-19T17:08:52.659Z]  Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024)
00:24:59.795  	 Malloc1             :       2.03   15395.43      15.03       0.00     0.00   16604.22    3651.29   25964.74
00:24:59.795  
[2024-11-19T17:08:52.660Z]  Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024)
00:24:59.796  	 Malloc0             :       2.03   15385.41      15.02       0.00     0.00   16569.38    3011.54   23093.64
00:24:59.796  
[2024-11-19T17:08:52.660Z]  Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024)
00:24:59.796  	 Malloc1             :       2.03   15375.22      15.01       0.00     0.00   16568.77    3573.27   23218.47
00:24:59.796  
[2024-11-19T17:08:52.660Z]  Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024)
00:24:59.796  	 Malloc0             :       2.03   15365.20      15.01       0.00     0.00   16529.67    3042.74   20222.54
00:24:59.796  
[2024-11-19T17:08:52.660Z]  Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024)
00:24:59.796  	 Malloc1             :       2.03   15354.20      14.99       0.00     0.00   16531.02    3744.91   20097.71
00:24:59.796  
[2024-11-19T17:08:52.660Z]  Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024)
00:24:59.796  	 Malloc0             :       2.04   15344.26      14.98       0.00     0.00   16492.61    3058.35   18474.91
00:24:59.796  
[2024-11-19T17:08:52.660Z]  Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024)
00:24:59.796  	 Malloc1             :       2.04   15444.42      15.08       0.00     0.00   16375.33     756.78   18474.91
00:24:59.796  
[2024-11-19T17:08:52.660Z]  ===================================================================================================================
00:24:59.796  
[2024-11-19T17:08:52.660Z]  Total                       :             123070.13     120.19       0.00     0.00   16534.24     756.78   25964.74'
00:24:59.796    17:08:52	-- bdevperf/common.sh@32 -- # grep -oE '[0-9]+'
00:24:59.796    17:08:52	-- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs'
00:24:59.796   17:08:52	-- bdevperf/test_config.sh@43 -- # [[ 4 == \4 ]]
00:24:59.796   17:08:52	-- bdevperf/test_config.sh@44 -- # cleanup
00:24:59.796   17:08:52	-- bdevperf/common.sh@36 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf
00:24:59.796   17:08:52	-- bdevperf/test_config.sh@45 -- # trap - SIGINT SIGTERM EXIT
00:24:59.796  ************************************
00:24:59.796  END TEST bdevperf_config
00:24:59.796  ************************************
00:24:59.796  
00:24:59.796  real	0m11.348s
00:24:59.796  user	0m9.721s
00:24:59.796  sys	0m1.068s
00:24:59.796   17:08:52	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:24:59.796   17:08:52	-- common/autotest_common.sh@10 -- # set +x
00:24:59.796    17:08:52	-- spdk/autotest.sh@185 -- # uname -s
00:24:59.796   17:08:52	-- spdk/autotest.sh@185 -- # [[ Linux == Linux ]]
00:24:59.796   17:08:52	-- spdk/autotest.sh@186 -- # run_test reactor_set_interrupt /home/vagrant/spdk_repo/spdk/test/interrupt/reactor_set_interrupt.sh
00:24:59.796   17:08:52	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:24:59.796   17:08:52	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:24:59.796   17:08:52	-- common/autotest_common.sh@10 -- # set +x
00:24:59.796  ************************************
00:24:59.796  START TEST reactor_set_interrupt
00:24:59.796  ************************************
00:24:59.796   17:08:52	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/interrupt/reactor_set_interrupt.sh
00:24:59.796  * Looking for test storage...
00:24:59.796  * Found test storage at /home/vagrant/spdk_repo/spdk/test/interrupt
00:24:59.796    17:08:52	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:24:59.796     17:08:52	-- common/autotest_common.sh@1690 -- # lcov --version
00:24:59.796     17:08:52	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:24:59.796    17:08:52	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:24:59.796    17:08:52	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:24:59.796    17:08:52	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:24:59.796    17:08:52	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:24:59.796    17:08:52	-- scripts/common.sh@335 -- # IFS=.-:
00:24:59.796    17:08:52	-- scripts/common.sh@335 -- # read -ra ver1
00:24:59.796    17:08:52	-- scripts/common.sh@336 -- # IFS=.-:
00:24:59.796    17:08:52	-- scripts/common.sh@336 -- # read -ra ver2
00:24:59.796    17:08:52	-- scripts/common.sh@337 -- # local 'op=<'
00:24:59.796    17:08:52	-- scripts/common.sh@339 -- # ver1_l=2
00:24:59.796    17:08:52	-- scripts/common.sh@340 -- # ver2_l=1
00:24:59.796    17:08:52	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:24:59.796    17:08:52	-- scripts/common.sh@343 -- # case "$op" in
00:24:59.796    17:08:52	-- scripts/common.sh@344 -- # : 1
00:24:59.796    17:08:52	-- scripts/common.sh@363 -- # (( v = 0 ))
00:24:59.796    17:08:52	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:24:59.797     17:08:52	-- scripts/common.sh@364 -- # decimal 1
00:24:59.797     17:08:52	-- scripts/common.sh@352 -- # local d=1
00:24:59.797     17:08:52	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:24:59.797     17:08:52	-- scripts/common.sh@354 -- # echo 1
00:24:59.797    17:08:52	-- scripts/common.sh@364 -- # ver1[v]=1
00:24:59.797     17:08:52	-- scripts/common.sh@365 -- # decimal 2
00:24:59.797     17:08:52	-- scripts/common.sh@352 -- # local d=2
00:24:59.797     17:08:52	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:24:59.797     17:08:52	-- scripts/common.sh@354 -- # echo 2
00:24:59.797    17:08:52	-- scripts/common.sh@365 -- # ver2[v]=2
00:24:59.797    17:08:52	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:24:59.797    17:08:52	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:24:59.797    17:08:52	-- scripts/common.sh@367 -- # return 0
00:24:59.797    17:08:52	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:24:59.797    17:08:52	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:24:59.797  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:24:59.797  		--rc genhtml_branch_coverage=1
00:24:59.797  		--rc genhtml_function_coverage=1
00:24:59.797  		--rc genhtml_legend=1
00:24:59.797  		--rc geninfo_all_blocks=1
00:24:59.797  		--rc geninfo_unexecuted_blocks=1
00:24:59.797  		
00:24:59.797  		'
00:24:59.797    17:08:52	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:24:59.797  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:24:59.797  		--rc genhtml_branch_coverage=1
00:24:59.797  		--rc genhtml_function_coverage=1
00:24:59.797  		--rc genhtml_legend=1
00:24:59.797  		--rc geninfo_all_blocks=1
00:24:59.797  		--rc geninfo_unexecuted_blocks=1
00:24:59.797  		
00:24:59.797  		'
00:24:59.797    17:08:52	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:24:59.797  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:24:59.797  		--rc genhtml_branch_coverage=1
00:24:59.797  		--rc genhtml_function_coverage=1
00:24:59.797  		--rc genhtml_legend=1
00:24:59.797  		--rc geninfo_all_blocks=1
00:24:59.797  		--rc geninfo_unexecuted_blocks=1
00:24:59.797  		
00:24:59.797  		'
00:24:59.797    17:08:52	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:24:59.797  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:24:59.797  		--rc genhtml_branch_coverage=1
00:24:59.797  		--rc genhtml_function_coverage=1
00:24:59.797  		--rc genhtml_legend=1
00:24:59.797  		--rc geninfo_all_blocks=1
00:24:59.797  		--rc geninfo_unexecuted_blocks=1
00:24:59.797  		
00:24:59.797  		'
00:24:59.797   17:08:52	-- interrupt/reactor_set_interrupt.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/interrupt/interrupt_common.sh
00:24:59.797      17:08:52	-- interrupt/interrupt_common.sh@5 -- # dirname /home/vagrant/spdk_repo/spdk/test/interrupt/reactor_set_interrupt.sh
00:24:59.797     17:08:52	-- interrupt/interrupt_common.sh@5 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/interrupt
00:24:59.797    17:08:52	-- interrupt/interrupt_common.sh@5 -- # testdir=/home/vagrant/spdk_repo/spdk/test/interrupt
00:24:59.797     17:08:52	-- interrupt/interrupt_common.sh@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/interrupt/../..
00:24:59.797    17:08:52	-- interrupt/interrupt_common.sh@6 -- # rootdir=/home/vagrant/spdk_repo/spdk
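interrupt_common.sh@5-@6 above use the standard dirname/readlink idiom to resolve the test directory and then walk up to the repo root:

testdir=$(readlink -f "$(dirname "$0")")   # -> /home/vagrant/spdk_repo/spdk/test/interrupt
rootdir=$(readlink -f "$testdir/../..")    # -> /home/vagrant/spdk_repo/spdk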
00:24:59.797    17:08:52	-- interrupt/interrupt_common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh
00:24:59.797     17:08:52	-- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd
00:24:59.797     17:08:52	-- common/autotest_common.sh@34 -- # set -e
00:24:59.797     17:08:52	-- common/autotest_common.sh@35 -- # shopt -s nullglob
00:24:59.797     17:08:52	-- common/autotest_common.sh@36 -- # shopt -s extglob
00:24:59.797     17:08:52	-- common/autotest_common.sh@38 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]]
00:24:59.797     17:08:52	-- common/autotest_common.sh@39 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh
00:24:59.797      17:08:52	-- common/build_config.sh@1 -- # CONFIG_WPDK_DIR=
00:24:59.797      17:08:52	-- common/build_config.sh@2 -- # CONFIG_ASAN=y
00:24:59.797      17:08:52	-- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n
00:24:59.797      17:08:52	-- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y
00:24:59.797      17:08:52	-- common/build_config.sh@5 -- # CONFIG_USDT=n
00:24:59.797      17:08:52	-- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n
00:24:59.797      17:08:52	-- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local
00:24:59.797      17:08:52	-- common/build_config.sh@8 -- # CONFIG_RBD=n
00:24:59.797      17:08:52	-- common/build_config.sh@9 -- # CONFIG_LIBDIR=
00:24:59.797      17:08:52	-- common/build_config.sh@10 -- # CONFIG_IDXD=y
00:24:59.797      17:08:52	-- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y
00:24:59.797      17:08:52	-- common/build_config.sh@12 -- # CONFIG_SMA=n
00:24:59.797      17:08:52	-- common/build_config.sh@13 -- # CONFIG_VTUNE=n
00:24:59.797      17:08:52	-- common/build_config.sh@14 -- # CONFIG_TSAN=n
00:24:59.797      17:08:52	-- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y
00:24:59.797      17:08:52	-- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR=
00:24:59.797      17:08:52	-- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n
00:24:59.797      17:08:52	-- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y
00:24:59.797      17:08:52	-- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:24:59.797      17:08:52	-- common/build_config.sh@20 -- # CONFIG_LTO=n
00:24:59.797      17:08:52	-- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y
00:24:59.797      17:08:52	-- common/build_config.sh@22 -- # CONFIG_CET=n
00:24:59.797      17:08:52	-- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n
00:24:59.797      17:08:52	-- common/build_config.sh@24 -- # CONFIG_OCF_PATH=
00:24:59.797      17:08:52	-- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y
00:24:59.797      17:08:52	-- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=n
00:24:59.797      17:08:52	-- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n
00:24:59.797      17:08:52	-- common/build_config.sh@28 -- # CONFIG_UBLK=n
00:24:59.797      17:08:52	-- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y
00:24:59.797      17:08:52	-- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH=
00:24:59.797      17:08:52	-- common/build_config.sh@31 -- # CONFIG_OCF=n
00:24:59.797      17:08:52	-- common/build_config.sh@32 -- # CONFIG_FUSE=n
00:24:59.797      17:08:52	-- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR=
00:24:59.797      17:08:52	-- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB=
00:24:59.797      17:08:52	-- common/build_config.sh@35 -- # CONFIG_FUZZER=n
00:24:59.797      17:08:52	-- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/dpdk/build
00:24:59.797      17:08:52	-- common/build_config.sh@37 -- # CONFIG_CRYPTO=n
00:24:59.797      17:08:52	-- common/build_config.sh@38 -- # CONFIG_PGO_USE=n
00:24:59.797      17:08:52	-- common/build_config.sh@39 -- # CONFIG_VHOST=y
00:24:59.797      17:08:52	-- common/build_config.sh@40 -- # CONFIG_DAOS=n
00:24:59.797      17:08:52	-- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR=//home/vagrant/spdk_repo/dpdk/build/include
00:24:59.797      17:08:52	-- common/build_config.sh@42 -- # CONFIG_DAOS_DIR=
00:24:59.797      17:08:52	-- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=y
00:24:59.797      17:08:52	-- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y
00:24:59.797      17:08:52	-- common/build_config.sh@45 -- # CONFIG_VIRTIO=y
00:24:59.797      17:08:52	-- common/build_config.sh@46 -- # CONFIG_COVERAGE=y
00:24:59.797      17:08:52	-- common/build_config.sh@47 -- # CONFIG_RDMA=y
00:24:59.797      17:08:52	-- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio
00:24:59.797      17:08:52	-- common/build_config.sh@49 -- # CONFIG_URING_PATH=
00:24:59.797      17:08:52	-- common/build_config.sh@50 -- # CONFIG_XNVME=n
00:24:59.797      17:08:52	-- common/build_config.sh@51 -- # CONFIG_VFIO_USER=n
00:24:59.797      17:08:52	-- common/build_config.sh@52 -- # CONFIG_ARCH=native
00:24:59.797      17:08:52	-- common/build_config.sh@53 -- # CONFIG_URING_ZNS=n
00:24:59.797      17:08:52	-- common/build_config.sh@54 -- # CONFIG_WERROR=y
00:24:59.797      17:08:52	-- common/build_config.sh@55 -- # CONFIG_HAVE_LIBBSD=n
00:24:59.797      17:08:52	-- common/build_config.sh@56 -- # CONFIG_UBSAN=y
00:24:59.797      17:08:52	-- common/build_config.sh@57 -- # CONFIG_IPSEC_MB_DIR=
00:24:59.797      17:08:52	-- common/build_config.sh@58 -- # CONFIG_GOLANG=n
00:24:59.797      17:08:52	-- common/build_config.sh@59 -- # CONFIG_ISAL=y
00:24:59.797      17:08:52	-- common/build_config.sh@60 -- # CONFIG_IDXD_KERNEL=n
00:24:59.797      17:08:52	-- common/build_config.sh@61 -- # CONFIG_DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib
00:24:59.797      17:08:52	-- common/build_config.sh@62 -- # CONFIG_RDMA_PROV=verbs
00:24:59.797      17:08:52	-- common/build_config.sh@63 -- # CONFIG_APPS=y
00:24:59.797      17:08:52	-- common/build_config.sh@64 -- # CONFIG_SHARED=n
00:24:59.797      17:08:52	-- common/build_config.sh@65 -- # CONFIG_FC_PATH=
00:24:59.797      17:08:52	-- common/build_config.sh@66 -- # CONFIG_DPDK_PKG_CONFIG=n
00:24:59.797      17:08:52	-- common/build_config.sh@67 -- # CONFIG_FC=n
00:24:59.797      17:08:52	-- common/build_config.sh@68 -- # CONFIG_AVAHI=n
00:24:59.797      17:08:52	-- common/build_config.sh@69 -- # CONFIG_FIO_PLUGIN=y
00:24:59.797      17:08:52	-- common/build_config.sh@70 -- # CONFIG_RAID5F=y
00:24:59.797      17:08:52	-- common/build_config.sh@71 -- # CONFIG_EXAMPLES=y
00:24:59.797      17:08:52	-- common/build_config.sh@72 -- # CONFIG_TESTS=y
00:24:59.797      17:08:52	-- common/build_config.sh@73 -- # CONFIG_CRYPTO_MLX5=n
00:24:59.797      17:08:52	-- common/build_config.sh@74 -- # CONFIG_MAX_LCORES=
00:24:59.797      17:08:52	-- common/build_config.sh@75 -- # CONFIG_IPSEC_MB=n
00:24:59.797      17:08:52	-- common/build_config.sh@76 -- # CONFIG_DEBUG=y
00:24:59.797      17:08:52	-- common/build_config.sh@77 -- # CONFIG_DPDK_COMPRESSDEV=n
00:24:59.797      17:08:52	-- common/build_config.sh@78 -- # CONFIG_CROSS_PREFIX=
00:24:59.797      17:08:52	-- common/build_config.sh@79 -- # CONFIG_URING=n
00:24:59.797     17:08:52	-- common/autotest_common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh
00:24:59.797        17:08:52	-- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh
00:24:59.797       17:08:52	-- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common
00:24:59.797      17:08:52	-- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common
00:24:59.797      17:08:52	-- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk
00:24:59.797      17:08:52	-- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin
00:24:59.797      17:08:52	-- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app
00:24:59.797      17:08:52	-- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples
00:24:59.797      17:08:52	-- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz")
00:24:59.797      17:08:52	-- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt")
00:24:59.798      17:08:52	-- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt")
00:24:59.798      17:08:52	-- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost")
00:24:59.798      17:08:52	-- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd")
00:24:59.798      17:08:52	-- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt")
00:24:59.798      17:08:52	-- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]]
00:24:59.798      17:08:52	-- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H
00:24:59.798  #define SPDK_CONFIG_H
00:24:59.798  #define SPDK_CONFIG_APPS 1
00:24:59.798  #define SPDK_CONFIG_ARCH native
00:24:59.798  #define SPDK_CONFIG_ASAN 1
00:24:59.798  #undef SPDK_CONFIG_AVAHI
00:24:59.798  #undef SPDK_CONFIG_CET
00:24:59.798  #define SPDK_CONFIG_COVERAGE 1
00:24:59.798  #define SPDK_CONFIG_CROSS_PREFIX 
00:24:59.798  #undef SPDK_CONFIG_CRYPTO
00:24:59.798  #undef SPDK_CONFIG_CRYPTO_MLX5
00:24:59.798  #undef SPDK_CONFIG_CUSTOMOCF
00:24:59.798  #undef SPDK_CONFIG_DAOS
00:24:59.798  #define SPDK_CONFIG_DAOS_DIR 
00:24:59.798  #define SPDK_CONFIG_DEBUG 1
00:24:59.798  #undef SPDK_CONFIG_DPDK_COMPRESSDEV
00:24:59.798  #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/dpdk/build
00:24:59.798  #define SPDK_CONFIG_DPDK_INC_DIR //home/vagrant/spdk_repo/dpdk/build/include
00:24:59.798  #define SPDK_CONFIG_DPDK_LIB_DIR /home/vagrant/spdk_repo/dpdk/build/lib
00:24:59.798  #undef SPDK_CONFIG_DPDK_PKG_CONFIG
00:24:59.798  #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:24:59.798  #define SPDK_CONFIG_EXAMPLES 1
00:24:59.798  #undef SPDK_CONFIG_FC
00:24:59.798  #define SPDK_CONFIG_FC_PATH 
00:24:59.798  #define SPDK_CONFIG_FIO_PLUGIN 1
00:24:59.798  #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio
00:24:59.798  #undef SPDK_CONFIG_FUSE
00:24:59.798  #undef SPDK_CONFIG_FUZZER
00:24:59.798  #define SPDK_CONFIG_FUZZER_LIB 
00:24:59.798  #undef SPDK_CONFIG_GOLANG
00:24:59.798  #undef SPDK_CONFIG_HAVE_ARC4RANDOM
00:24:59.798  #define SPDK_CONFIG_HAVE_EXECINFO_H 1
00:24:59.798  #undef SPDK_CONFIG_HAVE_LIBARCHIVE
00:24:59.798  #undef SPDK_CONFIG_HAVE_LIBBSD
00:24:59.798  #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1
00:24:59.798  #define SPDK_CONFIG_IDXD 1
00:24:59.798  #undef SPDK_CONFIG_IDXD_KERNEL
00:24:59.798  #undef SPDK_CONFIG_IPSEC_MB
00:24:59.798  #define SPDK_CONFIG_IPSEC_MB_DIR 
00:24:59.798  #define SPDK_CONFIG_ISAL 1
00:24:59.798  #define SPDK_CONFIG_ISAL_CRYPTO 1
00:24:59.798  #define SPDK_CONFIG_ISCSI_INITIATOR 1
00:24:59.798  #define SPDK_CONFIG_LIBDIR 
00:24:59.798  #undef SPDK_CONFIG_LTO
00:24:59.798  #define SPDK_CONFIG_MAX_LCORES 
00:24:59.798  #define SPDK_CONFIG_NVME_CUSE 1
00:24:59.798  #undef SPDK_CONFIG_OCF
00:24:59.798  #define SPDK_CONFIG_OCF_PATH 
00:24:59.798  #define SPDK_CONFIG_OPENSSL_PATH 
00:24:59.798  #undef SPDK_CONFIG_PGO_CAPTURE
00:24:59.798  #undef SPDK_CONFIG_PGO_USE
00:24:59.798  #define SPDK_CONFIG_PREFIX /usr/local
00:24:59.798  #define SPDK_CONFIG_RAID5F 1
00:24:59.798  #undef SPDK_CONFIG_RBD
00:24:59.798  #define SPDK_CONFIG_RDMA 1
00:24:59.798  #define SPDK_CONFIG_RDMA_PROV verbs
00:24:59.798  #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1
00:24:59.798  #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1
00:24:59.798  #define SPDK_CONFIG_RDMA_SET_TOS 1
00:24:59.798  #undef SPDK_CONFIG_SHARED
00:24:59.798  #undef SPDK_CONFIG_SMA
00:24:59.798  #define SPDK_CONFIG_TESTS 1
00:24:59.798  #undef SPDK_CONFIG_TSAN
00:24:59.798  #undef SPDK_CONFIG_UBLK
00:24:59.798  #define SPDK_CONFIG_UBSAN 1
00:24:59.798  #define SPDK_CONFIG_UNIT_TESTS 1
00:24:59.798  #undef SPDK_CONFIG_URING
00:24:59.798  #define SPDK_CONFIG_URING_PATH 
00:24:59.798  #undef SPDK_CONFIG_URING_ZNS
00:24:59.798  #undef SPDK_CONFIG_USDT
00:24:59.798  #undef SPDK_CONFIG_VBDEV_COMPRESS
00:24:59.798  #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5
00:24:59.798  #undef SPDK_CONFIG_VFIO_USER
00:24:59.798  #define SPDK_CONFIG_VFIO_USER_DIR 
00:24:59.798  #define SPDK_CONFIG_VHOST 1
00:24:59.798  #define SPDK_CONFIG_VIRTIO 1
00:24:59.798  #undef SPDK_CONFIG_VTUNE
00:24:59.798  #define SPDK_CONFIG_VTUNE_DIR 
00:24:59.798  #define SPDK_CONFIG_WERROR 1
00:24:59.798  #define SPDK_CONFIG_WPDK_DIR 
00:24:59.798  #undef SPDK_CONFIG_XNVME
00:24:59.798  #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]]
00:24:59.798      17:08:52	-- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS ))
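The multi-line [[ ... ]] above is applications.sh verifying that the generated include/spdk/config.h really contains '#define SPDK_CONFIG_DEBUG'; the backslash-escaped run of characters at the end of the header dump is just xtrace quoting the glob pattern, not corruption. Reconstructed as a standalone sketch:

  config_has_debug() {
      local cfg=$1
      [[ -e $cfg ]] || return 1
      # $(<file) expands to the whole header; the glob matches the literal define
      [[ $(<"$cfg") == *"#define SPDK_CONFIG_DEBUG"* ]]
  }
  config_has_debug /home/vagrant/spdk_repo/spdk/include/spdk/config.h && echo "debug build confirmed"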
00:24:59.798     17:08:52	-- common/autotest_common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:24:59.798      17:08:52	-- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]]
00:24:59.798      17:08:52	-- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:24:59.798      17:08:52	-- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:24:59.798       17:08:52	-- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:24:59.798       17:08:52	-- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:24:59.798       17:08:52	-- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:24:59.798       17:08:52	-- paths/export.sh@5 -- # export PATH
00:24:59.798       17:08:52	-- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
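Each source of paths/export.sh prepends the Go, protoc, and golangci directories again, which is why the same entries repeat several times in the PATH above. That is harmless (lookup stops at the first hit), but a hedged sketch of an order-preserving de-dup, if one were wanted:

  dedup_path() {
      local IFS=: dir out=
      for dir in $PATH; do
          # keep the first occurrence of each directory, drop later repeats
          [[ ":$out:" == *":$dir:"* ]] || out=${out:+$out:}$dir
      done
      printf '%s\n' "$out"
  }
  PATH=$(dedup_path)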
00:24:59.798     17:08:52	-- common/autotest_common.sh@50 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common
00:24:59.798        17:08:52	-- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common
00:24:59.798       17:08:52	-- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm
00:24:59.798      17:08:52	-- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm
00:24:59.798       17:08:52	-- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../
00:24:59.798      17:08:52	-- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk
00:24:59.798      17:08:52	-- pm/common@16 -- # TEST_TAG=N/A
00:24:59.798      17:08:52	-- pm/common@17 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name
00:24:59.798     17:08:52	-- common/autotest_common.sh@52 -- # : 1
00:24:59.798     17:08:52	-- common/autotest_common.sh@53 -- # export RUN_NIGHTLY
00:24:59.798     17:08:52	-- common/autotest_common.sh@56 -- # : 0
00:24:59.798     17:08:52	-- common/autotest_common.sh@57 -- # export SPDK_AUTOTEST_DEBUG_APPS
00:24:59.798     17:08:52	-- common/autotest_common.sh@58 -- # : 0
00:24:59.798     17:08:52	-- common/autotest_common.sh@59 -- # export SPDK_RUN_VALGRIND
00:24:59.798     17:08:52	-- common/autotest_common.sh@60 -- # : 1
00:24:59.798     17:08:52	-- common/autotest_common.sh@61 -- # export SPDK_RUN_FUNCTIONAL_TEST
00:24:59.798     17:08:52	-- common/autotest_common.sh@62 -- # : 1
00:24:59.798     17:08:52	-- common/autotest_common.sh@63 -- # export SPDK_TEST_UNITTEST
00:24:59.798     17:08:52	-- common/autotest_common.sh@64 -- # :
00:24:59.798     17:08:52	-- common/autotest_common.sh@65 -- # export SPDK_TEST_AUTOBUILD
00:24:59.798     17:08:52	-- common/autotest_common.sh@66 -- # : 0
00:24:59.798     17:08:52	-- common/autotest_common.sh@67 -- # export SPDK_TEST_RELEASE_BUILD
00:24:59.798     17:08:52	-- common/autotest_common.sh@68 -- # : 0
00:24:59.798     17:08:52	-- common/autotest_common.sh@69 -- # export SPDK_TEST_ISAL
00:24:59.798     17:08:52	-- common/autotest_common.sh@70 -- # : 0
00:24:59.798     17:08:52	-- common/autotest_common.sh@71 -- # export SPDK_TEST_ISCSI
00:24:59.798     17:08:52	-- common/autotest_common.sh@72 -- # : 0
00:24:59.798     17:08:52	-- common/autotest_common.sh@73 -- # export SPDK_TEST_ISCSI_INITIATOR
00:24:59.798     17:08:52	-- common/autotest_common.sh@74 -- # : 1
00:24:59.798     17:08:52	-- common/autotest_common.sh@75 -- # export SPDK_TEST_NVME
00:24:59.798     17:08:52	-- common/autotest_common.sh@76 -- # : 0
00:24:59.798     17:08:52	-- common/autotest_common.sh@77 -- # export SPDK_TEST_NVME_PMR
00:24:59.798     17:08:52	-- common/autotest_common.sh@78 -- # : 0
00:24:59.798     17:08:52	-- common/autotest_common.sh@79 -- # export SPDK_TEST_NVME_BP
00:24:59.798     17:08:52	-- common/autotest_common.sh@80 -- # : 0
00:24:59.798     17:08:52	-- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME_CLI
00:24:59.798     17:08:52	-- common/autotest_common.sh@82 -- # : 0
00:24:59.798     17:08:52	-- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_CUSE
00:24:59.798     17:08:52	-- common/autotest_common.sh@84 -- # : 0
00:24:59.798     17:08:52	-- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_FDP
00:24:59.798     17:08:52	-- common/autotest_common.sh@86 -- # : 0
00:24:59.798     17:08:52	-- common/autotest_common.sh@87 -- # export SPDK_TEST_NVMF
00:24:59.798     17:08:52	-- common/autotest_common.sh@88 -- # : 0
00:24:59.798     17:08:52	-- common/autotest_common.sh@89 -- # export SPDK_TEST_VFIOUSER
00:24:59.798     17:08:52	-- common/autotest_common.sh@90 -- # : 0
00:24:59.798     17:08:52	-- common/autotest_common.sh@91 -- # export SPDK_TEST_VFIOUSER_QEMU
00:24:59.798     17:08:52	-- common/autotest_common.sh@92 -- # : 0
00:24:59.798     17:08:52	-- common/autotest_common.sh@93 -- # export SPDK_TEST_FUZZER
00:24:59.798     17:08:52	-- common/autotest_common.sh@94 -- # : 0
00:24:59.798     17:08:52	-- common/autotest_common.sh@95 -- # export SPDK_TEST_FUZZER_SHORT
00:24:59.798     17:08:52	-- common/autotest_common.sh@96 -- # : rdma
00:24:59.798     17:08:52	-- common/autotest_common.sh@97 -- # export SPDK_TEST_NVMF_TRANSPORT
00:24:59.798     17:08:52	-- common/autotest_common.sh@98 -- # : 0
00:24:59.798     17:08:52	-- common/autotest_common.sh@99 -- # export SPDK_TEST_RBD
00:24:59.798     17:08:52	-- common/autotest_common.sh@100 -- # : 0
00:24:59.798     17:08:52	-- common/autotest_common.sh@101 -- # export SPDK_TEST_VHOST
00:24:59.798     17:08:52	-- common/autotest_common.sh@102 -- # : 1
00:24:59.798     17:08:52	-- common/autotest_common.sh@103 -- # export SPDK_TEST_BLOCKDEV
00:24:59.798     17:08:52	-- common/autotest_common.sh@104 -- # : 0
00:24:59.798     17:08:52	-- common/autotest_common.sh@105 -- # export SPDK_TEST_IOAT
00:24:59.798     17:08:52	-- common/autotest_common.sh@106 -- # : 0
00:24:59.798     17:08:52	-- common/autotest_common.sh@107 -- # export SPDK_TEST_BLOBFS
00:24:59.798     17:08:52	-- common/autotest_common.sh@108 -- # : 0
00:24:59.799     17:08:52	-- common/autotest_common.sh@109 -- # export SPDK_TEST_VHOST_INIT
00:24:59.799     17:08:52	-- common/autotest_common.sh@110 -- # : 0
00:24:59.799     17:08:52	-- common/autotest_common.sh@111 -- # export SPDK_TEST_LVOL
00:24:59.799     17:08:52	-- common/autotest_common.sh@112 -- # : 0
00:24:59.799     17:08:52	-- common/autotest_common.sh@113 -- # export SPDK_TEST_VBDEV_COMPRESS
00:24:59.799     17:08:52	-- common/autotest_common.sh@114 -- # : 1
00:24:59.799     17:08:52	-- common/autotest_common.sh@115 -- # export SPDK_RUN_ASAN
00:24:59.799     17:08:52	-- common/autotest_common.sh@116 -- # : 1
00:24:59.799     17:08:52	-- common/autotest_common.sh@117 -- # export SPDK_RUN_UBSAN
00:24:59.799     17:08:52	-- common/autotest_common.sh@118 -- # : /home/vagrant/spdk_repo/dpdk/build
00:24:59.799     17:08:52	-- common/autotest_common.sh@119 -- # export SPDK_RUN_EXTERNAL_DPDK
00:24:59.799     17:08:52	-- common/autotest_common.sh@120 -- # : 0
00:24:59.799     17:08:52	-- common/autotest_common.sh@121 -- # export SPDK_RUN_NON_ROOT
00:24:59.799     17:08:52	-- common/autotest_common.sh@122 -- # : 0
00:24:59.799     17:08:52	-- common/autotest_common.sh@123 -- # export SPDK_TEST_CRYPTO
00:24:59.799     17:08:52	-- common/autotest_common.sh@124 -- # : 0
00:24:59.799     17:08:52	-- common/autotest_common.sh@125 -- # export SPDK_TEST_FTL
00:24:59.799     17:08:52	-- common/autotest_common.sh@126 -- # : 0
00:24:59.799     17:08:52	-- common/autotest_common.sh@127 -- # export SPDK_TEST_OCF
00:24:59.799     17:08:52	-- common/autotest_common.sh@128 -- # : 0
00:24:59.799     17:08:52	-- common/autotest_common.sh@129 -- # export SPDK_TEST_VMD
00:24:59.799     17:08:52	-- common/autotest_common.sh@130 -- # : 0
00:24:59.799     17:08:52	-- common/autotest_common.sh@131 -- # export SPDK_TEST_OPAL
00:24:59.799     17:08:52	-- common/autotest_common.sh@132 -- # : v22.11.4
00:24:59.799     17:08:52	-- common/autotest_common.sh@133 -- # export SPDK_TEST_NATIVE_DPDK
00:24:59.799     17:08:52	-- common/autotest_common.sh@134 -- # : true
00:24:59.799     17:08:52	-- common/autotest_common.sh@135 -- # export SPDK_AUTOTEST_X
00:24:59.799     17:08:52	-- common/autotest_common.sh@136 -- # : 1
00:24:59.799     17:08:52	-- common/autotest_common.sh@137 -- # export SPDK_TEST_RAID5
00:24:59.799     17:08:52	-- common/autotest_common.sh@138 -- # : 0
00:24:59.799     17:08:52	-- common/autotest_common.sh@139 -- # export SPDK_TEST_URING
00:24:59.799     17:08:52	-- common/autotest_common.sh@140 -- # : 0
00:24:59.799     17:08:52	-- common/autotest_common.sh@141 -- # export SPDK_TEST_USDT
00:24:59.799     17:08:52	-- common/autotest_common.sh@142 -- # : 0
00:24:59.799     17:08:52	-- common/autotest_common.sh@143 -- # export SPDK_TEST_USE_IGB_UIO
00:24:59.799     17:08:52	-- common/autotest_common.sh@144 -- # : 0
00:24:59.799     17:08:52	-- common/autotest_common.sh@145 -- # export SPDK_TEST_SCHEDULER
00:24:59.799     17:08:52	-- common/autotest_common.sh@146 -- # : 0
00:24:59.799     17:08:52	-- common/autotest_common.sh@147 -- # export SPDK_TEST_SCANBUILD
00:24:59.799     17:08:52	-- common/autotest_common.sh@148 -- # :
00:24:59.799     17:08:52	-- common/autotest_common.sh@149 -- # export SPDK_TEST_NVMF_NICS
00:24:59.799     17:08:52	-- common/autotest_common.sh@150 -- # : 0
00:24:59.799     17:08:52	-- common/autotest_common.sh@151 -- # export SPDK_TEST_SMA
00:24:59.799     17:08:52	-- common/autotest_common.sh@152 -- # : 0
00:24:59.799     17:08:52	-- common/autotest_common.sh@153 -- # export SPDK_TEST_DAOS
00:24:59.799     17:08:52	-- common/autotest_common.sh@154 -- # : 0
00:24:59.799     17:08:52	-- common/autotest_common.sh@155 -- # export SPDK_TEST_XNVME
00:24:59.799     17:08:52	-- common/autotest_common.sh@156 -- # : 0
00:24:59.799     17:08:52	-- common/autotest_common.sh@157 -- # export SPDK_TEST_ACCEL_DSA
00:24:59.799     17:08:52	-- common/autotest_common.sh@158 -- # : 0
00:24:59.799     17:08:52	-- common/autotest_common.sh@159 -- # export SPDK_TEST_ACCEL_IAA
00:24:59.799     17:08:52	-- common/autotest_common.sh@160 -- # : 0
00:24:59.799     17:08:52	-- common/autotest_common.sh@161 -- # export SPDK_TEST_ACCEL_IOAT
00:24:59.799     17:08:52	-- common/autotest_common.sh@163 -- # :
00:24:59.799     17:08:52	-- common/autotest_common.sh@164 -- # export SPDK_TEST_FUZZER_TARGET
00:24:59.799     17:08:52	-- common/autotest_common.sh@165 -- # : 0
00:24:59.799     17:08:52	-- common/autotest_common.sh@166 -- # export SPDK_TEST_NVMF_MDNS
00:24:59.799     17:08:52	-- common/autotest_common.sh@167 -- # : 0
00:24:59.799     17:08:52	-- common/autotest_common.sh@168 -- # export SPDK_JSONRPC_GO_CLIENT
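Every '-- # : 0' / '-- # export FLAG' pair above is the xtrace rendering of a default-assignment idiom: the ':' builtin is a no-op whose argument exists only to force a ${VAR:=default} expansion, so the trace shows the value that ended up in the variable (a bare ': ' means the default was the empty string). Reconstructed as a sketch, with flag names taken from the log:

  : ${SPDK_TEST_NVME:=1}     # keep a caller-supplied value, else default to 1
  export SPDK_TEST_NVME
  : ${SPDK_TEST_NVMF:=0}     # xtrace prints this line as ": 0" because the
  export SPDK_TEST_NVMF      # expansion happens before the line is echoed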
00:24:59.799     17:08:52	-- common/autotest_common.sh@171 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib
00:24:59.799     17:08:52	-- common/autotest_common.sh@171 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib
00:24:59.799     17:08:52	-- common/autotest_common.sh@172 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib
00:24:59.799     17:08:52	-- common/autotest_common.sh@172 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib
00:24:59.799     17:08:52	-- common/autotest_common.sh@173 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib
00:24:59.799     17:08:52	-- common/autotest_common.sh@173 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib
00:24:59.799     17:08:52	-- common/autotest_common.sh@174 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib
00:24:59.799     17:08:52	-- common/autotest_common.sh@174 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib
00:24:59.799     17:08:52	-- common/autotest_common.sh@177 -- # export PCI_BLOCK_SYNC_ON_RESET=yes
00:24:59.799     17:08:52	-- common/autotest_common.sh@177 -- # PCI_BLOCK_SYNC_ON_RESET=yes
00:24:59.799     17:08:52	-- common/autotest_common.sh@181 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python
00:24:59.799     17:08:52	-- common/autotest_common.sh@181 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python
00:24:59.799     17:08:52	-- common/autotest_common.sh@185 -- # export PYTHONDONTWRITEBYTECODE=1
00:24:59.799     17:08:52	-- common/autotest_common.sh@185 -- # PYTHONDONTWRITEBYTECODE=1
00:24:59.799     17:08:52	-- common/autotest_common.sh@189 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0
00:24:59.799     17:08:52	-- common/autotest_common.sh@189 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0
00:24:59.799     17:08:52	-- common/autotest_common.sh@190 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134
00:24:59.799     17:08:52	-- common/autotest_common.sh@190 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134
00:24:59.799     17:08:52	-- common/autotest_common.sh@194 -- # asan_suppression_file=/var/tmp/asan_suppression_file
00:24:59.799     17:08:52	-- common/autotest_common.sh@195 -- # rm -rf /var/tmp/asan_suppression_file
00:24:59.799     17:08:52	-- common/autotest_common.sh@196 -- # cat
00:24:59.799     17:08:52	-- common/autotest_common.sh@222 -- # echo leak:libfuse3.so
00:24:59.799     17:08:52	-- common/autotest_common.sh@224 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file
00:24:59.799     17:08:52	-- common/autotest_common.sh@224 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file
00:24:59.799     17:08:52	-- common/autotest_common.sh@226 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock
00:24:59.799     17:08:52	-- common/autotest_common.sh@226 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock
00:24:59.799     17:08:52	-- common/autotest_common.sh@228 -- # '[' -z /var/spdk/dependencies ']'
00:24:59.799     17:08:52	-- common/autotest_common.sh@231 -- # export DEPENDENCY_DIR
00:24:59.799     17:08:52	-- common/autotest_common.sh@235 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin
00:24:59.799     17:08:52	-- common/autotest_common.sh@235 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin
00:24:59.799     17:08:52	-- common/autotest_common.sh@236 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples
00:24:59.799     17:08:52	-- common/autotest_common.sh@236 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples
00:24:59.799     17:08:52	-- common/autotest_common.sh@239 -- # export QEMU_BIN=
00:24:59.799     17:08:52	-- common/autotest_common.sh@239 -- # QEMU_BIN=
00:24:59.799     17:08:52	-- common/autotest_common.sh@240 -- # export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64'
00:24:59.799     17:08:52	-- common/autotest_common.sh@240 -- # VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64'
00:24:59.799     17:08:52	-- common/autotest_common.sh@242 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer
00:24:59.799     17:08:52	-- common/autotest_common.sh@242 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer
00:24:59.799     17:08:52	-- common/autotest_common.sh@245 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:24:59.799     17:08:52	-- common/autotest_common.sh@245 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes
00:24:59.799     17:08:52	-- common/autotest_common.sh@247 -- # _LCOV_MAIN=0
00:24:59.799     17:08:52	-- common/autotest_common.sh@248 -- # _LCOV_LLVM=1
00:24:59.799     17:08:52	-- common/autotest_common.sh@249 -- # _LCOV=
00:24:59.799     17:08:52	-- common/autotest_common.sh@250 -- # [[ '' == *clang* ]]
00:24:59.799     17:08:52	-- common/autotest_common.sh@250 -- # [[ 0 -eq 1 ]]
00:24:59.799     17:08:52	-- common/autotest_common.sh@252 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh'
00:24:59.799     17:08:52	-- common/autotest_common.sh@253 -- # _lcov_opt[_LCOV_MAIN]=
00:24:59.799     17:08:52	-- common/autotest_common.sh@255 -- # lcov_opt=
00:24:59.799     17:08:52	-- common/autotest_common.sh@258 -- # '[' 0 -eq 0 ']'
00:24:59.799     17:08:52	-- common/autotest_common.sh@259 -- # export valgrind=
00:24:59.799     17:08:52	-- common/autotest_common.sh@259 -- # valgrind=
00:24:59.799      17:08:52	-- common/autotest_common.sh@265 -- # uname -s
00:24:59.799     17:08:52	-- common/autotest_common.sh@265 -- # '[' Linux = Linux ']'
00:24:59.799     17:08:52	-- common/autotest_common.sh@266 -- # HUGEMEM=4096
00:24:59.799     17:08:52	-- common/autotest_common.sh@267 -- # export CLEAR_HUGE=yes
00:24:59.799     17:08:52	-- common/autotest_common.sh@267 -- # CLEAR_HUGE=yes
00:24:59.799     17:08:52	-- common/autotest_common.sh@268 -- # [[ 0 -eq 1 ]]
00:24:59.799     17:08:52	-- common/autotest_common.sh@268 -- # [[ 0 -eq 1 ]]
00:24:59.799     17:08:52	-- common/autotest_common.sh@275 -- # MAKE=make
00:24:59.799     17:08:52	-- common/autotest_common.sh@276 -- # MAKEFLAGS=-j10
00:24:59.799     17:08:52	-- common/autotest_common.sh@292 -- # export HUGEMEM=4096
00:24:59.799     17:08:52	-- common/autotest_common.sh@292 -- # HUGEMEM=4096
00:24:59.799     17:08:52	-- common/autotest_common.sh@294 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']'
00:24:59.799     17:08:52	-- common/autotest_common.sh@299 -- # NO_HUGE=()
00:24:59.800     17:08:52	-- common/autotest_common.sh@300 -- # TEST_MODE=
00:24:59.800     17:08:52	-- common/autotest_common.sh@319 -- # [[ -z 142911 ]]
00:24:59.800     17:08:52	-- common/autotest_common.sh@319 -- # kill -0 142911
00:24:59.800     17:08:52	-- common/autotest_common.sh@1675 -- # set_test_storage 2147483648
00:24:59.800     17:08:52	-- common/autotest_common.sh@329 -- # [[ -v testdir ]]
00:24:59.800     17:08:52	-- common/autotest_common.sh@331 -- # local requested_size=2147483648
00:24:59.800     17:08:52	-- common/autotest_common.sh@332 -- # local mount target_dir
00:24:59.800     17:08:52	-- common/autotest_common.sh@334 -- # local -A mounts fss sizes avails uses
00:24:59.800     17:08:52	-- common/autotest_common.sh@335 -- # local source fs size avail mount use
00:24:59.800     17:08:52	-- common/autotest_common.sh@337 -- # local storage_fallback storage_candidates
00:24:59.800      17:08:52	-- common/autotest_common.sh@339 -- # mktemp -udt spdk.XXXXXX
00:24:59.800     17:08:52	-- common/autotest_common.sh@339 -- # storage_fallback=/tmp/spdk.M46TMJ
00:24:59.800     17:08:52	-- common/autotest_common.sh@344 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback")
00:24:59.800     17:08:52	-- common/autotest_common.sh@346 -- # [[ -n '' ]]
00:24:59.800     17:08:52	-- common/autotest_common.sh@351 -- # [[ -n '' ]]
00:24:59.800     17:08:52	-- common/autotest_common.sh@356 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/interrupt /tmp/spdk.M46TMJ/tests/interrupt /tmp/spdk.M46TMJ
00:24:59.800     17:08:52	-- common/autotest_common.sh@359 -- # requested_size=2214592512
00:24:59.800     17:08:52	-- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount
00:24:59.800      17:08:52	-- common/autotest_common.sh@328 -- # df -T
00:24:59.800      17:08:52	-- common/autotest_common.sh@328 -- # grep -v Filesystem
00:24:59.800     17:08:52	-- common/autotest_common.sh@362 -- # mounts["$mount"]=tmpfs
00:24:59.800     17:08:52	-- common/autotest_common.sh@362 -- # fss["$mount"]=tmpfs
00:24:59.800     17:08:52	-- common/autotest_common.sh@363 -- # avails["$mount"]=1248956416
00:24:59.800     17:08:52	-- common/autotest_common.sh@363 -- # sizes["$mount"]=1253683200
00:24:59.800     17:08:52	-- common/autotest_common.sh@364 -- # uses["$mount"]=4726784
00:24:59.800     17:08:52	-- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount
00:24:59.800     17:08:52	-- common/autotest_common.sh@362 -- # mounts["$mount"]=/dev/vda1
00:24:59.800     17:08:52	-- common/autotest_common.sh@362 -- # fss["$mount"]=ext4
00:24:59.800     17:08:52	-- common/autotest_common.sh@363 -- # avails["$mount"]=9433849856
00:24:59.800     17:08:52	-- common/autotest_common.sh@363 -- # sizes["$mount"]=20616794112
00:24:59.800     17:08:52	-- common/autotest_common.sh@364 -- # uses["$mount"]=11166167040
00:24:59.800     17:08:52	-- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount
00:24:59.800     17:08:52	-- common/autotest_common.sh@362 -- # mounts["$mount"]=tmpfs
00:24:59.800     17:08:52	-- common/autotest_common.sh@362 -- # fss["$mount"]=tmpfs
00:24:59.800     17:08:52	-- common/autotest_common.sh@363 -- # avails["$mount"]=6267142144
00:24:59.800     17:08:52	-- common/autotest_common.sh@363 -- # sizes["$mount"]=6268399616
00:24:59.800     17:08:52	-- common/autotest_common.sh@364 -- # uses["$mount"]=1257472
00:24:59.800     17:08:52	-- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount
00:24:59.800     17:08:52	-- common/autotest_common.sh@362 -- # mounts["$mount"]=tmpfs
00:24:59.800     17:08:52	-- common/autotest_common.sh@362 -- # fss["$mount"]=tmpfs
00:24:59.800     17:08:52	-- common/autotest_common.sh@363 -- # avails["$mount"]=5242880
00:24:59.800     17:08:52	-- common/autotest_common.sh@363 -- # sizes["$mount"]=5242880
00:24:59.800     17:08:52	-- common/autotest_common.sh@364 -- # uses["$mount"]=0
00:24:59.800     17:08:52	-- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount
00:24:59.800     17:08:52	-- common/autotest_common.sh@362 -- # mounts["$mount"]=/dev/vda15
00:24:59.800     17:08:52	-- common/autotest_common.sh@362 -- # fss["$mount"]=vfat
00:24:59.800     17:08:52	-- common/autotest_common.sh@363 -- # avails["$mount"]=103061504
00:24:59.800     17:08:52	-- common/autotest_common.sh@363 -- # sizes["$mount"]=109395968
00:24:59.800     17:08:52	-- common/autotest_common.sh@364 -- # uses["$mount"]=6334464
00:24:59.800     17:08:52	-- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount
00:24:59.800     17:08:52	-- common/autotest_common.sh@362 -- # mounts["$mount"]=tmpfs
00:24:59.800     17:08:52	-- common/autotest_common.sh@362 -- # fss["$mount"]=tmpfs
00:24:59.800     17:08:52	-- common/autotest_common.sh@363 -- # avails["$mount"]=1253675008
00:24:59.800     17:08:52	-- common/autotest_common.sh@363 -- # sizes["$mount"]=1253679104
00:24:59.800     17:08:52	-- common/autotest_common.sh@364 -- # uses["$mount"]=4096
00:24:59.800     17:08:52	-- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount
00:24:59.800     17:08:52	-- common/autotest_common.sh@362 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/ubuntu22-vg-autotest/ubuntu2204-libvirt/output
00:24:59.800     17:08:52	-- common/autotest_common.sh@362 -- # fss["$mount"]=fuse.sshfs
00:24:59.800     17:08:52	-- common/autotest_common.sh@363 -- # avails["$mount"]=92755410944
00:24:59.800     17:08:52	-- common/autotest_common.sh@363 -- # sizes["$mount"]=105088212992
00:24:59.800     17:08:52	-- common/autotest_common.sh@364 -- # uses["$mount"]=6947368960
00:24:59.800     17:08:52	-- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount
00:24:59.800     17:08:52	-- common/autotest_common.sh@367 -- # printf '* Looking for test storage...\n'
00:24:59.800  * Looking for test storage...
00:24:59.800     17:08:52	-- common/autotest_common.sh@369 -- # local target_space new_size
00:24:59.800     17:08:52	-- common/autotest_common.sh@370 -- # for target_dir in "${storage_candidates[@]}"
00:24:59.800      17:08:52	-- common/autotest_common.sh@373 -- # df /home/vagrant/spdk_repo/spdk/test/interrupt
00:24:59.800      17:08:52	-- common/autotest_common.sh@373 -- # awk '$1 !~ /Filesystem/{print $6}'
00:24:59.800     17:08:52	-- common/autotest_common.sh@373 -- # mount=/
00:24:59.800     17:08:52	-- common/autotest_common.sh@375 -- # target_space=9433849856
00:24:59.800     17:08:52	-- common/autotest_common.sh@376 -- # (( target_space == 0 || target_space < requested_size ))
00:24:59.800     17:08:52	-- common/autotest_common.sh@379 -- # (( target_space >= requested_size ))
00:24:59.800     17:08:52	-- common/autotest_common.sh@381 -- # [[ ext4 == tmpfs ]]
00:24:59.800     17:08:52	-- common/autotest_common.sh@381 -- # [[ ext4 == ramfs ]]
00:24:59.800     17:08:52	-- common/autotest_common.sh@381 -- # [[ / == / ]]
00:24:59.800     17:08:52	-- common/autotest_common.sh@382 -- # new_size=13380759552
00:24:59.800     17:08:52	-- common/autotest_common.sh@383 -- # (( new_size * 100 / sizes[/] > 95 ))
00:24:59.800     17:08:52	-- common/autotest_common.sh@388 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/interrupt
00:24:59.800     17:08:52	-- common/autotest_common.sh@388 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/interrupt
00:24:59.800     17:08:52	-- common/autotest_common.sh@389 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/interrupt
00:24:59.800  * Found test storage at /home/vagrant/spdk_repo/spdk/test/interrupt
00:24:59.800     17:08:52	-- common/autotest_common.sh@390 -- # return 0
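set_test_storage walks the df -T output captured above and keeps the first candidate directory whose filesystem can absorb the request: new_size is the mount's current use plus the requested bytes, accepted while it stays at or under 95% of the filesystem. The traced numbers check out:

  requested=2214592512      # the 2 GiB request plus the 64 MiB margin added at @359
  use=11166167040           # bytes already used on /dev/vda1, mounted on /
  size=20616794112          # total size of /
  new_size=$(( use + requested ))      # 13380759552, exactly as traced at @382
  echo $(( new_size * 100 / size ))    # 64 -> under the 95% cutoff, so / is kept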
00:24:59.800     17:08:52	-- common/autotest_common.sh@1677 -- # set -o errtrace
00:24:59.800     17:08:52	-- common/autotest_common.sh@1678 -- # shopt -s extdebug
00:24:59.800     17:08:52	-- common/autotest_common.sh@1679 -- # trap 'trap - ERR; print_backtrace >&2' ERR
00:24:59.800     17:08:52	-- common/autotest_common.sh@1681 -- # PS4=' \t	-- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ '
00:24:59.800     17:08:52	-- common/autotest_common.sh@1682 -- # true
00:24:59.800     17:08:52	-- common/autotest_common.sh@1684 -- # xtrace_fd
00:24:59.800     17:08:52	-- common/autotest_common.sh@25 -- # [[ -n 13 ]]
00:24:59.800     17:08:52	-- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]]
00:24:59.800     17:08:52	-- common/autotest_common.sh@27 -- # exec
00:24:59.800     17:08:52	-- common/autotest_common.sh@29 -- # exec
00:24:59.800     17:08:52	-- common/autotest_common.sh@31 -- # xtrace_restore
00:24:59.800     17:08:52	-- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]'
00:24:59.800     17:08:52	-- common/autotest_common.sh@17 -- # (( 0 == 0 ))
00:24:59.800     17:08:52	-- common/autotest_common.sh@18 -- # set -x
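The xtrace_fd/xtrace_restore dance above keeps 'set -x' output on a dedicated file descriptor (13 here) so the trace can be captured separately from the test's stdout and stderr. A minimal sketch of the underlying bash feature (the target file is hypothetical):

  exec 13>/tmp/xtrace.log   # open a dedicated fd for trace output
  BASH_XTRACEFD=13          # bash >= 4.1: route set -x output to that fd
  set -x
  echo "this command is traced into /tmp/xtrace.log"
  set +x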
00:24:59.800     17:08:52	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:24:59.800      17:08:52	-- common/autotest_common.sh@1690 -- # lcov --version
00:24:59.800      17:08:52	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:24:59.800     17:08:52	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:24:59.800     17:08:52	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:24:59.800     17:08:52	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:24:59.800     17:08:52	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:24:59.800     17:08:52	-- scripts/common.sh@335 -- # IFS=.-:
00:24:59.800     17:08:52	-- scripts/common.sh@335 -- # read -ra ver1
00:24:59.800     17:08:52	-- scripts/common.sh@336 -- # IFS=.-:
00:24:59.800     17:08:52	-- scripts/common.sh@336 -- # read -ra ver2
00:24:59.800     17:08:52	-- scripts/common.sh@337 -- # local 'op=<'
00:24:59.800     17:08:52	-- scripts/common.sh@339 -- # ver1_l=2
00:24:59.801     17:08:52	-- scripts/common.sh@340 -- # ver2_l=1
00:24:59.801     17:08:52	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:24:59.801     17:08:52	-- scripts/common.sh@343 -- # case "$op" in
00:24:59.801     17:08:52	-- scripts/common.sh@344 -- # : 1
00:24:59.801     17:08:52	-- scripts/common.sh@363 -- # (( v = 0 ))
00:24:59.801     17:08:52	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:24:59.801      17:08:52	-- scripts/common.sh@364 -- # decimal 1
00:24:59.801      17:08:52	-- scripts/common.sh@352 -- # local d=1
00:24:59.801      17:08:52	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:24:59.801      17:08:52	-- scripts/common.sh@354 -- # echo 1
00:24:59.801     17:08:52	-- scripts/common.sh@364 -- # ver1[v]=1
00:24:59.801      17:08:52	-- scripts/common.sh@365 -- # decimal 2
00:24:59.801      17:08:52	-- scripts/common.sh@352 -- # local d=2
00:24:59.801      17:08:52	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:24:59.801      17:08:52	-- scripts/common.sh@354 -- # echo 2
00:24:59.801     17:08:52	-- scripts/common.sh@365 -- # ver2[v]=2
00:24:59.801     17:08:52	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:24:59.801     17:08:52	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:24:59.801     17:08:52	-- scripts/common.sh@367 -- # return 0
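The 'lt 1.15 2' call resolves through cmp_versions: both strings are split on '.', '-', and ':' into arrays, then compared numerically field by field, so 1.15 < 2 because the first fields already differ. A compact re-derivation of the same idea (assumes purely numeric components):

  version_lt() {
      local IFS=.-: a b i
      read -ra a <<<"$1"
      read -ra b <<<"$2"
      for (( i = 0; i < ${#a[@]} || i < ${#b[@]}; i++ )); do
          (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # first lower field wins
          (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
      done
      return 1   # equal versions are not "less than"
  }
  version_lt 1.15 2 && echo "lcov 1.15 predates 2.x"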
00:24:59.801     17:08:52	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:24:59.801     17:08:52	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:24:59.801  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:24:59.801  		--rc genhtml_branch_coverage=1
00:24:59.801  		--rc genhtml_function_coverage=1
00:24:59.801  		--rc genhtml_legend=1
00:24:59.801  		--rc geninfo_all_blocks=1
00:24:59.801  		--rc geninfo_unexecuted_blocks=1
00:24:59.801  		
00:24:59.801  		'
00:24:59.801     17:08:52	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:24:59.801  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:24:59.801  		--rc genhtml_branch_coverage=1
00:24:59.801  		--rc genhtml_function_coverage=1
00:24:59.801  		--rc genhtml_legend=1
00:24:59.801  		--rc geninfo_all_blocks=1
00:24:59.801  		--rc geninfo_unexecuted_blocks=1
00:24:59.801  		
00:24:59.801  		'
00:24:59.801     17:08:52	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:24:59.801  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:24:59.801  		--rc genhtml_branch_coverage=1
00:24:59.801  		--rc genhtml_function_coverage=1
00:24:59.801  		--rc genhtml_legend=1
00:24:59.801  		--rc geninfo_all_blocks=1
00:24:59.801  		--rc geninfo_unexecuted_blocks=1
00:24:59.801  		
00:24:59.801  		'
00:24:59.801     17:08:52	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:24:59.801  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:24:59.801  		--rc genhtml_branch_coverage=1
00:24:59.801  		--rc genhtml_function_coverage=1
00:24:59.801  		--rc genhtml_legend=1
00:24:59.801  		--rc geninfo_all_blocks=1
00:24:59.801  		--rc geninfo_unexecuted_blocks=1
00:24:59.801  		
00:24:59.801  		'
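Because the installed lcov 1.15 predates 2.x, the branch and function coverage switches have to be passed as --rc settings, which is what the LCOV_OPTS and LCOV exports above assemble. Roughly how the wrapper is consumed later in a coverage run (a sketch; the paths are hypothetical):

  $LCOV --capture --directory . --output-file coverage.info
  genhtml coverage.info --output-directory cov_html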
00:24:59.801    17:08:52	-- interrupt/interrupt_common.sh@9 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:24:59.801    17:08:52	-- interrupt/interrupt_common.sh@11 -- # r0_mask=0x1
00:24:59.801    17:08:52	-- interrupt/interrupt_common.sh@12 -- # r1_mask=0x2
00:24:59.801    17:08:52	-- interrupt/interrupt_common.sh@13 -- # r2_mask=0x4
00:24:59.801    17:08:52	-- interrupt/interrupt_common.sh@15 -- # cpu_server_mask=0x07
00:24:59.801    17:08:52	-- interrupt/interrupt_common.sh@16 -- # rpc_server_addr=/var/tmp/spdk.sock
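The masks above are one bit per core: reactor 0 on 0x1 (core 0), reactor 1 on 0x2 (core 1), reactor 2 on 0x4 (core 2), and the target runs with their union, which is why it is launched with -m 0x07 below:

  printf '0x%02x\n' $(( 0x1 | 0x2 | 0x4 ))   # 0x07, the 3-core cpu_server_mask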
00:24:59.801   17:08:52	-- interrupt/reactor_set_interrupt.sh@11 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/examples/interrupt_tgt
00:24:59.801   17:08:52	-- interrupt/reactor_set_interrupt.sh@11 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/examples/interrupt_tgt
00:24:59.801   17:08:52	-- interrupt/reactor_set_interrupt.sh@86 -- # start_intr_tgt
00:24:59.801   17:08:52	-- interrupt/interrupt_common.sh@23 -- # local rpc_addr=/var/tmp/spdk.sock
00:24:59.801   17:08:52	-- interrupt/interrupt_common.sh@24 -- # local cpu_mask=0x07
00:24:59.801   17:08:52	-- interrupt/interrupt_common.sh@27 -- # intr_tgt_pid=142976
00:24:59.801   17:08:52	-- interrupt/interrupt_common.sh@28 -- # trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT
00:24:59.801   17:08:52	-- interrupt/interrupt_common.sh@29 -- # waitforlisten 142976 /var/tmp/spdk.sock
00:24:59.801   17:08:52	-- common/autotest_common.sh@829 -- # '[' -z 142976 ']'
00:24:59.801   17:08:52	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:24:59.801   17:08:52	-- common/autotest_common.sh@834 -- # local max_retries=100
00:24:59.801  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:24:59.801   17:08:52	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:24:59.801   17:08:52	-- common/autotest_common.sh@838 -- # xtrace_disable
00:24:59.801   17:08:52	-- common/autotest_common.sh@10 -- # set +x
00:24:59.801   17:08:52	-- interrupt/interrupt_common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/examples/interrupt_tgt -m 0x07 -r /var/tmp/spdk.sock -E -g
00:24:59.801  [2024-11-19 17:08:52.643632] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:24:59.801  [2024-11-19 17:08:52.644260] [ DPDK EAL parameters: interrupt_tgt --no-shconf -c 0x07 --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid142976 ]
00:25:00.059  [2024-11-19 17:08:52.835333] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3
00:25:00.059  [2024-11-19 17:08:52.910058] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:25:00.059  [2024-11-19 17:08:52.910118] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:25:00.059  [2024-11-19 17:08:52.910108] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:25:00.316  [2024-11-19 17:08:52.987734] thread.c:2087:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode.
00:25:01.252   17:08:53	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:25:01.252   17:08:53	-- common/autotest_common.sh@862 -- # return 0
00:25:01.252   17:08:53	-- interrupt/reactor_set_interrupt.sh@87 -- # setup_bdev_mem
00:25:01.252   17:08:53	-- interrupt/interrupt_common.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:25:01.510  Malloc0
00:25:01.510  Malloc1
00:25:01.510  Malloc2
00:25:01.510   17:08:54	-- interrupt/reactor_set_interrupt.sh@88 -- # setup_bdev_aio
00:25:01.510    17:08:54	-- interrupt/interrupt_common.sh@98 -- # uname -s
00:25:01.510   17:08:54	-- interrupt/interrupt_common.sh@98 -- # [[ Linux != \F\r\e\e\B\S\D ]]
00:25:01.510   17:08:54	-- interrupt/interrupt_common.sh@99 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/interrupt/aiofile bs=2048 count=5000
00:25:01.510  5000+0 records in
00:25:01.510  5000+0 records out
00:25:01.510  10240000 bytes (10 MB, 9.8 MiB) copied, 0.0277736 s, 369 MB/s
00:25:01.510   17:08:54	-- interrupt/interrupt_common.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile AIO0 2048
00:25:01.767  AIO0
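setup_bdev_aio backs an SPDK AIO bdev with a plain file: dd writes 5000 blocks of 2048 bytes (10,240,000 bytes, the ~9.8 MiB in the dd summary above), then the RPC registers that file as AIO0 with a 2048-byte block size. As standalone commands, with the paths from the log:

  dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/interrupt/aiofile bs=2048 count=5000
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create \
      /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile AIO0 2048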
00:25:01.767   17:08:54	-- interrupt/reactor_set_interrupt.sh@90 -- # reactor_set_mode_without_threads 142976
00:25:01.767   17:08:54	-- interrupt/reactor_set_interrupt.sh@76 -- # reactor_set_intr_mode 142976 without_thd
00:25:01.767   17:08:54	-- interrupt/reactor_set_interrupt.sh@14 -- # local spdk_pid=142976
00:25:01.767   17:08:54	-- interrupt/reactor_set_interrupt.sh@15 -- # local without_thd=without_thd
00:25:01.767   17:08:54	-- interrupt/reactor_set_interrupt.sh@17 -- # thd0_ids=($(reactor_get_thread_ids $r0_mask))
00:25:01.767    17:08:54	-- interrupt/reactor_set_interrupt.sh@17 -- # reactor_get_thread_ids 0x1
00:25:01.767    17:08:54	-- interrupt/interrupt_common.sh@78 -- # local reactor_cpumask=0x1
00:25:01.767    17:08:54	-- interrupt/interrupt_common.sh@79 -- # local grep_str
00:25:01.767    17:08:54	-- interrupt/interrupt_common.sh@81 -- # reactor_cpumask=1
00:25:01.767    17:08:54	-- interrupt/interrupt_common.sh@82 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id'
00:25:01.768     17:08:54	-- interrupt/interrupt_common.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_stats
00:25:01.768     17:08:54	-- interrupt/interrupt_common.sh@85 -- # jq --arg reactor_cpumask 1 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id'
00:25:02.333    17:08:54	-- interrupt/interrupt_common.sh@85 -- # echo 1
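reactor_get_thread_ids maps a reactor's cpumask to the ids of the spdk threads pinned to it by filtering the thread_get_stats RPC output with jq; note the hex mask 0x1 is first reduced to the bare '1' that the JSON cpumask field holds. Runnable on its own:

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_stats \
      | jq --arg reactor_cpumask 1 \
           '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id'
  # prints 1 here: only app_thread is running on reactor 0 at this point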
00:25:02.333   17:08:54	-- interrupt/reactor_set_interrupt.sh@18 -- # thd2_ids=($(reactor_get_thread_ids $r2_mask))
00:25:02.333    17:08:54	-- interrupt/reactor_set_interrupt.sh@18 -- # reactor_get_thread_ids 0x4
00:25:02.333    17:08:54	-- interrupt/interrupt_common.sh@78 -- # local reactor_cpumask=0x4
00:25:02.333    17:08:54	-- interrupt/interrupt_common.sh@79 -- # local grep_str
00:25:02.333    17:08:54	-- interrupt/interrupt_common.sh@81 -- # reactor_cpumask=4
00:25:02.333    17:08:54	-- interrupt/interrupt_common.sh@82 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id'
00:25:02.333     17:08:54	-- interrupt/interrupt_common.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_stats
00:25:02.333     17:08:54	-- interrupt/interrupt_common.sh@85 -- # jq --arg reactor_cpumask 4 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id'
00:25:02.591    17:08:55	-- interrupt/interrupt_common.sh@85 -- # echo ''
00:25:02.591   17:08:55	-- interrupt/reactor_set_interrupt.sh@21 -- # [[ 1 -eq 0 ]]
00:25:02.591  spdk_thread ids are 1 on reactor0.
00:25:02.591   17:08:55	-- interrupt/reactor_set_interrupt.sh@25 -- # echo 'spdk_thread ids are 1 on reactor0.'
00:25:02.591   17:08:55	-- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2}
00:25:02.591   17:08:55	-- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 142976 0
00:25:02.591   17:08:55	-- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 142976 0 idle
00:25:02.591   17:08:55	-- interrupt/interrupt_common.sh@33 -- # local pid=142976
00:25:02.591   17:08:55	-- interrupt/interrupt_common.sh@34 -- # local idx=0
00:25:02.591   17:08:55	-- interrupt/interrupt_common.sh@35 -- # local state=idle
00:25:02.591   17:08:55	-- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]]
00:25:02.591   17:08:55	-- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]]
00:25:02.591   17:08:55	-- interrupt/interrupt_common.sh@41 -- # hash top
00:25:02.591   17:08:55	-- interrupt/interrupt_common.sh@46 -- # (( j = 10 ))
00:25:02.591   17:08:55	-- interrupt/interrupt_common.sh@46 -- # (( j != 0 ))
00:25:02.591    17:08:55	-- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 142976 -w 256
00:25:02.591    17:08:55	-- interrupt/interrupt_common.sh@47 -- # grep reactor_0
00:25:02.591   17:08:55	-- interrupt/interrupt_common.sh@47 -- # top_reactor=' 142976 root      20   0   20.1t  57956  25964 S   0.0   0.5   0:00.34 reactor_0'
00:25:02.591    17:08:55	-- interrupt/interrupt_common.sh@48 -- # echo 142976 root 20 0 20.1t 57956 25964 S 0.0 0.5 0:00.34 reactor_0
00:25:02.591    17:08:55	-- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g'
00:25:02.591    17:08:55	-- interrupt/interrupt_common.sh@48 -- # awk '{print $9}'
00:25:02.591   17:08:55	-- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0
00:25:02.591   17:08:55	-- interrupt/interrupt_common.sh@49 -- # cpu_rate=0
00:25:02.591   17:08:55	-- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]]
00:25:02.591   17:08:55	-- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]]
00:25:02.591   17:08:55	-- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]]
00:25:02.591   17:08:55	-- interrupt/interrupt_common.sh@56 -- # return 0
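reactor_is_busy_or_idle samples CPU usage with one batch iteration of top in thread mode, picks the reactor's row, and reads field 9 (%CPU); 'idle' passes while the integer rate stays at or below 30, 'busy' needs 70 or more, and the j=10 counter allows up to ten re-samples. The core of the check, as a sketch:

  reactor_cpu_rate() {
      local pid=$1 name=$2
      # -b batch mode, -H show threads, -n 1 one sample, -p limit to the target pid
      top -bHn 1 -p "$pid" -w 256 | grep "$name" | awk '{print $9}'
  }
  rate=$(reactor_cpu_rate 142976 reactor_0)   # "0.0" in the idle phase above
  echo "${rate%%.*}"                          # integer part fed to the -gt/-lt tests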
00:25:02.591   17:08:55	-- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2}
00:25:02.591   17:08:55	-- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 142976 1
00:25:02.591   17:08:55	-- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 142976 1 idle
00:25:02.591   17:08:55	-- interrupt/interrupt_common.sh@33 -- # local pid=142976
00:25:02.591   17:08:55	-- interrupt/interrupt_common.sh@34 -- # local idx=1
00:25:02.591   17:08:55	-- interrupt/interrupt_common.sh@35 -- # local state=idle
00:25:02.591   17:08:55	-- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]]
00:25:02.591   17:08:55	-- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]]
00:25:02.591   17:08:55	-- interrupt/interrupt_common.sh@41 -- # hash top
00:25:02.591   17:08:55	-- interrupt/interrupt_common.sh@46 -- # (( j = 10 ))
00:25:02.591   17:08:55	-- interrupt/interrupt_common.sh@46 -- # (( j != 0 ))
00:25:02.591    17:08:55	-- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 142976 -w 256
00:25:02.591    17:08:55	-- interrupt/interrupt_common.sh@47 -- # grep reactor_1
00:25:02.850   17:08:55	-- interrupt/interrupt_common.sh@47 -- # top_reactor=' 142979 root      20   0   20.1t  57956  25964 S   0.0   0.5   0:00.00 reactor_1'
00:25:02.850    17:08:55	-- interrupt/interrupt_common.sh@48 -- # echo 142979 root 20 0 20.1t 57956 25964 S 0.0 0.5 0:00.00 reactor_1
00:25:02.850    17:08:55	-- interrupt/interrupt_common.sh@48 -- # awk '{print $9}'
00:25:02.850    17:08:55	-- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g'
00:25:02.850   17:08:55	-- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0
00:25:02.850   17:08:55	-- interrupt/interrupt_common.sh@49 -- # cpu_rate=0
00:25:02.850   17:08:55	-- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]]
00:25:02.850   17:08:55	-- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]]
00:25:02.850   17:08:55	-- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]]
00:25:02.850   17:08:55	-- interrupt/interrupt_common.sh@56 -- # return 0
00:25:02.850   17:08:55	-- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2}
00:25:02.850   17:08:55	-- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 142976 2
00:25:02.850   17:08:55	-- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 142976 2 idle
00:25:02.850   17:08:55	-- interrupt/interrupt_common.sh@33 -- # local pid=142976
00:25:02.850   17:08:55	-- interrupt/interrupt_common.sh@34 -- # local idx=2
00:25:02.850   17:08:55	-- interrupt/interrupt_common.sh@35 -- # local state=idle
00:25:02.850   17:08:55	-- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]]
00:25:02.850   17:08:55	-- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]]
00:25:02.850   17:08:55	-- interrupt/interrupt_common.sh@41 -- # hash top
00:25:02.850   17:08:55	-- interrupt/interrupt_common.sh@46 -- # (( j = 10 ))
00:25:02.850   17:08:55	-- interrupt/interrupt_common.sh@46 -- # (( j != 0 ))
00:25:02.850    17:08:55	-- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 142976 -w 256
00:25:02.850    17:08:55	-- interrupt/interrupt_common.sh@47 -- # grep reactor_2
00:25:03.110   17:08:55	-- interrupt/interrupt_common.sh@47 -- # top_reactor=' 142980 root      20   0   20.1t  57956  25964 S   0.0   0.5   0:00.00 reactor_2'
00:25:03.110    17:08:55	-- interrupt/interrupt_common.sh@48 -- # echo 142980 root 20 0 20.1t 57956 25964 S 0.0 0.5 0:00.00 reactor_2
00:25:03.110    17:08:55	-- interrupt/interrupt_common.sh@48 -- # awk '{print $9}'
00:25:03.110    17:08:55	-- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g'
00:25:03.110   17:08:55	-- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0
00:25:03.110   17:08:55	-- interrupt/interrupt_common.sh@49 -- # cpu_rate=0
00:25:03.110   17:08:55	-- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]]
00:25:03.110   17:08:55	-- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]]
00:25:03.110   17:08:55	-- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]]
00:25:03.110   17:08:55	-- interrupt/interrupt_common.sh@56 -- # return 0
00:25:03.110   17:08:55	-- interrupt/reactor_set_interrupt.sh@33 -- # '[' without_thdx '!=' x ']'
00:25:03.110   17:08:55	-- interrupt/reactor_set_interrupt.sh@35 -- # for i in "${thd0_ids[@]}"
00:25:03.110   17:08:55	-- interrupt/reactor_set_interrupt.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_set_cpumask -i 1 -m 0x2
00:25:03.369  [2024-11-19 17:08:55.972149] thread.c:2087:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode.
00:25:03.370   17:08:55	-- interrupt/reactor_set_interrupt.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 -d
00:25:03.370  [2024-11-19 17:08:56.172253] interrupt_tgt.c:  61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 0.
00:25:03.370  [2024-11-19 17:08:56.172924] interrupt_tgt.c:  32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch
00:25:03.370   17:08:56	-- interrupt/reactor_set_interrupt.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 -d
00:25:03.628  [2024-11-19 17:08:56.424071] interrupt_tgt.c:  61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 2.
00:25:03.628  [2024-11-19 17:08:56.424605] interrupt_tgt.c:  32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch
00:25:03.628   17:08:56	-- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2
00:25:03.628   17:08:56	-- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 142976 0
00:25:03.628   17:08:56	-- interrupt/interrupt_common.sh@70 -- # reactor_is_busy_or_idle 142976 0 busy
00:25:03.628   17:08:56	-- interrupt/interrupt_common.sh@33 -- # local pid=142976
00:25:03.628   17:08:56	-- interrupt/interrupt_common.sh@34 -- # local idx=0
00:25:03.628   17:08:56	-- interrupt/interrupt_common.sh@35 -- # local state=busy
00:25:03.628   17:08:56	-- interrupt/interrupt_common.sh@37 -- # [[ busy != \b\u\s\y ]]
00:25:03.628   17:08:56	-- interrupt/interrupt_common.sh@41 -- # hash top
00:25:03.628   17:08:56	-- interrupt/interrupt_common.sh@46 -- # (( j = 10 ))
00:25:03.628   17:08:56	-- interrupt/interrupt_common.sh@46 -- # (( j != 0 ))
00:25:03.629    17:08:56	-- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 142976 -w 256
00:25:03.629    17:08:56	-- interrupt/interrupt_common.sh@47 -- # grep reactor_0
00:25:03.888   17:08:56	-- interrupt/interrupt_common.sh@47 -- # top_reactor=' 142976 root      20   0   20.1t  58116  25964 R  93.8   0.5   0:00.78 reactor_0'
00:25:03.888    17:08:56	-- interrupt/interrupt_common.sh@48 -- # echo 142976 root 20 0 20.1t 58116 25964 R 93.8 0.5 0:00.78 reactor_0
00:25:03.888    17:08:56	-- interrupt/interrupt_common.sh@48 -- # awk '{print $9}'
00:25:03.888    17:08:56	-- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g'
00:25:03.888   17:08:56	-- interrupt/interrupt_common.sh@48 -- # cpu_rate=93.8
00:25:03.888   17:08:56	-- interrupt/interrupt_common.sh@49 -- # cpu_rate=93
00:25:03.888   17:08:56	-- interrupt/interrupt_common.sh@51 -- # [[ busy = \b\u\s\y ]]
00:25:03.888   17:08:56	-- interrupt/interrupt_common.sh@51 -- # [[ 93 -lt 70 ]]
00:25:03.888   17:08:56	-- interrupt/interrupt_common.sh@53 -- # [[ busy = \i\d\l\e ]]
00:25:03.888   17:08:56	-- interrupt/interrupt_common.sh@56 -- # return 0
00:25:03.888   17:08:56	-- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2
00:25:03.888   17:08:56	-- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 142976 2
00:25:03.888   17:08:56	-- interrupt/interrupt_common.sh@70 -- # reactor_is_busy_or_idle 142976 2 busy
00:25:03.888   17:08:56	-- interrupt/interrupt_common.sh@33 -- # local pid=142976
00:25:03.888   17:08:56	-- interrupt/interrupt_common.sh@34 -- # local idx=2
00:25:03.888   17:08:56	-- interrupt/interrupt_common.sh@35 -- # local state=busy
00:25:03.888   17:08:56	-- interrupt/interrupt_common.sh@37 -- # [[ busy != \b\u\s\y ]]
00:25:03.888   17:08:56	-- interrupt/interrupt_common.sh@41 -- # hash top
00:25:03.888   17:08:56	-- interrupt/interrupt_common.sh@46 -- # (( j = 10 ))
00:25:03.888   17:08:56	-- interrupt/interrupt_common.sh@46 -- # (( j != 0 ))
00:25:03.888    17:08:56	-- interrupt/interrupt_common.sh@47 -- # grep reactor_2
00:25:03.888    17:08:56	-- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 142976 -w 256
00:25:04.146   17:08:56	-- interrupt/interrupt_common.sh@47 -- # top_reactor=' 142980 root      20   0   20.1t  58116  25964 R  99.9   0.5   0:00.35 reactor_2'
00:25:04.146    17:08:56	-- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g'
00:25:04.146    17:08:56	-- interrupt/interrupt_common.sh@48 -- # echo 142980 root 20 0 20.1t 58116 25964 R 99.9 0.5 0:00.35 reactor_2
00:25:04.146    17:08:56	-- interrupt/interrupt_common.sh@48 -- # awk '{print $9}'
00:25:04.146   17:08:56	-- interrupt/interrupt_common.sh@48 -- # cpu_rate=99.9
00:25:04.146   17:08:56	-- interrupt/interrupt_common.sh@49 -- # cpu_rate=99
00:25:04.146   17:08:56	-- interrupt/interrupt_common.sh@51 -- # [[ busy = \b\u\s\y ]]
00:25:04.146   17:08:56	-- interrupt/interrupt_common.sh@51 -- # [[ 99 -lt 70 ]]
00:25:04.146   17:08:56	-- interrupt/interrupt_common.sh@53 -- # [[ busy = \i\d\l\e ]]
00:25:04.146   17:08:56	-- interrupt/interrupt_common.sh@56 -- # return 0
00:25:04.146   17:08:56	-- interrupt/reactor_set_interrupt.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2
00:25:04.405  [2024-11-19 17:08:57.048078] interrupt_tgt.c:  61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 2.
00:25:04.405  [2024-11-19 17:08:57.048563] interrupt_tgt.c:  32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch
00:25:04.405   17:08:57	-- interrupt/reactor_set_interrupt.sh@52 -- # '[' without_thdx '!=' x ']'
00:25:04.405   17:08:57	-- interrupt/reactor_set_interrupt.sh@59 -- # reactor_is_idle 142976 2
00:25:04.405   17:08:57	-- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 142976 2 idle
00:25:04.405   17:08:57	-- interrupt/interrupt_common.sh@33 -- # local pid=142976
00:25:04.405   17:08:57	-- interrupt/interrupt_common.sh@34 -- # local idx=2
00:25:04.405   17:08:57	-- interrupt/interrupt_common.sh@35 -- # local state=idle
00:25:04.405   17:08:57	-- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]]
00:25:04.405   17:08:57	-- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]]
00:25:04.405   17:08:57	-- interrupt/interrupt_common.sh@41 -- # hash top
00:25:04.405   17:08:57	-- interrupt/interrupt_common.sh@46 -- # (( j = 10 ))
00:25:04.405   17:08:57	-- interrupt/interrupt_common.sh@46 -- # (( j != 0 ))
00:25:04.405    17:08:57	-- interrupt/interrupt_common.sh@47 -- # grep reactor_2
00:25:04.405    17:08:57	-- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 142976 -w 256
00:25:04.405   17:08:57	-- interrupt/interrupt_common.sh@47 -- # top_reactor=' 142980 root      20   0   20.1t  58164  25964 S   0.0   0.5   0:00.62 reactor_2'
00:25:04.405    17:08:57	-- interrupt/interrupt_common.sh@48 -- # echo 142980 root 20 0 20.1t 58164 25964 S 0.0 0.5 0:00.62 reactor_2
00:25:04.405    17:08:57	-- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g'
00:25:04.405    17:08:57	-- interrupt/interrupt_common.sh@48 -- # awk '{print $9}'
00:25:04.405   17:08:57	-- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0
00:25:04.405   17:08:57	-- interrupt/interrupt_common.sh@49 -- # cpu_rate=0
00:25:04.405   17:08:57	-- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]]
00:25:04.405   17:08:57	-- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]]
00:25:04.405   17:08:57	-- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]]
00:25:04.405   17:08:57	-- interrupt/interrupt_common.sh@56 -- # return 0
00:25:04.405   17:08:57	-- interrupt/reactor_set_interrupt.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0
00:25:04.663  [2024-11-19 17:08:57.492027] interrupt_tgt.c:  61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 0.
00:25:04.663  [2024-11-19 17:08:57.492869] interrupt_tgt.c:  32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch
00:25:04.663   17:08:57	-- interrupt/reactor_set_interrupt.sh@63 -- # '[' without_thdx '!=' x ']'
00:25:04.663   17:08:57	-- interrupt/reactor_set_interrupt.sh@65 -- # for i in "${thd0_ids[@]}"
00:25:04.663   17:08:57	-- interrupt/reactor_set_interrupt.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_set_cpumask -i 1 -m 0x1
00:25:04.922  [2024-11-19 17:08:57.760538] thread.c:2087:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode.
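This is the without_thd branch of the teardown: once reactor 0 is back in interrupt mode, each thread id recorded earlier in thd0_ids is re-pinned to cpumask 0x1 (core 0) with the thread_set_cpumask RPC. Usage as traced — the -i value is a thread id previously reported by thread_get_stats:

    # Pin spdk thread 1 back onto core 0 (cpumask 0x1), as in the trace.
    scripts/rpc.py thread_set_cpumask -i 1 -m 0x1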
00:25:05.182   17:08:57	-- interrupt/reactor_set_interrupt.sh@70 -- # reactor_is_idle 142976 0
00:25:05.182   17:08:57	-- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 142976 0 idle
00:25:05.182   17:08:57	-- interrupt/interrupt_common.sh@33 -- # local pid=142976
00:25:05.182   17:08:57	-- interrupt/interrupt_common.sh@34 -- # local idx=0
00:25:05.182   17:08:57	-- interrupt/interrupt_common.sh@35 -- # local state=idle
00:25:05.182   17:08:57	-- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]]
00:25:05.182   17:08:57	-- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]]
00:25:05.182   17:08:57	-- interrupt/interrupt_common.sh@41 -- # hash top
00:25:05.182   17:08:57	-- interrupt/interrupt_common.sh@46 -- # (( j = 10 ))
00:25:05.182   17:08:57	-- interrupt/interrupt_common.sh@46 -- # (( j != 0 ))
00:25:05.182    17:08:57	-- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 142976 -w 256
00:25:05.182    17:08:57	-- interrupt/interrupt_common.sh@47 -- # grep reactor_0
00:25:05.182   17:08:57	-- interrupt/interrupt_common.sh@47 -- # top_reactor=' 142976 root      20   0   20.1t  58264  25964 S   0.0   0.5   0:01.67 reactor_0'
00:25:05.182    17:08:57	-- interrupt/interrupt_common.sh@48 -- # echo 142976 root 20 0 20.1t 58264 25964 S 0.0 0.5 0:01.67 reactor_0
00:25:05.182    17:08:57	-- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g'
00:25:05.182    17:08:57	-- interrupt/interrupt_common.sh@48 -- # awk '{print $9}'
00:25:05.182   17:08:57	-- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0
00:25:05.182   17:08:57	-- interrupt/interrupt_common.sh@49 -- # cpu_rate=0
00:25:05.182   17:08:57	-- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]]
00:25:05.182   17:08:57	-- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]]
00:25:05.182   17:08:57	-- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]]
00:25:05.182   17:08:57	-- interrupt/interrupt_common.sh@56 -- # return 0
00:25:05.182   17:08:57	-- interrupt/reactor_set_interrupt.sh@72 -- # return 0
00:25:05.182   17:08:57	-- interrupt/reactor_set_interrupt.sh@77 -- # return 0
00:25:05.182   17:08:57	-- interrupt/reactor_set_interrupt.sh@92 -- # trap - SIGINT SIGTERM EXIT
00:25:05.182   17:08:57	-- interrupt/reactor_set_interrupt.sh@93 -- # killprocess 142976
00:25:05.182   17:08:57	-- common/autotest_common.sh@936 -- # '[' -z 142976 ']'
00:25:05.182   17:08:57	-- common/autotest_common.sh@940 -- # kill -0 142976
00:25:05.182    17:08:57	-- common/autotest_common.sh@941 -- # uname
00:25:05.182   17:08:57	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:25:05.182    17:08:57	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 142976
00:25:05.182   17:08:57	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:25:05.182  killing process with pid 142976
00:25:05.182   17:08:57	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:25:05.182   17:08:57	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 142976'
00:25:05.182   17:08:57	-- common/autotest_common.sh@955 -- # kill 142976
00:25:05.182   17:08:57	-- common/autotest_common.sh@960 -- # wait 142976
00:25:05.442   17:08:58	-- interrupt/reactor_set_interrupt.sh@94 -- # cleanup
00:25:05.442   17:08:58	-- interrupt/interrupt_common.sh@19 -- # rm -f /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile
00:25:05.442   17:08:58	-- interrupt/reactor_set_interrupt.sh@97 -- # start_intr_tgt
00:25:05.442   17:08:58	-- interrupt/interrupt_common.sh@23 -- # local rpc_addr=/var/tmp/spdk.sock
00:25:05.442   17:08:58	-- interrupt/interrupt_common.sh@24 -- # local cpu_mask=0x07
00:25:05.702   17:08:58	-- interrupt/interrupt_common.sh@27 -- # intr_tgt_pid=143121
00:25:05.702   17:08:58	-- interrupt/interrupt_common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/examples/interrupt_tgt -m 0x07 -r /var/tmp/spdk.sock -E -g
00:25:05.702   17:08:58	-- interrupt/interrupt_common.sh@28 -- # trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT
00:25:05.702   17:08:58	-- interrupt/interrupt_common.sh@29 -- # waitforlisten 143121 /var/tmp/spdk.sock
00:25:05.702   17:08:58	-- common/autotest_common.sh@829 -- # '[' -z 143121 ']'
00:25:05.702   17:08:58	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:25:05.702   17:08:58	-- common/autotest_common.sh@834 -- # local max_retries=100
00:25:05.702  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:25:05.702   17:08:58	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:25:05.702   17:08:58	-- common/autotest_common.sh@838 -- # xtrace_disable
00:25:05.702   17:08:58	-- common/autotest_common.sh@10 -- # set +x
00:25:05.702  [2024-11-19 17:08:58.331115] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:25:05.702  [2024-11-19 17:08:58.331298] [ DPDK EAL parameters: interrupt_tgt --no-shconf -c 0x07 --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143121 ]
00:25:05.702  [2024-11-19 17:08:58.481799] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3
00:25:05.702  [2024-11-19 17:08:58.529950] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:25:05.702  [2024-11-19 17:08:58.530496] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:25:05.702  [2024-11-19 17:08:58.530502] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:25:05.961  [2024-11-19 17:08:58.592282] thread.c:2087:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode.
00:25:06.610   17:08:59	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:25:06.610   17:08:59	-- common/autotest_common.sh@862 -- # return 0
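start_intr_tgt has relaunched the example app (build/examples/interrupt_tgt -m 0x07 -r /var/tmp/spdk.sock -E -g), this time as pid 143121, and waitforlisten polled until the RPC socket answered (locals above: rpc_addr=/var/tmp/spdk.sock, max_retries=100). Roughly, assuming any cheap RPC such as rpc_get_methods fails until the socket is up:

    # Rough shape of waitforlisten; the probe RPC choice is an assumption.
    waitforlisten_sketch() {
        local pid=$1 sock=${2:-/var/tmp/spdk.sock} i
        for (( i = 100; i > 0; i-- )); do
            kill -0 "$pid" 2>/dev/null || return 1          # target died early
            scripts/rpc.py -s "$sock" rpc_get_methods &>/dev/null && return 0
            sleep 0.1
        done
        return 1                                            # never listened
    }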
00:25:06.610   17:08:59	-- interrupt/reactor_set_interrupt.sh@98 -- # setup_bdev_mem
00:25:06.610   17:08:59	-- interrupt/interrupt_common.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:25:06.868  Malloc0
00:25:06.868  Malloc1
00:25:06.868  Malloc2
00:25:06.868   17:08:59	-- interrupt/reactor_set_interrupt.sh@99 -- # setup_bdev_aio
00:25:06.868    17:08:59	-- interrupt/interrupt_common.sh@98 -- # uname -s
00:25:06.868   17:08:59	-- interrupt/interrupt_common.sh@98 -- # [[ Linux != \F\r\e\e\B\S\D ]]
00:25:06.868   17:08:59	-- interrupt/interrupt_common.sh@99 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/interrupt/aiofile bs=2048 count=5000
00:25:06.868  5000+0 records in
00:25:06.868  5000+0 records out
00:25:06.868  10240000 bytes (10 MB, 9.8 MiB) copied, 0.0322958 s, 317 MB/s
00:25:06.868   17:08:59	-- interrupt/interrupt_common.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile AIO0 2048
00:25:07.126  AIO0
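setup_bdev_mem created the three malloc bdevs (Malloc0-2) over RPC; setup_bdev_aio then backs a file-based bdev: on anything but FreeBSD it writes a 10 MB file (5000 x 2048-byte blocks) and registers it as AIO0 with a 2048-byte block size. The two commands as traced, with paths shortened to be repo-relative:

    # Create the 10 MB backing file and register it as an AIO bdev,
    # mirroring interrupt_common.sh@99-100 above.
    dd if=/dev/zero of=test/interrupt/aiofile bs=2048 count=5000
    scripts/rpc.py bdev_aio_create test/interrupt/aiofile AIO0 2048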
00:25:07.126   17:08:59	-- interrupt/reactor_set_interrupt.sh@101 -- # reactor_set_mode_with_threads 143121
00:25:07.126   17:08:59	-- interrupt/reactor_set_interrupt.sh@81 -- # reactor_set_intr_mode 143121
00:25:07.126   17:08:59	-- interrupt/reactor_set_interrupt.sh@14 -- # local spdk_pid=143121
00:25:07.126   17:08:59	-- interrupt/reactor_set_interrupt.sh@15 -- # local without_thd=
00:25:07.126   17:08:59	-- interrupt/reactor_set_interrupt.sh@17 -- # thd0_ids=($(reactor_get_thread_ids $r0_mask))
00:25:07.126    17:08:59	-- interrupt/reactor_set_interrupt.sh@17 -- # reactor_get_thread_ids 0x1
00:25:07.126    17:08:59	-- interrupt/interrupt_common.sh@78 -- # local reactor_cpumask=0x1
00:25:07.126    17:08:59	-- interrupt/interrupt_common.sh@79 -- # local grep_str
00:25:07.126    17:08:59	-- interrupt/interrupt_common.sh@81 -- # reactor_cpumask=1
00:25:07.126    17:08:59	-- interrupt/interrupt_common.sh@82 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id'
00:25:07.126     17:08:59	-- interrupt/interrupt_common.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_stats
00:25:07.126     17:08:59	-- interrupt/interrupt_common.sh@85 -- # jq --arg reactor_cpumask 1 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id'
00:25:07.385    17:09:00	-- interrupt/interrupt_common.sh@85 -- # echo 1
00:25:07.385   17:09:00	-- interrupt/reactor_set_interrupt.sh@18 -- # thd2_ids=($(reactor_get_thread_ids $r2_mask))
00:25:07.385    17:09:00	-- interrupt/reactor_set_interrupt.sh@18 -- # reactor_get_thread_ids 0x4
00:25:07.385    17:09:00	-- interrupt/interrupt_common.sh@78 -- # local reactor_cpumask=0x4
00:25:07.385    17:09:00	-- interrupt/interrupt_common.sh@79 -- # local grep_str
00:25:07.385    17:09:00	-- interrupt/interrupt_common.sh@81 -- # reactor_cpumask=4
00:25:07.385    17:09:00	-- interrupt/interrupt_common.sh@82 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id'
00:25:07.385     17:09:00	-- interrupt/interrupt_common.sh@85 -- # jq --arg reactor_cpumask 4 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id'
00:25:07.385     17:09:00	-- interrupt/interrupt_common.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_stats
00:25:07.644    17:09:00	-- interrupt/interrupt_common.sh@85 -- # echo ''
00:25:07.644   17:09:00	-- interrupt/reactor_set_interrupt.sh@21 -- # [[ 1 -eq 0 ]]
00:25:07.644  spdk_thread ids are 1 on reactor0.
00:25:07.644   17:09:00	-- interrupt/reactor_set_interrupt.sh@25 -- # echo 'spdk_thread ids are 1 on reactor0.'
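reactor_set_mode_with_threads starts by mapping reactors to spdk_thread ids: thread_get_stats lists every thread, and jq filters on the cpumask field. The 0x prefix is dropped first (the reactor_cpumask=1 and =4 assignments above), which suggests thread_get_stats reports the mask without it; reactor 0 owns thread 1 (the app_thread), reactor 2 owns none yet, hence the empty echo. A sketch of that lookup:

    # Sketch of reactor_get_thread_ids; the 0x -> decimal step mirrors the
    # reactor_cpumask=1 / =4 assignments in the trace.
    get_thread_ids() {
        local mask=$(( $1 ))    # 0x1 -> 1, 0x4 -> 4
        scripts/rpc.py thread_get_stats |
            jq --arg reactor_cpumask "$mask" \
               '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id'
    }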
00:25:07.644   17:09:00	-- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2}
00:25:07.644   17:09:00	-- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 143121 0
00:25:07.644   17:09:00	-- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 143121 0 idle
00:25:07.644   17:09:00	-- interrupt/interrupt_common.sh@33 -- # local pid=143121
00:25:07.644   17:09:00	-- interrupt/interrupt_common.sh@34 -- # local idx=0
00:25:07.644   17:09:00	-- interrupt/interrupt_common.sh@35 -- # local state=idle
00:25:07.644   17:09:00	-- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]]
00:25:07.644   17:09:00	-- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]]
00:25:07.644   17:09:00	-- interrupt/interrupt_common.sh@41 -- # hash top
00:25:07.644   17:09:00	-- interrupt/interrupt_common.sh@46 -- # (( j = 10 ))
00:25:07.644   17:09:00	-- interrupt/interrupt_common.sh@46 -- # (( j != 0 ))
00:25:07.644    17:09:00	-- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 143121 -w 256
00:25:07.644    17:09:00	-- interrupt/interrupt_common.sh@47 -- # grep reactor_0
00:25:07.903   17:09:00	-- interrupt/interrupt_common.sh@47 -- # top_reactor=' 143121 root      20   0   20.1t  58148  26100 R   0.0   0.5   0:00.28 reactor_0'
00:25:07.903    17:09:00	-- interrupt/interrupt_common.sh@48 -- # echo 143121 root 20 0 20.1t 58148 26100 R 0.0 0.5 0:00.28 reactor_0
00:25:07.903    17:09:00	-- interrupt/interrupt_common.sh@48 -- # awk '{print $9}'
00:25:07.903    17:09:00	-- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g'
00:25:07.903   17:09:00	-- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0
00:25:07.903   17:09:00	-- interrupt/interrupt_common.sh@49 -- # cpu_rate=0
00:25:07.903   17:09:00	-- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]]
00:25:07.903   17:09:00	-- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]]
00:25:07.903   17:09:00	-- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]]
00:25:07.903   17:09:00	-- interrupt/interrupt_common.sh@56 -- # return 0
00:25:07.903   17:09:00	-- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2}
00:25:07.903   17:09:00	-- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 143121 1
00:25:07.903   17:09:00	-- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 143121 1 idle
00:25:07.903   17:09:00	-- interrupt/interrupt_common.sh@33 -- # local pid=143121
00:25:07.903   17:09:00	-- interrupt/interrupt_common.sh@34 -- # local idx=1
00:25:07.903   17:09:00	-- interrupt/interrupt_common.sh@35 -- # local state=idle
00:25:07.903   17:09:00	-- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]]
00:25:07.903   17:09:00	-- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]]
00:25:07.903   17:09:00	-- interrupt/interrupt_common.sh@41 -- # hash top
00:25:07.903   17:09:00	-- interrupt/interrupt_common.sh@46 -- # (( j = 10 ))
00:25:07.903   17:09:00	-- interrupt/interrupt_common.sh@46 -- # (( j != 0 ))
00:25:07.903    17:09:00	-- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 143121 -w 256
00:25:07.903    17:09:00	-- interrupt/interrupt_common.sh@47 -- # grep reactor_1
00:25:07.903   17:09:00	-- interrupt/interrupt_common.sh@47 -- # top_reactor=' 143125 root      20   0   20.1t  58148  26100 S   0.0   0.5   0:00.00 reactor_1'
00:25:07.903    17:09:00	-- interrupt/interrupt_common.sh@48 -- # echo 143125 root 20 0 20.1t 58148 26100 S 0.0 0.5 0:00.00 reactor_1
00:25:07.903    17:09:00	-- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g'
00:25:07.903    17:09:00	-- interrupt/interrupt_common.sh@48 -- # awk '{print $9}'
00:25:07.903   17:09:00	-- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0
00:25:07.903   17:09:00	-- interrupt/interrupt_common.sh@49 -- # cpu_rate=0
00:25:07.903   17:09:00	-- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]]
00:25:07.903   17:09:00	-- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]]
00:25:07.903   17:09:00	-- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]]
00:25:07.903   17:09:00	-- interrupt/interrupt_common.sh@56 -- # return 0
00:25:07.903   17:09:00	-- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2}
00:25:07.903   17:09:00	-- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 143121 2
00:25:07.903   17:09:00	-- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 143121 2 idle
00:25:07.903   17:09:00	-- interrupt/interrupt_common.sh@33 -- # local pid=143121
00:25:07.903   17:09:00	-- interrupt/interrupt_common.sh@34 -- # local idx=2
00:25:07.903   17:09:00	-- interrupt/interrupt_common.sh@35 -- # local state=idle
00:25:07.903   17:09:00	-- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]]
00:25:07.903   17:09:00	-- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]]
00:25:07.903   17:09:00	-- interrupt/interrupt_common.sh@41 -- # hash top
00:25:07.903   17:09:00	-- interrupt/interrupt_common.sh@46 -- # (( j = 10 ))
00:25:07.903   17:09:00	-- interrupt/interrupt_common.sh@46 -- # (( j != 0 ))
00:25:07.903    17:09:00	-- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 143121 -w 256
00:25:07.903    17:09:00	-- interrupt/interrupt_common.sh@47 -- # grep reactor_2
00:25:08.163   17:09:00	-- interrupt/interrupt_common.sh@47 -- # top_reactor=' 143126 root      20   0   20.1t  58148  26100 S   0.0   0.5   0:00.00 reactor_2'
00:25:08.163    17:09:00	-- interrupt/interrupt_common.sh@48 -- # echo 143126 root 20 0 20.1t 58148 26100 S 0.0 0.5 0:00.00 reactor_2
00:25:08.163    17:09:00	-- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g'
00:25:08.163    17:09:00	-- interrupt/interrupt_common.sh@48 -- # awk '{print $9}'
00:25:08.163   17:09:00	-- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0
00:25:08.163   17:09:00	-- interrupt/interrupt_common.sh@49 -- # cpu_rate=0
00:25:08.163   17:09:00	-- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]]
00:25:08.163   17:09:00	-- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]]
00:25:08.163   17:09:00	-- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]]
00:25:08.163   17:09:00	-- interrupt/interrupt_common.sh@56 -- # return 0
00:25:08.163   17:09:00	-- interrupt/reactor_set_interrupt.sh@33 -- # '[' x '!=' x ']'
00:25:08.163   17:09:00	-- interrupt/reactor_set_interrupt.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 -d
00:25:08.422  [2024-11-19 17:09:01.100712] interrupt_tgt.c:  61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 0.
00:25:08.422  [2024-11-19 17:09:01.101048] thread.c:2087:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to poll mode from intr mode.
00:25:08.422  [2024-11-19 17:09:01.101562] interrupt_tgt.c:  32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch
00:25:08.422   17:09:01	-- interrupt/reactor_set_interrupt.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 -d
00:25:08.681  [2024-11-19 17:09:01.332518] interrupt_tgt.c:  61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 2.
00:25:08.681  [2024-11-19 17:09:01.333164] interrupt_tgt.c:  32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch
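All mode flips go through the RPC plugin that ships with the example app: reactor_set_interrupt_mode <id> puts a reactor into interrupt mode, and the -d flag puts it back into poll mode; each interrupt_tgt.c NOTICE pair above (RPC Start ... / complete reactor switch) brackets one asynchronous transition. The full sequence this test drives:

    scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 -d   # poll mode
    scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 -d   # poll mode
    scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2      # intr mode
    scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0      # intr mode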
00:25:08.681   17:09:01	-- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2
00:25:08.681   17:09:01	-- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 143121 0
00:25:08.681   17:09:01	-- interrupt/interrupt_common.sh@70 -- # reactor_is_busy_or_idle 143121 0 busy
00:25:08.681   17:09:01	-- interrupt/interrupt_common.sh@33 -- # local pid=143121
00:25:08.681   17:09:01	-- interrupt/interrupt_common.sh@34 -- # local idx=0
00:25:08.681   17:09:01	-- interrupt/interrupt_common.sh@35 -- # local state=busy
00:25:08.681   17:09:01	-- interrupt/interrupt_common.sh@37 -- # [[ busy != \b\u\s\y ]]
00:25:08.681   17:09:01	-- interrupt/interrupt_common.sh@41 -- # hash top
00:25:08.681   17:09:01	-- interrupt/interrupt_common.sh@46 -- # (( j = 10 ))
00:25:08.681   17:09:01	-- interrupt/interrupt_common.sh@46 -- # (( j != 0 ))
00:25:08.681    17:09:01	-- interrupt/interrupt_common.sh@47 -- # grep reactor_0
00:25:08.681    17:09:01	-- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 143121 -w 256
00:25:08.681   17:09:01	-- interrupt/interrupt_common.sh@47 -- # top_reactor=' 143121 root      20   0   20.1t  58268  26100 R  99.9   0.5   0:00.71 reactor_0'
00:25:08.681    17:09:01	-- interrupt/interrupt_common.sh@48 -- # echo 143121 root 20 0 20.1t 58268 26100 R 99.9 0.5 0:00.71 reactor_0
00:25:08.681    17:09:01	-- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g'
00:25:08.681    17:09:01	-- interrupt/interrupt_common.sh@48 -- # awk '{print $9}'
00:25:08.681   17:09:01	-- interrupt/interrupt_common.sh@48 -- # cpu_rate=99.9
00:25:08.681   17:09:01	-- interrupt/interrupt_common.sh@49 -- # cpu_rate=99
00:25:08.681   17:09:01	-- interrupt/interrupt_common.sh@51 -- # [[ busy = \b\u\s\y ]]
00:25:08.681   17:09:01	-- interrupt/interrupt_common.sh@51 -- # [[ 99 -lt 70 ]]
00:25:08.681   17:09:01	-- interrupt/interrupt_common.sh@53 -- # [[ busy = \i\d\l\e ]]
00:25:08.681   17:09:01	-- interrupt/interrupt_common.sh@56 -- # return 0
00:25:08.681   17:09:01	-- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2
00:25:08.681   17:09:01	-- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 143121 2
00:25:08.681   17:09:01	-- interrupt/interrupt_common.sh@70 -- # reactor_is_busy_or_idle 143121 2 busy
00:25:08.681   17:09:01	-- interrupt/interrupt_common.sh@33 -- # local pid=143121
00:25:08.681   17:09:01	-- interrupt/interrupt_common.sh@34 -- # local idx=2
00:25:08.681   17:09:01	-- interrupt/interrupt_common.sh@35 -- # local state=busy
00:25:08.681   17:09:01	-- interrupt/interrupt_common.sh@37 -- # [[ busy != \b\u\s\y ]]
00:25:08.681   17:09:01	-- interrupt/interrupt_common.sh@41 -- # hash top
00:25:08.681   17:09:01	-- interrupt/interrupt_common.sh@46 -- # (( j = 10 ))
00:25:08.681   17:09:01	-- interrupt/interrupt_common.sh@46 -- # (( j != 0 ))
00:25:08.940    17:09:01	-- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 143121 -w 256
00:25:08.940    17:09:01	-- interrupt/interrupt_common.sh@47 -- # grep reactor_2
00:25:08.940   17:09:01	-- interrupt/interrupt_common.sh@47 -- # top_reactor=' 143126 root      20   0   20.1t  58268  26100 R  99.9   0.5   0:00.35 reactor_2'
00:25:08.940    17:09:01	-- interrupt/interrupt_common.sh@48 -- # echo 143126 root 20 0 20.1t 58268 26100 R 99.9 0.5 0:00.35 reactor_2
00:25:08.940    17:09:01	-- interrupt/interrupt_common.sh@48 -- # awk '{print $9}'
00:25:08.940    17:09:01	-- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g'
00:25:08.940   17:09:01	-- interrupt/interrupt_common.sh@48 -- # cpu_rate=99.9
00:25:08.940   17:09:01	-- interrupt/interrupt_common.sh@49 -- # cpu_rate=99
00:25:08.940   17:09:01	-- interrupt/interrupt_common.sh@51 -- # [[ busy = \b\u\s\y ]]
00:25:08.940   17:09:01	-- interrupt/interrupt_common.sh@51 -- # [[ 99 -lt 70 ]]
00:25:08.940   17:09:01	-- interrupt/interrupt_common.sh@53 -- # [[ busy = \i\d\l\e ]]
00:25:08.940   17:09:01	-- interrupt/interrupt_common.sh@56 -- # return 0
00:25:08.940   17:09:01	-- interrupt/reactor_set_interrupt.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2
00:25:09.198  [2024-11-19 17:09:01.960800] interrupt_tgt.c:  61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 2.
00:25:09.198  [2024-11-19 17:09:01.961070] interrupt_tgt.c:  32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch
00:25:09.198   17:09:01	-- interrupt/reactor_set_interrupt.sh@52 -- # '[' x '!=' x ']'
00:25:09.198   17:09:01	-- interrupt/reactor_set_interrupt.sh@59 -- # reactor_is_idle 143121 2
00:25:09.198   17:09:01	-- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 143121 2 idle
00:25:09.198   17:09:01	-- interrupt/interrupt_common.sh@33 -- # local pid=143121
00:25:09.198   17:09:01	-- interrupt/interrupt_common.sh@34 -- # local idx=2
00:25:09.198   17:09:01	-- interrupt/interrupt_common.sh@35 -- # local state=idle
00:25:09.198   17:09:01	-- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]]
00:25:09.198   17:09:01	-- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]]
00:25:09.198   17:09:01	-- interrupt/interrupt_common.sh@41 -- # hash top
00:25:09.198   17:09:01	-- interrupt/interrupt_common.sh@46 -- # (( j = 10 ))
00:25:09.198   17:09:01	-- interrupt/interrupt_common.sh@46 -- # (( j != 0 ))
00:25:09.198    17:09:01	-- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 143121 -w 256
00:25:09.198    17:09:01	-- interrupt/interrupt_common.sh@47 -- # grep reactor_2
00:25:09.457   17:09:02	-- interrupt/interrupt_common.sh@47 -- # top_reactor=' 143126 root      20   0   20.1t  58336  26100 S   0.0   0.5   0:00.62 reactor_2'
00:25:09.457    17:09:02	-- interrupt/interrupt_common.sh@48 -- # echo 143126 root 20 0 20.1t 58336 26100 S 0.0 0.5 0:00.62 reactor_2
00:25:09.457    17:09:02	-- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g'
00:25:09.457    17:09:02	-- interrupt/interrupt_common.sh@48 -- # awk '{print $9}'
00:25:09.457   17:09:02	-- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0
00:25:09.457   17:09:02	-- interrupt/interrupt_common.sh@49 -- # cpu_rate=0
00:25:09.457   17:09:02	-- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]]
00:25:09.457   17:09:02	-- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]]
00:25:09.457   17:09:02	-- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]]
00:25:09.457   17:09:02	-- interrupt/interrupt_common.sh@56 -- # return 0
00:25:09.458   17:09:02	-- interrupt/reactor_set_interrupt.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0
00:25:09.716  [2024-11-19 17:09:02.408856] interrupt_tgt.c:  61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 0.
00:25:09.716  [2024-11-19 17:09:02.409422] thread.c:2087:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from poll mode.
00:25:09.716  [2024-11-19 17:09:02.409483] interrupt_tgt.c:  32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch
00:25:09.716   17:09:02	-- interrupt/reactor_set_interrupt.sh@63 -- # '[' x '!=' x ']'
00:25:09.716   17:09:02	-- interrupt/reactor_set_interrupt.sh@70 -- # reactor_is_idle 143121 0
00:25:09.716   17:09:02	-- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 143121 0 idle
00:25:09.716   17:09:02	-- interrupt/interrupt_common.sh@33 -- # local pid=143121
00:25:09.716   17:09:02	-- interrupt/interrupt_common.sh@34 -- # local idx=0
00:25:09.716   17:09:02	-- interrupt/interrupt_common.sh@35 -- # local state=idle
00:25:09.716   17:09:02	-- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]]
00:25:09.716   17:09:02	-- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]]
00:25:09.716   17:09:02	-- interrupt/interrupt_common.sh@41 -- # hash top
00:25:09.716   17:09:02	-- interrupt/interrupt_common.sh@46 -- # (( j = 10 ))
00:25:09.716   17:09:02	-- interrupt/interrupt_common.sh@46 -- # (( j != 0 ))
00:25:09.716    17:09:02	-- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 143121 -w 256
00:25:09.716    17:09:02	-- interrupt/interrupt_common.sh@47 -- # grep reactor_0
00:25:09.975   17:09:02	-- interrupt/interrupt_common.sh@47 -- # top_reactor=' 143121 root      20   0   20.1t  58392  26100 S   6.7   0.5   0:01.61 reactor_0'
00:25:09.975    17:09:02	-- interrupt/interrupt_common.sh@48 -- # echo 143121 root 20 0 20.1t 58392 26100 S 6.7 0.5 0:01.61 reactor_0
00:25:09.975    17:09:02	-- interrupt/interrupt_common.sh@48 -- # awk '{print $9}'
00:25:09.975    17:09:02	-- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g'
00:25:09.975   17:09:02	-- interrupt/interrupt_common.sh@48 -- # cpu_rate=6.7
00:25:09.975   17:09:02	-- interrupt/interrupt_common.sh@49 -- # cpu_rate=6
00:25:09.975   17:09:02	-- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]]
00:25:09.975   17:09:02	-- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]]
00:25:09.975   17:09:02	-- interrupt/interrupt_common.sh@53 -- # [[ 6 -gt 30 ]]
00:25:09.975   17:09:02	-- interrupt/interrupt_common.sh@56 -- # return 0
00:25:09.975   17:09:02	-- interrupt/reactor_set_interrupt.sh@72 -- # return 0
00:25:09.975   17:09:02	-- interrupt/reactor_set_interrupt.sh@82 -- # return 0
00:25:09.975   17:09:02	-- interrupt/reactor_set_interrupt.sh@103 -- # trap - SIGINT SIGTERM EXIT
00:25:09.975   17:09:02	-- interrupt/reactor_set_interrupt.sh@104 -- # killprocess 143121
00:25:09.975   17:09:02	-- common/autotest_common.sh@936 -- # '[' -z 143121 ']'
00:25:09.975   17:09:02	-- common/autotest_common.sh@940 -- # kill -0 143121
00:25:09.975    17:09:02	-- common/autotest_common.sh@941 -- # uname
00:25:09.975   17:09:02	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:25:09.975    17:09:02	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 143121
00:25:09.975   17:09:02	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:25:09.975   17:09:02	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:25:09.975   17:09:02	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 143121'
00:25:09.975  killing process with pid 143121
00:25:09.975   17:09:02	-- common/autotest_common.sh@955 -- # kill 143121
00:25:09.975   17:09:02	-- common/autotest_common.sh@960 -- # wait 143121
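killprocess is the guarded teardown seen twice in this file: reject an empty pid, confirm the process is alive (kill -0), on Linux read its comm so a sudo wrapper can be treated specially, then kill and wait. Condensed, with the sudo branch elided since process_name=reactor_0 here:

    # Condensed reading of the killprocess trace above.
    killprocess_sketch() {
        local pid=$1
        [ -n "$pid" ] || return 1
        kill -0 "$pid" 2>/dev/null || return 1   # not running
        # the real helper branches when comm is "sudo"; skipped in this sketch
        echo "killing process with pid $pid"
        kill "$pid" && wait "$pid"               # pid is our own child here
    }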
00:25:10.233   17:09:02	-- interrupt/reactor_set_interrupt.sh@105 -- # cleanup
00:25:10.233   17:09:02	-- interrupt/interrupt_common.sh@19 -- # rm -f /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile
00:25:10.233  
00:25:10.233  real	0m10.873s
00:25:10.233  user	0m10.651s
00:25:10.233  sys	0m1.805s
00:25:10.233   17:09:02	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:25:10.233   17:09:02	-- common/autotest_common.sh@10 -- # set +x
00:25:10.233  ************************************
00:25:10.233  END TEST reactor_set_interrupt
00:25:10.234  ************************************
00:25:10.234   17:09:03	-- spdk/autotest.sh@187 -- # run_test reap_unregistered_poller /home/vagrant/spdk_repo/spdk/test/interrupt/reap_unregistered_poller.sh
00:25:10.234   17:09:03	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:25:10.234   17:09:03	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:25:10.234   17:09:03	-- common/autotest_common.sh@10 -- # set +x
00:25:10.234  ************************************
00:25:10.234  START TEST reap_unregistered_poller
00:25:10.234  ************************************
00:25:10.234   17:09:03	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/interrupt/reap_unregistered_poller.sh
00:25:10.492  * Looking for test storage...
00:25:10.492  * Found test storage at /home/vagrant/spdk_repo/spdk/test/interrupt
00:25:10.492    17:09:03	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:25:10.492     17:09:03	-- common/autotest_common.sh@1690 -- # lcov --version
00:25:10.492     17:09:03	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:25:10.492    17:09:03	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:25:10.492    17:09:03	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:25:10.492    17:09:03	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:25:10.492    17:09:03	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:25:10.492    17:09:03	-- scripts/common.sh@335 -- # IFS=.-:
00:25:10.492    17:09:03	-- scripts/common.sh@335 -- # read -ra ver1
00:25:10.492    17:09:03	-- scripts/common.sh@336 -- # IFS=.-:
00:25:10.492    17:09:03	-- scripts/common.sh@336 -- # read -ra ver2
00:25:10.492    17:09:03	-- scripts/common.sh@337 -- # local 'op=<'
00:25:10.492    17:09:03	-- scripts/common.sh@339 -- # ver1_l=2
00:25:10.492    17:09:03	-- scripts/common.sh@340 -- # ver2_l=1
00:25:10.492    17:09:03	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:25:10.492    17:09:03	-- scripts/common.sh@343 -- # case "$op" in
00:25:10.492    17:09:03	-- scripts/common.sh@344 -- # : 1
00:25:10.492    17:09:03	-- scripts/common.sh@363 -- # (( v = 0 ))
00:25:10.492    17:09:03	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:25:10.492     17:09:03	-- scripts/common.sh@364 -- # decimal 1
00:25:10.492     17:09:03	-- scripts/common.sh@352 -- # local d=1
00:25:10.492     17:09:03	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:25:10.492     17:09:03	-- scripts/common.sh@354 -- # echo 1
00:25:10.492    17:09:03	-- scripts/common.sh@364 -- # ver1[v]=1
00:25:10.492     17:09:03	-- scripts/common.sh@365 -- # decimal 2
00:25:10.492     17:09:03	-- scripts/common.sh@352 -- # local d=2
00:25:10.492     17:09:03	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:25:10.492     17:09:03	-- scripts/common.sh@354 -- # echo 2
00:25:10.492    17:09:03	-- scripts/common.sh@365 -- # ver2[v]=2
00:25:10.492    17:09:03	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:25:10.492    17:09:03	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:25:10.492    17:09:03	-- scripts/common.sh@367 -- # return 0
00:25:10.492    17:09:03	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:25:10.492    17:09:03	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:25:10.492  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:25:10.492  		--rc genhtml_branch_coverage=1
00:25:10.492  		--rc genhtml_function_coverage=1
00:25:10.492  		--rc genhtml_legend=1
00:25:10.492  		--rc geninfo_all_blocks=1
00:25:10.492  		--rc geninfo_unexecuted_blocks=1
00:25:10.492  		
00:25:10.492  		'
00:25:10.492    17:09:03	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:25:10.492  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:25:10.492  		--rc genhtml_branch_coverage=1
00:25:10.492  		--rc genhtml_function_coverage=1
00:25:10.492  		--rc genhtml_legend=1
00:25:10.492  		--rc geninfo_all_blocks=1
00:25:10.492  		--rc geninfo_unexecuted_blocks=1
00:25:10.492  		
00:25:10.492  		'
00:25:10.492    17:09:03	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:25:10.492  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:25:10.492  		--rc genhtml_branch_coverage=1
00:25:10.492  		--rc genhtml_function_coverage=1
00:25:10.492  		--rc genhtml_legend=1
00:25:10.492  		--rc geninfo_all_blocks=1
00:25:10.492  		--rc geninfo_unexecuted_blocks=1
00:25:10.492  		
00:25:10.492  		'
00:25:10.492    17:09:03	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:25:10.492  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:25:10.492  		--rc genhtml_branch_coverage=1
00:25:10.492  		--rc genhtml_function_coverage=1
00:25:10.492  		--rc genhtml_legend=1
00:25:10.492  		--rc geninfo_all_blocks=1
00:25:10.492  		--rc geninfo_unexecuted_blocks=1
00:25:10.492  		
00:25:10.492  		'
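The block above is lcov feature detection: lcov --version is parsed, and scripts/common.sh compares it against 1.15 field by field — both strings are split on the characters .-: into arrays, then components are compared numerically until one side wins (1 < 2 here, so the branch/function-coverage LCOV options get exported). A minimal sketch of that comparison:

    # Field-wise "version less-than" in the style of scripts/common.sh
    # cmp_versions; splitting on .-: matches the IFS seen in the trace.
    version_lt() {
        local IFS=.-:
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        local v n=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < n; v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1    # equal
    }
    version_lt 1.15 2 && echo older    # -> older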
00:25:10.492   17:09:03	-- interrupt/reap_unregistered_poller.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/interrupt/interrupt_common.sh
00:25:10.492      17:09:03	-- interrupt/interrupt_common.sh@5 -- # dirname /home/vagrant/spdk_repo/spdk/test/interrupt/reap_unregistered_poller.sh
00:25:10.492     17:09:03	-- interrupt/interrupt_common.sh@5 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/interrupt
00:25:10.492    17:09:03	-- interrupt/interrupt_common.sh@5 -- # testdir=/home/vagrant/spdk_repo/spdk/test/interrupt
00:25:10.492     17:09:03	-- interrupt/interrupt_common.sh@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/interrupt/../..
00:25:10.492    17:09:03	-- interrupt/interrupt_common.sh@6 -- # rootdir=/home/vagrant/spdk_repo/spdk
00:25:10.492    17:09:03	-- interrupt/interrupt_common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh
00:25:10.492     17:09:03	-- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd
00:25:10.492     17:09:03	-- common/autotest_common.sh@34 -- # set -e
00:25:10.492     17:09:03	-- common/autotest_common.sh@35 -- # shopt -s nullglob
00:25:10.492     17:09:03	-- common/autotest_common.sh@36 -- # shopt -s extglob
00:25:10.492     17:09:03	-- common/autotest_common.sh@38 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]]
00:25:10.492     17:09:03	-- common/autotest_common.sh@39 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh
00:25:10.492      17:09:03	-- common/build_config.sh@1 -- # CONFIG_WPDK_DIR=
00:25:10.492      17:09:03	-- common/build_config.sh@2 -- # CONFIG_ASAN=y
00:25:10.492      17:09:03	-- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n
00:25:10.492      17:09:03	-- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y
00:25:10.492      17:09:03	-- common/build_config.sh@5 -- # CONFIG_USDT=n
00:25:10.492      17:09:03	-- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n
00:25:10.492      17:09:03	-- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local
00:25:10.492      17:09:03	-- common/build_config.sh@8 -- # CONFIG_RBD=n
00:25:10.492      17:09:03	-- common/build_config.sh@9 -- # CONFIG_LIBDIR=
00:25:10.492      17:09:03	-- common/build_config.sh@10 -- # CONFIG_IDXD=y
00:25:10.492      17:09:03	-- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y
00:25:10.492      17:09:03	-- common/build_config.sh@12 -- # CONFIG_SMA=n
00:25:10.492      17:09:03	-- common/build_config.sh@13 -- # CONFIG_VTUNE=n
00:25:10.492      17:09:03	-- common/build_config.sh@14 -- # CONFIG_TSAN=n
00:25:10.492      17:09:03	-- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y
00:25:10.492      17:09:03	-- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR=
00:25:10.492      17:09:03	-- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n
00:25:10.492      17:09:03	-- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y
00:25:10.492      17:09:03	-- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:25:10.492      17:09:03	-- common/build_config.sh@20 -- # CONFIG_LTO=n
00:25:10.492      17:09:03	-- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y
00:25:10.492      17:09:03	-- common/build_config.sh@22 -- # CONFIG_CET=n
00:25:10.492      17:09:03	-- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n
00:25:10.492      17:09:03	-- common/build_config.sh@24 -- # CONFIG_OCF_PATH=
00:25:10.492      17:09:03	-- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y
00:25:10.492      17:09:03	-- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=n
00:25:10.492      17:09:03	-- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n
00:25:10.492      17:09:03	-- common/build_config.sh@28 -- # CONFIG_UBLK=n
00:25:10.492      17:09:03	-- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y
00:25:10.492      17:09:03	-- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH=
00:25:10.492      17:09:03	-- common/build_config.sh@31 -- # CONFIG_OCF=n
00:25:10.492      17:09:03	-- common/build_config.sh@32 -- # CONFIG_FUSE=n
00:25:10.492      17:09:03	-- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR=
00:25:10.492      17:09:03	-- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB=
00:25:10.492      17:09:03	-- common/build_config.sh@35 -- # CONFIG_FUZZER=n
00:25:10.492      17:09:03	-- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/dpdk/build
00:25:10.492      17:09:03	-- common/build_config.sh@37 -- # CONFIG_CRYPTO=n
00:25:10.492      17:09:03	-- common/build_config.sh@38 -- # CONFIG_PGO_USE=n
00:25:10.492      17:09:03	-- common/build_config.sh@39 -- # CONFIG_VHOST=y
00:25:10.492      17:09:03	-- common/build_config.sh@40 -- # CONFIG_DAOS=n
00:25:10.492      17:09:03	-- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR=//home/vagrant/spdk_repo/dpdk/build/include
00:25:10.492      17:09:03	-- common/build_config.sh@42 -- # CONFIG_DAOS_DIR=
00:25:10.492      17:09:03	-- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=y
00:25:10.492      17:09:03	-- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y
00:25:10.493      17:09:03	-- common/build_config.sh@45 -- # CONFIG_VIRTIO=y
00:25:10.493      17:09:03	-- common/build_config.sh@46 -- # CONFIG_COVERAGE=y
00:25:10.493      17:09:03	-- common/build_config.sh@47 -- # CONFIG_RDMA=y
00:25:10.493      17:09:03	-- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio
00:25:10.493      17:09:03	-- common/build_config.sh@49 -- # CONFIG_URING_PATH=
00:25:10.493      17:09:03	-- common/build_config.sh@50 -- # CONFIG_XNVME=n
00:25:10.493      17:09:03	-- common/build_config.sh@51 -- # CONFIG_VFIO_USER=n
00:25:10.493      17:09:03	-- common/build_config.sh@52 -- # CONFIG_ARCH=native
00:25:10.493      17:09:03	-- common/build_config.sh@53 -- # CONFIG_URING_ZNS=n
00:25:10.493      17:09:03	-- common/build_config.sh@54 -- # CONFIG_WERROR=y
00:25:10.493      17:09:03	-- common/build_config.sh@55 -- # CONFIG_HAVE_LIBBSD=n
00:25:10.493      17:09:03	-- common/build_config.sh@56 -- # CONFIG_UBSAN=y
00:25:10.493      17:09:03	-- common/build_config.sh@57 -- # CONFIG_IPSEC_MB_DIR=
00:25:10.493      17:09:03	-- common/build_config.sh@58 -- # CONFIG_GOLANG=n
00:25:10.493      17:09:03	-- common/build_config.sh@59 -- # CONFIG_ISAL=y
00:25:10.493      17:09:03	-- common/build_config.sh@60 -- # CONFIG_IDXD_KERNEL=n
00:25:10.493      17:09:03	-- common/build_config.sh@61 -- # CONFIG_DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib
00:25:10.493      17:09:03	-- common/build_config.sh@62 -- # CONFIG_RDMA_PROV=verbs
00:25:10.493      17:09:03	-- common/build_config.sh@63 -- # CONFIG_APPS=y
00:25:10.493      17:09:03	-- common/build_config.sh@64 -- # CONFIG_SHARED=n
00:25:10.493      17:09:03	-- common/build_config.sh@65 -- # CONFIG_FC_PATH=
00:25:10.493      17:09:03	-- common/build_config.sh@66 -- # CONFIG_DPDK_PKG_CONFIG=n
00:25:10.493      17:09:03	-- common/build_config.sh@67 -- # CONFIG_FC=n
00:25:10.493      17:09:03	-- common/build_config.sh@68 -- # CONFIG_AVAHI=n
00:25:10.493      17:09:03	-- common/build_config.sh@69 -- # CONFIG_FIO_PLUGIN=y
00:25:10.493      17:09:03	-- common/build_config.sh@70 -- # CONFIG_RAID5F=y
00:25:10.493      17:09:03	-- common/build_config.sh@71 -- # CONFIG_EXAMPLES=y
00:25:10.493      17:09:03	-- common/build_config.sh@72 -- # CONFIG_TESTS=y
00:25:10.493      17:09:03	-- common/build_config.sh@73 -- # CONFIG_CRYPTO_MLX5=n
00:25:10.493      17:09:03	-- common/build_config.sh@74 -- # CONFIG_MAX_LCORES=
00:25:10.493      17:09:03	-- common/build_config.sh@75 -- # CONFIG_IPSEC_MB=n
00:25:10.493      17:09:03	-- common/build_config.sh@76 -- # CONFIG_DEBUG=y
00:25:10.493      17:09:03	-- common/build_config.sh@77 -- # CONFIG_DPDK_COMPRESSDEV=n
00:25:10.493      17:09:03	-- common/build_config.sh@78 -- # CONFIG_CROSS_PREFIX=
00:25:10.493      17:09:03	-- common/build_config.sh@79 -- # CONFIG_URING=n
00:25:10.493     17:09:03	-- common/autotest_common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh
00:25:10.493        17:09:03	-- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh
00:25:10.493       17:09:03	-- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common
00:25:10.493      17:09:03	-- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common
00:25:10.493      17:09:03	-- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk
00:25:10.493      17:09:03	-- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin
00:25:10.493      17:09:03	-- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app
00:25:10.493      17:09:03	-- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples
00:25:10.493      17:09:03	-- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz")
00:25:10.493      17:09:03	-- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt")
00:25:10.493      17:09:03	-- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt")
00:25:10.493      17:09:03	-- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost")
00:25:10.493      17:09:03	-- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd")
00:25:10.493      17:09:03	-- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt")
00:25:10.493      17:09:03	-- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]]
00:25:10.493      17:09:03	-- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H
00:25:10.493  #define SPDK_CONFIG_H
00:25:10.493  #define SPDK_CONFIG_APPS 1
00:25:10.493  #define SPDK_CONFIG_ARCH native
00:25:10.493  #define SPDK_CONFIG_ASAN 1
00:25:10.493  #undef SPDK_CONFIG_AVAHI
00:25:10.493  #undef SPDK_CONFIG_CET
00:25:10.493  #define SPDK_CONFIG_COVERAGE 1
00:25:10.493  #define SPDK_CONFIG_CROSS_PREFIX 
00:25:10.493  #undef SPDK_CONFIG_CRYPTO
00:25:10.493  #undef SPDK_CONFIG_CRYPTO_MLX5
00:25:10.493  #undef SPDK_CONFIG_CUSTOMOCF
00:25:10.493  #undef SPDK_CONFIG_DAOS
00:25:10.493  #define SPDK_CONFIG_DAOS_DIR 
00:25:10.493  #define SPDK_CONFIG_DEBUG 1
00:25:10.493  #undef SPDK_CONFIG_DPDK_COMPRESSDEV
00:25:10.493  #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/dpdk/build
00:25:10.493  #define SPDK_CONFIG_DPDK_INC_DIR //home/vagrant/spdk_repo/dpdk/build/include
00:25:10.493  #define SPDK_CONFIG_DPDK_LIB_DIR /home/vagrant/spdk_repo/dpdk/build/lib
00:25:10.493  #undef SPDK_CONFIG_DPDK_PKG_CONFIG
00:25:10.493  #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:25:10.493  #define SPDK_CONFIG_EXAMPLES 1
00:25:10.493  #undef SPDK_CONFIG_FC
00:25:10.493  #define SPDK_CONFIG_FC_PATH 
00:25:10.493  #define SPDK_CONFIG_FIO_PLUGIN 1
00:25:10.493  #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio
00:25:10.493  #undef SPDK_CONFIG_FUSE
00:25:10.493  #undef SPDK_CONFIG_FUZZER
00:25:10.493  #define SPDK_CONFIG_FUZZER_LIB 
00:25:10.493  #undef SPDK_CONFIG_GOLANG
00:25:10.493  #undef SPDK_CONFIG_HAVE_ARC4RANDOM
00:25:10.493  #define SPDK_CONFIG_HAVE_EXECINFO_H 1
00:25:10.493  #undef SPDK_CONFIG_HAVE_LIBARCHIVE
00:25:10.493  #undef SPDK_CONFIG_HAVE_LIBBSD
00:25:10.493  #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1
00:25:10.493  #define SPDK_CONFIG_IDXD 1
00:25:10.493  #undef SPDK_CONFIG_IDXD_KERNEL
00:25:10.493  #undef SPDK_CONFIG_IPSEC_MB
00:25:10.493  #define SPDK_CONFIG_IPSEC_MB_DIR 
00:25:10.493  #define SPDK_CONFIG_ISAL 1
00:25:10.493  #define SPDK_CONFIG_ISAL_CRYPTO 1
00:25:10.493  #define SPDK_CONFIG_ISCSI_INITIATOR 1
00:25:10.493  #define SPDK_CONFIG_LIBDIR 
00:25:10.493  #undef SPDK_CONFIG_LTO
00:25:10.493  #define SPDK_CONFIG_MAX_LCORES 
00:25:10.493  #define SPDK_CONFIG_NVME_CUSE 1
00:25:10.493  #undef SPDK_CONFIG_OCF
00:25:10.493  #define SPDK_CONFIG_OCF_PATH 
00:25:10.493  #define SPDK_CONFIG_OPENSSL_PATH 
00:25:10.493  #undef SPDK_CONFIG_PGO_CAPTURE
00:25:10.493  #undef SPDK_CONFIG_PGO_USE
00:25:10.493  #define SPDK_CONFIG_PREFIX /usr/local
00:25:10.493  #define SPDK_CONFIG_RAID5F 1
00:25:10.493  #undef SPDK_CONFIG_RBD
00:25:10.493  #define SPDK_CONFIG_RDMA 1
00:25:10.493  #define SPDK_CONFIG_RDMA_PROV verbs
00:25:10.493  #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1
00:25:10.493  #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1
00:25:10.493  #define SPDK_CONFIG_RDMA_SET_TOS 1
00:25:10.493  #undef SPDK_CONFIG_SHARED
00:25:10.493  #undef SPDK_CONFIG_SMA
00:25:10.493  #define SPDK_CONFIG_TESTS 1
00:25:10.493  #undef SPDK_CONFIG_TSAN
00:25:10.493  #undef SPDK_CONFIG_UBLK
00:25:10.493  #define SPDK_CONFIG_UBSAN 1
00:25:10.493  #define SPDK_CONFIG_UNIT_TESTS 1
00:25:10.493  #undef SPDK_CONFIG_URING
00:25:10.493  #define SPDK_CONFIG_URING_PATH 
00:25:10.493  #undef SPDK_CONFIG_URING_ZNS
00:25:10.493  #undef SPDK_CONFIG_USDT
00:25:10.493  #undef SPDK_CONFIG_VBDEV_COMPRESS
00:25:10.493  #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5
00:25:10.493  #undef SPDK_CONFIG_VFIO_USER
00:25:10.493  #define SPDK_CONFIG_VFIO_USER_DIR 
00:25:10.493  #define SPDK_CONFIG_VHOST 1
00:25:10.493  #define SPDK_CONFIG_VIRTIO 1
00:25:10.493  #undef SPDK_CONFIG_VTUNE
00:25:10.493  #define SPDK_CONFIG_VTUNE_DIR 
00:25:10.493  #define SPDK_CONFIG_WERROR 1
00:25:10.493  #define SPDK_CONFIG_WPDK_DIR 
00:25:10.493  #undef SPDK_CONFIG_XNVME
00:25:10.493  #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]]
00:25:10.493      17:09:03	-- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS ))
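The wall of escaped characters above is just xtrace rendering of a glob match: applications.sh reads include/spdk/config.h wholesale and checks it for #define SPDK_CONFIG_DEBUG before deciding whether the SPDK_AUTOTEST_DEBUG_APPS gate applies. Written readably:

    # Readable form of the applications.sh@23 test traced above.
    if [[ $(< include/spdk/config.h) == *"#define SPDK_CONFIG_DEBUG"* ]]; then
        : # debug build detected; the (( SPDK_AUTOTEST_DEBUG_APPS )) gate follows
    fi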
00:25:10.493     17:09:03	-- common/autotest_common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:25:10.493      17:09:03	-- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]]
00:25:10.493      17:09:03	-- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:25:10.493      17:09:03	-- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:25:10.493       17:09:03	-- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:25:10.493       17:09:03	-- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:25:10.493       17:09:03	-- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:25:10.493       17:09:03	-- paths/export.sh@5 -- # export PATH
00:25:10.493       17:09:03	-- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:25:10.493     17:09:03	-- common/autotest_common.sh@50 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common
00:25:10.493        17:09:03	-- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common
00:25:10.493       17:09:03	-- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm
00:25:10.493      17:09:03	-- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm
00:25:10.493       17:09:03	-- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../
00:25:10.493      17:09:03	-- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk
00:25:10.493      17:09:03	-- pm/common@16 -- # TEST_TAG=N/A
00:25:10.493      17:09:03	-- pm/common@17 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name
00:25:10.493     17:09:03	-- common/autotest_common.sh@52 -- # : 1
00:25:10.493     17:09:03	-- common/autotest_common.sh@53 -- # export RUN_NIGHTLY
00:25:10.493     17:09:03	-- common/autotest_common.sh@56 -- # : 0
00:25:10.493     17:09:03	-- common/autotest_common.sh@57 -- # export SPDK_AUTOTEST_DEBUG_APPS
00:25:10.493     17:09:03	-- common/autotest_common.sh@58 -- # : 0
00:25:10.493     17:09:03	-- common/autotest_common.sh@59 -- # export SPDK_RUN_VALGRIND
00:25:10.493     17:09:03	-- common/autotest_common.sh@60 -- # : 1
00:25:10.493     17:09:03	-- common/autotest_common.sh@61 -- # export SPDK_RUN_FUNCTIONAL_TEST
00:25:10.493     17:09:03	-- common/autotest_common.sh@62 -- # : 1
00:25:10.493     17:09:03	-- common/autotest_common.sh@63 -- # export SPDK_TEST_UNITTEST
00:25:10.493     17:09:03	-- common/autotest_common.sh@64 -- # :
00:25:10.493     17:09:03	-- common/autotest_common.sh@65 -- # export SPDK_TEST_AUTOBUILD
00:25:10.493     17:09:03	-- common/autotest_common.sh@66 -- # : 0
00:25:10.493     17:09:03	-- common/autotest_common.sh@67 -- # export SPDK_TEST_RELEASE_BUILD
00:25:10.493     17:09:03	-- common/autotest_common.sh@68 -- # : 0
00:25:10.493     17:09:03	-- common/autotest_common.sh@69 -- # export SPDK_TEST_ISAL
00:25:10.493     17:09:03	-- common/autotest_common.sh@70 -- # : 0
00:25:10.493     17:09:03	-- common/autotest_common.sh@71 -- # export SPDK_TEST_ISCSI
00:25:10.493     17:09:03	-- common/autotest_common.sh@72 -- # : 0
00:25:10.493     17:09:03	-- common/autotest_common.sh@73 -- # export SPDK_TEST_ISCSI_INITIATOR
00:25:10.493     17:09:03	-- common/autotest_common.sh@74 -- # : 1
00:25:10.493     17:09:03	-- common/autotest_common.sh@75 -- # export SPDK_TEST_NVME
00:25:10.493     17:09:03	-- common/autotest_common.sh@76 -- # : 0
00:25:10.493     17:09:03	-- common/autotest_common.sh@77 -- # export SPDK_TEST_NVME_PMR
00:25:10.493     17:09:03	-- common/autotest_common.sh@78 -- # : 0
00:25:10.493     17:09:03	-- common/autotest_common.sh@79 -- # export SPDK_TEST_NVME_BP
00:25:10.493     17:09:03	-- common/autotest_common.sh@80 -- # : 0
00:25:10.493     17:09:03	-- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME_CLI
00:25:10.493     17:09:03	-- common/autotest_common.sh@82 -- # : 0
00:25:10.493     17:09:03	-- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_CUSE
00:25:10.493     17:09:03	-- common/autotest_common.sh@84 -- # : 0
00:25:10.493     17:09:03	-- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_FDP
00:25:10.493     17:09:03	-- common/autotest_common.sh@86 -- # : 0
00:25:10.493     17:09:03	-- common/autotest_common.sh@87 -- # export SPDK_TEST_NVMF
00:25:10.493     17:09:03	-- common/autotest_common.sh@88 -- # : 0
00:25:10.493     17:09:03	-- common/autotest_common.sh@89 -- # export SPDK_TEST_VFIOUSER
00:25:10.493     17:09:03	-- common/autotest_common.sh@90 -- # : 0
00:25:10.493     17:09:03	-- common/autotest_common.sh@91 -- # export SPDK_TEST_VFIOUSER_QEMU
00:25:10.493     17:09:03	-- common/autotest_common.sh@92 -- # : 0
00:25:10.493     17:09:03	-- common/autotest_common.sh@93 -- # export SPDK_TEST_FUZZER
00:25:10.493     17:09:03	-- common/autotest_common.sh@94 -- # : 0
00:25:10.493     17:09:03	-- common/autotest_common.sh@95 -- # export SPDK_TEST_FUZZER_SHORT
00:25:10.493     17:09:03	-- common/autotest_common.sh@96 -- # : rdma
00:25:10.493     17:09:03	-- common/autotest_common.sh@97 -- # export SPDK_TEST_NVMF_TRANSPORT
00:25:10.493     17:09:03	-- common/autotest_common.sh@98 -- # : 0
00:25:10.493     17:09:03	-- common/autotest_common.sh@99 -- # export SPDK_TEST_RBD
00:25:10.493     17:09:03	-- common/autotest_common.sh@100 -- # : 0
00:25:10.493     17:09:03	-- common/autotest_common.sh@101 -- # export SPDK_TEST_VHOST
00:25:10.493     17:09:03	-- common/autotest_common.sh@102 -- # : 1
00:25:10.493     17:09:03	-- common/autotest_common.sh@103 -- # export SPDK_TEST_BLOCKDEV
00:25:10.493     17:09:03	-- common/autotest_common.sh@104 -- # : 0
00:25:10.493     17:09:03	-- common/autotest_common.sh@105 -- # export SPDK_TEST_IOAT
00:25:10.493     17:09:03	-- common/autotest_common.sh@106 -- # : 0
00:25:10.493     17:09:03	-- common/autotest_common.sh@107 -- # export SPDK_TEST_BLOBFS
00:25:10.493     17:09:03	-- common/autotest_common.sh@108 -- # : 0
00:25:10.493     17:09:03	-- common/autotest_common.sh@109 -- # export SPDK_TEST_VHOST_INIT
00:25:10.493     17:09:03	-- common/autotest_common.sh@110 -- # : 0
00:25:10.493     17:09:03	-- common/autotest_common.sh@111 -- # export SPDK_TEST_LVOL
00:25:10.493     17:09:03	-- common/autotest_common.sh@112 -- # : 0
00:25:10.493     17:09:03	-- common/autotest_common.sh@113 -- # export SPDK_TEST_VBDEV_COMPRESS
00:25:10.493     17:09:03	-- common/autotest_common.sh@114 -- # : 1
00:25:10.493     17:09:03	-- common/autotest_common.sh@115 -- # export SPDK_RUN_ASAN
00:25:10.493     17:09:03	-- common/autotest_common.sh@116 -- # : 1
00:25:10.493     17:09:03	-- common/autotest_common.sh@117 -- # export SPDK_RUN_UBSAN
00:25:10.493     17:09:03	-- common/autotest_common.sh@118 -- # : /home/vagrant/spdk_repo/dpdk/build
00:25:10.493     17:09:03	-- common/autotest_common.sh@119 -- # export SPDK_RUN_EXTERNAL_DPDK
00:25:10.493     17:09:03	-- common/autotest_common.sh@120 -- # : 0
00:25:10.493     17:09:03	-- common/autotest_common.sh@121 -- # export SPDK_RUN_NON_ROOT
00:25:10.493     17:09:03	-- common/autotest_common.sh@122 -- # : 0
00:25:10.493     17:09:03	-- common/autotest_common.sh@123 -- # export SPDK_TEST_CRYPTO
00:25:10.493     17:09:03	-- common/autotest_common.sh@124 -- # : 0
00:25:10.493     17:09:03	-- common/autotest_common.sh@125 -- # export SPDK_TEST_FTL
00:25:10.493     17:09:03	-- common/autotest_common.sh@126 -- # : 0
00:25:10.493     17:09:03	-- common/autotest_common.sh@127 -- # export SPDK_TEST_OCF
00:25:10.493     17:09:03	-- common/autotest_common.sh@128 -- # : 0
00:25:10.493     17:09:03	-- common/autotest_common.sh@129 -- # export SPDK_TEST_VMD
00:25:10.493     17:09:03	-- common/autotest_common.sh@130 -- # : 0
00:25:10.493     17:09:03	-- common/autotest_common.sh@131 -- # export SPDK_TEST_OPAL
00:25:10.493     17:09:03	-- common/autotest_common.sh@132 -- # : v22.11.4
00:25:10.493     17:09:03	-- common/autotest_common.sh@133 -- # export SPDK_TEST_NATIVE_DPDK
00:25:10.493     17:09:03	-- common/autotest_common.sh@134 -- # : true
00:25:10.493     17:09:03	-- common/autotest_common.sh@135 -- # export SPDK_AUTOTEST_X
00:25:10.493     17:09:03	-- common/autotest_common.sh@136 -- # : 1
00:25:10.493     17:09:03	-- common/autotest_common.sh@137 -- # export SPDK_TEST_RAID5
00:25:10.493     17:09:03	-- common/autotest_common.sh@138 -- # : 0
00:25:10.493     17:09:03	-- common/autotest_common.sh@139 -- # export SPDK_TEST_URING
00:25:10.493     17:09:03	-- common/autotest_common.sh@140 -- # : 0
00:25:10.493     17:09:03	-- common/autotest_common.sh@141 -- # export SPDK_TEST_USDT
00:25:10.493     17:09:03	-- common/autotest_common.sh@142 -- # : 0
00:25:10.493     17:09:03	-- common/autotest_common.sh@143 -- # export SPDK_TEST_USE_IGB_UIO
00:25:10.493     17:09:03	-- common/autotest_common.sh@144 -- # : 0
00:25:10.493     17:09:03	-- common/autotest_common.sh@145 -- # export SPDK_TEST_SCHEDULER
00:25:10.493     17:09:03	-- common/autotest_common.sh@146 -- # : 0
00:25:10.493     17:09:03	-- common/autotest_common.sh@147 -- # export SPDK_TEST_SCANBUILD
00:25:10.493     17:09:03	-- common/autotest_common.sh@148 -- # :
00:25:10.493     17:09:03	-- common/autotest_common.sh@149 -- # export SPDK_TEST_NVMF_NICS
00:25:10.493     17:09:03	-- common/autotest_common.sh@150 -- # : 0
00:25:10.493     17:09:03	-- common/autotest_common.sh@151 -- # export SPDK_TEST_SMA
00:25:10.493     17:09:03	-- common/autotest_common.sh@152 -- # : 0
00:25:10.494     17:09:03	-- common/autotest_common.sh@153 -- # export SPDK_TEST_DAOS
00:25:10.494     17:09:03	-- common/autotest_common.sh@154 -- # : 0
00:25:10.494     17:09:03	-- common/autotest_common.sh@155 -- # export SPDK_TEST_XNVME
00:25:10.494     17:09:03	-- common/autotest_common.sh@156 -- # : 0
00:25:10.494     17:09:03	-- common/autotest_common.sh@157 -- # export SPDK_TEST_ACCEL_DSA
00:25:10.494     17:09:03	-- common/autotest_common.sh@158 -- # : 0
00:25:10.494     17:09:03	-- common/autotest_common.sh@159 -- # export SPDK_TEST_ACCEL_IAA
00:25:10.494     17:09:03	-- common/autotest_common.sh@160 -- # : 0
00:25:10.494     17:09:03	-- common/autotest_common.sh@161 -- # export SPDK_TEST_ACCEL_IOAT
00:25:10.494     17:09:03	-- common/autotest_common.sh@163 -- # :
00:25:10.494     17:09:03	-- common/autotest_common.sh@164 -- # export SPDK_TEST_FUZZER_TARGET
00:25:10.494     17:09:03	-- common/autotest_common.sh@165 -- # : 0
00:25:10.494     17:09:03	-- common/autotest_common.sh@166 -- # export SPDK_TEST_NVMF_MDNS
00:25:10.494     17:09:03	-- common/autotest_common.sh@167 -- # : 0
00:25:10.494     17:09:03	-- common/autotest_common.sh@168 -- # export SPDK_JSONRPC_GO_CLIENT
00:25:10.494     17:09:03	-- common/autotest_common.sh@171 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib
00:25:10.494     17:09:03	-- common/autotest_common.sh@171 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib
00:25:10.494     17:09:03	-- common/autotest_common.sh@172 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib
00:25:10.494     17:09:03	-- common/autotest_common.sh@172 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib
00:25:10.494     17:09:03	-- common/autotest_common.sh@173 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib
00:25:10.494     17:09:03	-- common/autotest_common.sh@173 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib
00:25:10.494     17:09:03	-- common/autotest_common.sh@174 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib
00:25:10.494     17:09:03	-- common/autotest_common.sh@174 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib
00:25:10.494     17:09:03	-- common/autotest_common.sh@177 -- # export PCI_BLOCK_SYNC_ON_RESET=yes
00:25:10.494     17:09:03	-- common/autotest_common.sh@177 -- # PCI_BLOCK_SYNC_ON_RESET=yes
00:25:10.494     17:09:03	-- common/autotest_common.sh@181 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python
00:25:10.494     17:09:03	-- common/autotest_common.sh@181 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python
00:25:10.494     17:09:03	-- common/autotest_common.sh@185 -- # export PYTHONDONTWRITEBYTECODE=1
00:25:10.494     17:09:03	-- common/autotest_common.sh@185 -- # PYTHONDONTWRITEBYTECODE=1
00:25:10.494     17:09:03	-- common/autotest_common.sh@189 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0
00:25:10.494     17:09:03	-- common/autotest_common.sh@189 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0
00:25:10.494     17:09:03	-- common/autotest_common.sh@190 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134
00:25:10.494     17:09:03	-- common/autotest_common.sh@190 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134
00:25:10.494     17:09:03	-- common/autotest_common.sh@194 -- # asan_suppression_file=/var/tmp/asan_suppression_file
00:25:10.494     17:09:03	-- common/autotest_common.sh@195 -- # rm -rf /var/tmp/asan_suppression_file
00:25:10.494     17:09:03	-- common/autotest_common.sh@196 -- # cat
00:25:10.494     17:09:03	-- common/autotest_common.sh@222 -- # echo leak:libfuse3.so
00:25:10.494     17:09:03	-- common/autotest_common.sh@224 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file
00:25:10.494     17:09:03	-- common/autotest_common.sh@224 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file
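The block above builds a LeakSanitizer suppression file on the fly: the old file is removed, a leak:libfuse3.so rule is written, and LSAN_OPTIONS is pointed at it so known leaks inside libfuse3 do not fail ASAN-instrumented runs. A minimal standalone sketch of the same pattern, assuming only the path and rule seen in this log:

    # Rebuild the suppression file so stale rules from earlier runs don't linger.
    suppression_file=/var/tmp/asan_suppression_file
    rm -f "$suppression_file"
    echo "leak:libfuse3.so" > "$suppression_file"
    # LeakSanitizer (bundled with ASan) picks suppressions up from LSAN_OPTIONS.
    export LSAN_OPTIONS="suppressions=$suppression_file"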
00:25:10.494     17:09:03	-- common/autotest_common.sh@226 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock
00:25:10.494     17:09:03	-- common/autotest_common.sh@226 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock
00:25:10.494     17:09:03	-- common/autotest_common.sh@228 -- # '[' -z /var/spdk/dependencies ']'
00:25:10.494     17:09:03	-- common/autotest_common.sh@231 -- # export DEPENDENCY_DIR
00:25:10.494     17:09:03	-- common/autotest_common.sh@235 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin
00:25:10.494     17:09:03	-- common/autotest_common.sh@235 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin
00:25:10.494     17:09:03	-- common/autotest_common.sh@236 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples
00:25:10.494     17:09:03	-- common/autotest_common.sh@236 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples
00:25:10.494     17:09:03	-- common/autotest_common.sh@239 -- # export QEMU_BIN=
00:25:10.494     17:09:03	-- common/autotest_common.sh@239 -- # QEMU_BIN=
00:25:10.494     17:09:03	-- common/autotest_common.sh@240 -- # export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64'
00:25:10.494     17:09:03	-- common/autotest_common.sh@240 -- # VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64'
00:25:10.494     17:09:03	-- common/autotest_common.sh@242 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer
00:25:10.494     17:09:03	-- common/autotest_common.sh@242 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer
00:25:10.494     17:09:03	-- common/autotest_common.sh@245 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:25:10.494     17:09:03	-- common/autotest_common.sh@245 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes
00:25:10.494     17:09:03	-- common/autotest_common.sh@247 -- # _LCOV_MAIN=0
00:25:10.494     17:09:03	-- common/autotest_common.sh@248 -- # _LCOV_LLVM=1
00:25:10.494     17:09:03	-- common/autotest_common.sh@249 -- # _LCOV=
00:25:10.494     17:09:03	-- common/autotest_common.sh@250 -- # [[ '' == *clang* ]]
00:25:10.494     17:09:03	-- common/autotest_common.sh@250 -- # [[ 0 -eq 1 ]]
00:25:10.494     17:09:03	-- common/autotest_common.sh@252 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh'
00:25:10.494     17:09:03	-- common/autotest_common.sh@253 -- # _lcov_opt[_LCOV_MAIN]=
00:25:10.494     17:09:03	-- common/autotest_common.sh@255 -- # lcov_opt=
00:25:10.494     17:09:03	-- common/autotest_common.sh@258 -- # '[' 0 -eq 0 ']'
00:25:10.494     17:09:03	-- common/autotest_common.sh@259 -- # export valgrind=
00:25:10.494     17:09:03	-- common/autotest_common.sh@259 -- # valgrind=
00:25:10.494      17:09:03	-- common/autotest_common.sh@265 -- # uname -s
00:25:10.494     17:09:03	-- common/autotest_common.sh@265 -- # '[' Linux = Linux ']'
00:25:10.494     17:09:03	-- common/autotest_common.sh@266 -- # HUGEMEM=4096
00:25:10.494     17:09:03	-- common/autotest_common.sh@267 -- # export CLEAR_HUGE=yes
00:25:10.494     17:09:03	-- common/autotest_common.sh@267 -- # CLEAR_HUGE=yes
00:25:10.494     17:09:03	-- common/autotest_common.sh@268 -- # [[ 0 -eq 1 ]]
00:25:10.494     17:09:03	-- common/autotest_common.sh@268 -- # [[ 0 -eq 1 ]]
00:25:10.494     17:09:03	-- common/autotest_common.sh@275 -- # MAKE=make
00:25:10.494     17:09:03	-- common/autotest_common.sh@276 -- # MAKEFLAGS=-j10
00:25:10.494     17:09:03	-- common/autotest_common.sh@292 -- # export HUGEMEM=4096
00:25:10.494     17:09:03	-- common/autotest_common.sh@292 -- # HUGEMEM=4096
00:25:10.494     17:09:03	-- common/autotest_common.sh@294 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']'
00:25:10.494     17:09:03	-- common/autotest_common.sh@299 -- # NO_HUGE=()
00:25:10.494     17:09:03	-- common/autotest_common.sh@300 -- # TEST_MODE=
00:25:10.494     17:09:03	-- common/autotest_common.sh@319 -- # [[ -z 143291 ]]
00:25:10.494     17:09:03	-- common/autotest_common.sh@319 -- # kill -0 143291
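kill -0 143291 above sends no signal at all; it only asks the kernel whether the PID exists and is signalable, confirming the autotest process is still alive before test storage is carved out. The same liveness probe in isolation (the PID is the one from this log; substitute your own):

    pid=143291
    if kill -0 "$pid" 2>/dev/null; then
        echo "process $pid is alive"
    else
        echo "process $pid is gone, or not ours to signal"
    fi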
00:25:10.494     17:09:03	-- common/autotest_common.sh@1675 -- # set_test_storage 2147483648
00:25:10.494     17:09:03	-- common/autotest_common.sh@329 -- # [[ -v testdir ]]
00:25:10.494     17:09:03	-- common/autotest_common.sh@331 -- # local requested_size=2147483648
00:25:10.494     17:09:03	-- common/autotest_common.sh@332 -- # local mount target_dir
00:25:10.494     17:09:03	-- common/autotest_common.sh@334 -- # local -A mounts fss sizes avails uses
00:25:10.494     17:09:03	-- common/autotest_common.sh@335 -- # local source fs size avail mount use
00:25:10.494     17:09:03	-- common/autotest_common.sh@337 -- # local storage_fallback storage_candidates
00:25:10.494      17:09:03	-- common/autotest_common.sh@339 -- # mktemp -udt spdk.XXXXXX
00:25:10.753     17:09:03	-- common/autotest_common.sh@339 -- # storage_fallback=/tmp/spdk.UWPa1d
00:25:10.753     17:09:03	-- common/autotest_common.sh@344 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback")
00:25:10.753     17:09:03	-- common/autotest_common.sh@346 -- # [[ -n '' ]]
00:25:10.753     17:09:03	-- common/autotest_common.sh@351 -- # [[ -n '' ]]
00:25:10.753     17:09:03	-- common/autotest_common.sh@356 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/interrupt /tmp/spdk.UWPa1d/tests/interrupt /tmp/spdk.UWPa1d
00:25:10.753     17:09:03	-- common/autotest_common.sh@359 -- # requested_size=2214592512
00:25:10.753     17:09:03	-- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount
00:25:10.753      17:09:03	-- common/autotest_common.sh@328 -- # df -T
00:25:10.753      17:09:03	-- common/autotest_common.sh@328 -- # grep -v Filesystem
00:25:10.753     17:09:03	-- common/autotest_common.sh@362 -- # mounts["$mount"]=tmpfs
00:25:10.753     17:09:03	-- common/autotest_common.sh@362 -- # fss["$mount"]=tmpfs
00:25:10.753     17:09:03	-- common/autotest_common.sh@363 -- # avails["$mount"]=1248956416
00:25:10.753     17:09:03	-- common/autotest_common.sh@363 -- # sizes["$mount"]=1253683200
00:25:10.753     17:09:03	-- common/autotest_common.sh@364 -- # uses["$mount"]=4726784
00:25:10.753     17:09:03	-- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount
00:25:10.753     17:09:03	-- common/autotest_common.sh@362 -- # mounts["$mount"]=/dev/vda1
00:25:10.753     17:09:03	-- common/autotest_common.sh@362 -- # fss["$mount"]=ext4
00:25:10.753     17:09:03	-- common/autotest_common.sh@363 -- # avails["$mount"]=9433808896
00:25:10.753     17:09:03	-- common/autotest_common.sh@363 -- # sizes["$mount"]=20616794112
00:25:10.753     17:09:03	-- common/autotest_common.sh@364 -- # uses["$mount"]=11166208000
00:25:10.753     17:09:03	-- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount
00:25:10.753     17:09:03	-- common/autotest_common.sh@362 -- # mounts["$mount"]=tmpfs
00:25:10.753     17:09:03	-- common/autotest_common.sh@362 -- # fss["$mount"]=tmpfs
00:25:10.753     17:09:03	-- common/autotest_common.sh@363 -- # avails["$mount"]=6267142144
00:25:10.753     17:09:03	-- common/autotest_common.sh@363 -- # sizes["$mount"]=6268399616
00:25:10.753     17:09:03	-- common/autotest_common.sh@364 -- # uses["$mount"]=1257472
00:25:10.753     17:09:03	-- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount
00:25:10.753     17:09:03	-- common/autotest_common.sh@362 -- # mounts["$mount"]=tmpfs
00:25:10.753     17:09:03	-- common/autotest_common.sh@362 -- # fss["$mount"]=tmpfs
00:25:10.753     17:09:03	-- common/autotest_common.sh@363 -- # avails["$mount"]=5242880
00:25:10.753     17:09:03	-- common/autotest_common.sh@363 -- # sizes["$mount"]=5242880
00:25:10.753     17:09:03	-- common/autotest_common.sh@364 -- # uses["$mount"]=0
00:25:10.753     17:09:03	-- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount
00:25:10.753     17:09:03	-- common/autotest_common.sh@362 -- # mounts["$mount"]=/dev/vda15
00:25:10.753     17:09:03	-- common/autotest_common.sh@362 -- # fss["$mount"]=vfat
00:25:10.753     17:09:03	-- common/autotest_common.sh@363 -- # avails["$mount"]=103061504
00:25:10.753     17:09:03	-- common/autotest_common.sh@363 -- # sizes["$mount"]=109395968
00:25:10.753     17:09:03	-- common/autotest_common.sh@364 -- # uses["$mount"]=6334464
00:25:10.753     17:09:03	-- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount
00:25:10.753     17:09:03	-- common/autotest_common.sh@362 -- # mounts["$mount"]=tmpfs
00:25:10.753     17:09:03	-- common/autotest_common.sh@362 -- # fss["$mount"]=tmpfs
00:25:10.753     17:09:03	-- common/autotest_common.sh@363 -- # avails["$mount"]=1253675008
00:25:10.753     17:09:03	-- common/autotest_common.sh@363 -- # sizes["$mount"]=1253679104
00:25:10.753     17:09:03	-- common/autotest_common.sh@364 -- # uses["$mount"]=4096
00:25:10.753     17:09:03	-- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount
00:25:10.753     17:09:03	-- common/autotest_common.sh@362 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/ubuntu22-vg-autotest/ubuntu2204-libvirt/output
00:25:10.753     17:09:03	-- common/autotest_common.sh@362 -- # fss["$mount"]=fuse.sshfs
00:25:10.753     17:09:03	-- common/autotest_common.sh@363 -- # avails["$mount"]=92756692992
00:25:10.753     17:09:03	-- common/autotest_common.sh@363 -- # sizes["$mount"]=105088212992
00:25:10.753     17:09:03	-- common/autotest_common.sh@364 -- # uses["$mount"]=6946086912
00:25:10.753     17:09:03	-- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount
00:25:10.753     17:09:03	-- common/autotest_common.sh@367 -- # printf '* Looking for test storage...\n'
00:25:10.753  * Looking for test storage...
00:25:10.753     17:09:03	-- common/autotest_common.sh@369 -- # local target_space new_size
00:25:10.753     17:09:03	-- common/autotest_common.sh@370 -- # for target_dir in "${storage_candidates[@]}"
00:25:10.753      17:09:03	-- common/autotest_common.sh@373 -- # df /home/vagrant/spdk_repo/spdk/test/interrupt
00:25:10.753      17:09:03	-- common/autotest_common.sh@373 -- # awk '$1 !~ /Filesystem/{print $6}'
00:25:10.753     17:09:03	-- common/autotest_common.sh@373 -- # mount=/
00:25:10.753     17:09:03	-- common/autotest_common.sh@375 -- # target_space=9433808896
00:25:10.753     17:09:03	-- common/autotest_common.sh@376 -- # (( target_space == 0 || target_space < requested_size ))
00:25:10.753     17:09:03	-- common/autotest_common.sh@379 -- # (( target_space >= requested_size ))
00:25:10.753     17:09:03	-- common/autotest_common.sh@381 -- # [[ ext4 == tmpfs ]]
00:25:10.753     17:09:03	-- common/autotest_common.sh@381 -- # [[ ext4 == ramfs ]]
00:25:10.753     17:09:03	-- common/autotest_common.sh@381 -- # [[ / == / ]]
00:25:10.753     17:09:03	-- common/autotest_common.sh@382 -- # new_size=13380800512
00:25:10.753     17:09:03	-- common/autotest_common.sh@383 -- # (( new_size * 100 / sizes[/] > 95 ))
00:25:10.753     17:09:03	-- common/autotest_common.sh@388 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/interrupt
00:25:10.753     17:09:03	-- common/autotest_common.sh@388 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/interrupt
00:25:10.753     17:09:03	-- common/autotest_common.sh@389 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/interrupt
00:25:10.753  * Found test storage at /home/vagrant/spdk_repo/spdk/test/interrupt
00:25:10.753     17:09:03	-- common/autotest_common.sh@390 -- # return 0
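set_test_storage above reduces to: read df -T output into associative arrays keyed by mount point, walk the candidate directories, and accept the first one whose backing mount has at least the requested free space (2 GiB plus per-test overhead here, with / providing roughly 9.4 GB). A simplified sketch of that selection; the candidate list and the use of df -Pk are illustrative, not the harness's exact code:

    requested_size=$((2 * 1024 * 1024 * 1024))   # 2 GiB, as requested above
    for dir in "$PWD" /tmp; do                   # candidate dirs are illustrative
        # df -Pk: column 4 is the available space in KiB on the backing mount.
        avail_kb=$(df -Pk "$dir" | awk 'NR==2 {print $4}')
        if (( avail_kb * 1024 >= requested_size )); then
            export SPDK_TEST_STORAGE="$dir"
            printf '* Found test storage at %s\n' "$dir"
            break
        fi
    done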
00:25:10.753     17:09:03	-- common/autotest_common.sh@1677 -- # set -o errtrace
00:25:10.753     17:09:03	-- common/autotest_common.sh@1678 -- # shopt -s extdebug
00:25:10.753     17:09:03	-- common/autotest_common.sh@1679 -- # trap 'trap - ERR; print_backtrace >&2' ERR
00:25:10.753     17:09:03	-- common/autotest_common.sh@1681 -- # PS4=' \t	-- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ '
00:25:10.753     17:09:03	-- common/autotest_common.sh@1682 -- # true
00:25:10.754     17:09:03	-- common/autotest_common.sh@1684 -- # xtrace_fd
00:25:10.754     17:09:03	-- common/autotest_common.sh@25 -- # [[ -n 13 ]]
00:25:10.754     17:09:03	-- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]]
00:25:10.754     17:09:03	-- common/autotest_common.sh@27 -- # exec
00:25:10.754     17:09:03	-- common/autotest_common.sh@29 -- # exec
00:25:10.754     17:09:03	-- common/autotest_common.sh@31 -- # xtrace_restore
00:25:10.754     17:09:03	-- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]'
00:25:10.754     17:09:03	-- common/autotest_common.sh@17 -- # (( 0 == 0 ))
00:25:10.754     17:09:03	-- common/autotest_common.sh@18 -- # set -x
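xtrace_fd/xtrace_restore above park bash's set -x stream on a dedicated descriptor (fd 13, checked via /proc/self/fd/13) so trace output can be toggled and redirected without touching stdout or stderr. The underlying bash mechanism is BASH_XTRACEFD; a minimal sketch with an illustrative fd number and log path:

    exec 13> /tmp/xtrace.log    # open a private fd for trace output
    BASH_XTRACEFD=13            # bash now writes set -x lines to fd 13
    set -x
    echo "this command is traced into /tmp/xtrace.log"
    set +x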
00:25:10.754     17:09:03	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:25:10.754      17:09:03	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:25:10.754      17:09:03	-- common/autotest_common.sh@1690 -- # lcov --version
00:25:10.754     17:09:03	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:25:10.754     17:09:03	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:25:10.754     17:09:03	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:25:10.754     17:09:03	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:25:10.754     17:09:03	-- scripts/common.sh@335 -- # IFS=.-:
00:25:10.754     17:09:03	-- scripts/common.sh@335 -- # read -ra ver1
00:25:10.754     17:09:03	-- scripts/common.sh@336 -- # IFS=.-:
00:25:10.754     17:09:03	-- scripts/common.sh@336 -- # read -ra ver2
00:25:10.754     17:09:03	-- scripts/common.sh@337 -- # local 'op=<'
00:25:10.754     17:09:03	-- scripts/common.sh@339 -- # ver1_l=2
00:25:10.754     17:09:03	-- scripts/common.sh@340 -- # ver2_l=1
00:25:10.754     17:09:03	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:25:10.754     17:09:03	-- scripts/common.sh@343 -- # case "$op" in
00:25:10.754     17:09:03	-- scripts/common.sh@344 -- # : 1
00:25:10.754     17:09:03	-- scripts/common.sh@363 -- # (( v = 0 ))
00:25:10.754     17:09:03	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:25:10.754      17:09:03	-- scripts/common.sh@364 -- # decimal 1
00:25:10.754      17:09:03	-- scripts/common.sh@352 -- # local d=1
00:25:10.754      17:09:03	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:25:10.754      17:09:03	-- scripts/common.sh@354 -- # echo 1
00:25:10.754     17:09:03	-- scripts/common.sh@364 -- # ver1[v]=1
00:25:10.754      17:09:03	-- scripts/common.sh@365 -- # decimal 2
00:25:10.754      17:09:03	-- scripts/common.sh@352 -- # local d=2
00:25:10.754      17:09:03	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:25:10.754      17:09:03	-- scripts/common.sh@354 -- # echo 2
00:25:10.754     17:09:03	-- scripts/common.sh@365 -- # ver2[v]=2
00:25:10.754     17:09:03	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:25:10.754     17:09:03	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:25:10.754     17:09:03	-- scripts/common.sh@367 -- # return 0
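cmp_versions above splits both version strings on ".", "-" and ":" (the IFS=.-: reads), then compares component by component numerically; here it concluded lcov 1.15 < 2, so the extra --rc coverage options get enabled below. A compact sketch of the same less-than test:

    version_lt() {
        local IFS=.-:
        local -a a b
        read -ra a <<< "$1"
        read -ra b <<< "$2"
        local i max=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < max; i++ )); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # strictly smaller
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1   # strictly larger
        done
        return 1   # equal is not less-than
    }
    version_lt 1.15 2 && echo "1.15 < 2"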
00:25:10.754     17:09:03	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:25:10.754     17:09:03	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:25:10.754  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:25:10.754  		--rc genhtml_branch_coverage=1
00:25:10.754  		--rc genhtml_function_coverage=1
00:25:10.754  		--rc genhtml_legend=1
00:25:10.754  		--rc geninfo_all_blocks=1
00:25:10.754  		--rc geninfo_unexecuted_blocks=1
00:25:10.754  		
00:25:10.754  		'
00:25:10.754     17:09:03	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:25:10.754  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:25:10.754  		--rc genhtml_branch_coverage=1
00:25:10.754  		--rc genhtml_function_coverage=1
00:25:10.754  		--rc genhtml_legend=1
00:25:10.754  		--rc geninfo_all_blocks=1
00:25:10.754  		--rc geninfo_unexecuted_blocks=1
00:25:10.754  		
00:25:10.754  		'
00:25:10.754     17:09:03	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:25:10.754  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:25:10.754  		--rc genhtml_branch_coverage=1
00:25:10.754  		--rc genhtml_function_coverage=1
00:25:10.754  		--rc genhtml_legend=1
00:25:10.754  		--rc geninfo_all_blocks=1
00:25:10.754  		--rc geninfo_unexecuted_blocks=1
00:25:10.754  		
00:25:10.754  		'
00:25:10.754     17:09:03	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:25:10.754  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:25:10.754  		--rc genhtml_branch_coverage=1
00:25:10.754  		--rc genhtml_function_coverage=1
00:25:10.754  		--rc genhtml_legend=1
00:25:10.754  		--rc geninfo_all_blocks=1
00:25:10.754  		--rc geninfo_unexecuted_blocks=1
00:25:10.754  		
00:25:10.754  		'
00:25:10.754    17:09:03	-- interrupt/interrupt_common.sh@9 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:25:10.754    17:09:03	-- interrupt/interrupt_common.sh@11 -- # r0_mask=0x1
00:25:10.754    17:09:03	-- interrupt/interrupt_common.sh@12 -- # r1_mask=0x2
00:25:10.754    17:09:03	-- interrupt/interrupt_common.sh@13 -- # r2_mask=0x4
00:25:10.754    17:09:03	-- interrupt/interrupt_common.sh@15 -- # cpu_server_mask=0x07
00:25:10.754    17:09:03	-- interrupt/interrupt_common.sh@16 -- # rpc_server_addr=/var/tmp/spdk.sock
00:25:10.754   17:09:03	-- interrupt/reap_unregistered_poller.sh@14 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/examples/interrupt_tgt
00:25:10.754   17:09:03	-- interrupt/reap_unregistered_poller.sh@14 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/examples/interrupt_tgt
00:25:10.754   17:09:03	-- interrupt/reap_unregistered_poller.sh@17 -- # start_intr_tgt
00:25:10.754   17:09:03	-- interrupt/interrupt_common.sh@23 -- # local rpc_addr=/var/tmp/spdk.sock
00:25:10.754   17:09:03	-- interrupt/interrupt_common.sh@24 -- # local cpu_mask=0x07
00:25:10.754   17:09:03	-- interrupt/interrupt_common.sh@27 -- # intr_tgt_pid=143348
00:25:10.754   17:09:03	-- interrupt/interrupt_common.sh@28 -- # trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT
00:25:10.754   17:09:03	-- interrupt/interrupt_common.sh@29 -- # waitforlisten 143348 /var/tmp/spdk.sock
00:25:10.754   17:09:03	-- common/autotest_common.sh@829 -- # '[' -z 143348 ']'
00:25:10.754   17:09:03	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:25:10.754  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:25:10.754   17:09:03	-- common/autotest_common.sh@834 -- # local max_retries=100
00:25:10.754   17:09:03	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:25:10.754   17:09:03	-- common/autotest_common.sh@838 -- # xtrace_disable
00:25:10.754   17:09:03	-- common/autotest_common.sh@10 -- # set +x
00:25:10.754   17:09:03	-- interrupt/interrupt_common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/examples/interrupt_tgt -m 0x07 -r /var/tmp/spdk.sock -E -g
00:25:10.754  [2024-11-19 17:09:03.536625] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:25:10.754  [2024-11-19 17:09:03.536889] [ DPDK EAL parameters: interrupt_tgt --no-shconf -c 0x07 --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143348 ]
00:25:11.013  [2024-11-19 17:09:03.706227] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3
00:25:11.013  [2024-11-19 17:09:03.763199] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:25:11.013  [2024-11-19 17:09:03.763338] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:25:11.013  [2024-11-19 17:09:03.763339] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:25:11.013  [2024-11-19 17:09:03.834885] thread.c:2087:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode.
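start_intr_tgt above launches the interrupt_tgt example on a three-core mask (0x07), then waitforlisten blocks until the RPC server answers on /var/tmp/spdk.sock; the reactor notices confirm all three cores came up in interrupt mode. A stripped-down sketch of that wait loop (retry count and interval are illustrative, and the real helper also re-checks that the PID is still alive):

    rpc_sock=/var/tmp/spdk.sock
    for (( i = 0; i < 100; i++ )); do
        [[ -S $rpc_sock ]] && break    # socket appearing ~= server listening
        sleep 0.1
    done
    [[ -S $rpc_sock ]] || { echo "target never came up" >&2; exit 1; }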
00:25:11.947   17:09:04	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:25:11.947   17:09:04	-- common/autotest_common.sh@862 -- # return 0
00:25:11.947    17:09:04	-- interrupt/reap_unregistered_poller.sh@20 -- # rpc_cmd thread_get_pollers
00:25:11.947    17:09:04	-- common/autotest_common.sh@561 -- # xtrace_disable
00:25:11.947    17:09:04	-- interrupt/reap_unregistered_poller.sh@20 -- # jq -r '.threads[0]'
00:25:11.947    17:09:04	-- common/autotest_common.sh@10 -- # set +x
00:25:11.947    17:09:04	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:11.947   17:09:04	-- interrupt/reap_unregistered_poller.sh@20 -- # app_thread='{
00:25:11.947    "name": "app_thread",
00:25:11.947    "id": 1,
00:25:11.947    "active_pollers": [],
00:25:11.947    "timed_pollers": [
00:25:11.947      {
00:25:11.947        "name": "rpc_subsystem_poll",
00:25:11.947        "id": 1,
00:25:11.947        "state": "waiting",
00:25:11.947        "run_count": 0,
00:25:11.947        "busy_count": 0,
00:25:11.947        "period_ticks": 8400000
00:25:11.947      }
00:25:11.947    ],
00:25:11.947    "paused_pollers": []
00:25:11.947  }'
00:25:11.947    17:09:04	-- interrupt/reap_unregistered_poller.sh@21 -- # jq -r '.active_pollers[].name'
00:25:11.947   17:09:04	-- interrupt/reap_unregistered_poller.sh@21 -- # native_pollers=
00:25:11.947   17:09:04	-- interrupt/reap_unregistered_poller.sh@22 -- # native_pollers+=' '
00:25:11.947    17:09:04	-- interrupt/reap_unregistered_poller.sh@23 -- # jq -r '.timed_pollers[].name'
00:25:11.947   17:09:04	-- interrupt/reap_unregistered_poller.sh@23 -- # native_pollers+=rpc_subsystem_poll
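The test snapshots the app thread's pollers over JSON-RPC and pulls the names out with jq; active_pollers is empty, so native_pollers ends up holding just " rpc_subsystem_poll". The same query can be issued by hand with the rpc.py used throughout this log:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # .threads[0] is app_thread in this single-app setup; list its timed pollers.
    "$rpc" thread_get_pollers | jq -r '.threads[0].timed_pollers[].name'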
00:25:11.947   17:09:04	-- interrupt/reap_unregistered_poller.sh@28 -- # setup_bdev_aio
00:25:11.947    17:09:04	-- interrupt/interrupt_common.sh@98 -- # uname -s
00:25:11.947   17:09:04	-- interrupt/interrupt_common.sh@98 -- # [[ Linux != \F\r\e\e\B\S\D ]]
00:25:11.947   17:09:04	-- interrupt/interrupt_common.sh@99 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/interrupt/aiofile bs=2048 count=5000
00:25:11.947  5000+0 records in
00:25:11.947  5000+0 records out
00:25:11.947  10240000 bytes (10 MB, 9.8 MiB) copied, 0.0361023 s, 284 MB/s
00:25:11.947   17:09:04	-- interrupt/interrupt_common.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile AIO0 2048
00:25:12.205  AIO0
00:25:12.205   17:09:04	-- interrupt/reap_unregistered_poller.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine
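setup_bdev_aio above backs an AIO bdev with a plain 10 MB file: dd zero-fills it, bdev_aio_create registers it with a 2048-byte block size (the RPC prints the new bdev's name, AIO0), and bdev_wait_for_examine lets bdev examine callbacks finish before the test proceeds. Outside the harness the same three steps are:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    aiofile=/home/vagrant/spdk_repo/spdk/test/interrupt/aiofile
    dd if=/dev/zero of="$aiofile" bs=2048 count=5000   # 10,240,000-byte file
    "$rpc" bdev_aio_create "$aiofile" AIO0 2048        # prints: AIO0
    "$rpc" bdev_wait_for_examine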
00:25:12.463   17:09:05	-- interrupt/reap_unregistered_poller.sh@34 -- # sleep 0.1
00:25:12.740    17:09:05	-- interrupt/reap_unregistered_poller.sh@37 -- # rpc_cmd thread_get_pollers
00:25:12.740    17:09:05	-- interrupt/reap_unregistered_poller.sh@37 -- # jq -r '.threads[0]'
00:25:12.740    17:09:05	-- common/autotest_common.sh@561 -- # xtrace_disable
00:25:12.740    17:09:05	-- common/autotest_common.sh@10 -- # set +x
00:25:12.740    17:09:05	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:12.740   17:09:05	-- interrupt/reap_unregistered_poller.sh@37 -- # app_thread='{
00:25:12.740    "name": "app_thread",
00:25:12.740    "id": 1,
00:25:12.740    "active_pollers": [],
00:25:12.740    "timed_pollers": [
00:25:12.740      {
00:25:12.740        "name": "rpc_subsystem_poll",
00:25:12.740        "id": 1,
00:25:12.740        "state": "waiting",
00:25:12.740        "run_count": 0,
00:25:12.740        "busy_count": 0,
00:25:12.740        "period_ticks": 8400000
00:25:12.740      }
00:25:12.740    ],
00:25:12.740    "paused_pollers": []
00:25:12.740  }'
00:25:12.740    17:09:05	-- interrupt/reap_unregistered_poller.sh@38 -- # jq -r '.active_pollers[].name'
00:25:12.740   17:09:05	-- interrupt/reap_unregistered_poller.sh@38 -- # remaining_pollers=
00:25:12.740   17:09:05	-- interrupt/reap_unregistered_poller.sh@39 -- # remaining_pollers+=' '
00:25:12.740    17:09:05	-- interrupt/reap_unregistered_poller.sh@40 -- # jq -r '.timed_pollers[].name'
00:25:12.740   17:09:05	-- interrupt/reap_unregistered_poller.sh@40 -- # remaining_pollers+=rpc_subsystem_poll
00:25:12.740   17:09:05	-- interrupt/reap_unregistered_poller.sh@44 -- # [[  rpc_subsystem_poll == \ \r\p\c\_\s\u\b\s\y\s\t\e\m\_\p\o\l\l ]]
00:25:12.740   17:09:05	-- interrupt/reap_unregistered_poller.sh@46 -- # trap - SIGINT SIGTERM EXIT
00:25:12.740   17:09:05	-- interrupt/reap_unregistered_poller.sh@47 -- # killprocess 143348
00:25:12.740   17:09:05	-- common/autotest_common.sh@936 -- # '[' -z 143348 ']'
00:25:12.740   17:09:05	-- common/autotest_common.sh@940 -- # kill -0 143348
00:25:12.740    17:09:05	-- common/autotest_common.sh@941 -- # uname
00:25:12.740   17:09:05	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:25:12.740    17:09:05	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 143348
00:25:12.740   17:09:05	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:25:12.740   17:09:05	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:25:12.740   17:09:05	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 143348'
00:25:12.740  killing process with pid 143348
00:25:12.740   17:09:05	-- common/autotest_common.sh@955 -- # kill 143348
00:25:12.740   17:09:05	-- common/autotest_common.sh@960 -- # wait 143348
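killprocess above double-checks that the PID still belongs to the reactor it started (ps --no-headers -o comm=), refuses to kill anything running as sudo, then kills and waits so the exit status is reaped. A condensed sketch, with the caveat that wait only reaps children of the current shell (which holds here, since the harness started the target itself):

    killprocess() {
        local pid=$1
        kill -0 "$pid" 2>/dev/null || return 0   # already gone
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null                  # collect the exit status
    }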
00:25:13.308   17:09:05	-- interrupt/reap_unregistered_poller.sh@48 -- # cleanup
00:25:13.308   17:09:05	-- interrupt/interrupt_common.sh@19 -- # rm -f /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile
00:25:13.308  ************************************
00:25:13.308  END TEST reap_unregistered_poller
00:25:13.308  ************************************
00:25:13.308  
00:25:13.308  real	0m2.838s
00:25:13.308  user	0m1.920s
00:25:13.308  sys	0m0.629s
00:25:13.308   17:09:05	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:25:13.308   17:09:05	-- common/autotest_common.sh@10 -- # set +x
00:25:13.308    17:09:05	-- spdk/autotest.sh@191 -- # uname -s
00:25:13.308   17:09:05	-- spdk/autotest.sh@191 -- # [[ Linux == Linux ]]
00:25:13.308   17:09:05	-- spdk/autotest.sh@192 -- # [[ 1 -eq 1 ]]
00:25:13.308   17:09:05	-- spdk/autotest.sh@198 -- # [[ 0 -eq 0 ]]
00:25:13.308   17:09:05	-- spdk/autotest.sh@199 -- # run_test spdk_dd /home/vagrant/spdk_repo/spdk/test/dd/dd.sh
00:25:13.308   17:09:05	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:25:13.308   17:09:05	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:25:13.308   17:09:05	-- common/autotest_common.sh@10 -- # set +x
00:25:13.308  ************************************
00:25:13.308  START TEST spdk_dd
00:25:13.308  ************************************
00:25:13.308   17:09:05	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dd/dd.sh
00:25:13.308  * Looking for test storage...
00:25:13.308  * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd
00:25:13.308     17:09:06	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:25:13.308      17:09:06	-- common/autotest_common.sh@1690 -- # lcov --version
00:25:13.308      17:09:06	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:25:13.308     17:09:06	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:25:13.308     17:09:06	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:25:13.308     17:09:06	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:25:13.308     17:09:06	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:25:13.308     17:09:06	-- scripts/common.sh@335 -- # IFS=.-:
00:25:13.308     17:09:06	-- scripts/common.sh@335 -- # read -ra ver1
00:25:13.308     17:09:06	-- scripts/common.sh@336 -- # IFS=.-:
00:25:13.308     17:09:06	-- scripts/common.sh@336 -- # read -ra ver2
00:25:13.308     17:09:06	-- scripts/common.sh@337 -- # local 'op=<'
00:25:13.308     17:09:06	-- scripts/common.sh@339 -- # ver1_l=2
00:25:13.308     17:09:06	-- scripts/common.sh@340 -- # ver2_l=1
00:25:13.308     17:09:06	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:25:13.308     17:09:06	-- scripts/common.sh@343 -- # case "$op" in
00:25:13.308     17:09:06	-- scripts/common.sh@344 -- # : 1
00:25:13.308     17:09:06	-- scripts/common.sh@363 -- # (( v = 0 ))
00:25:13.308     17:09:06	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:25:13.308      17:09:06	-- scripts/common.sh@364 -- # decimal 1
00:25:13.308      17:09:06	-- scripts/common.sh@352 -- # local d=1
00:25:13.308      17:09:06	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:25:13.308      17:09:06	-- scripts/common.sh@354 -- # echo 1
00:25:13.308     17:09:06	-- scripts/common.sh@364 -- # ver1[v]=1
00:25:13.308      17:09:06	-- scripts/common.sh@365 -- # decimal 2
00:25:13.308      17:09:06	-- scripts/common.sh@352 -- # local d=2
00:25:13.308      17:09:06	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:25:13.308      17:09:06	-- scripts/common.sh@354 -- # echo 2
00:25:13.308     17:09:06	-- scripts/common.sh@365 -- # ver2[v]=2
00:25:13.308     17:09:06	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:25:13.308     17:09:06	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:25:13.308     17:09:06	-- scripts/common.sh@367 -- # return 0
00:25:13.308     17:09:06	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:25:13.308     17:09:06	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:25:13.308  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:25:13.308  		--rc genhtml_branch_coverage=1
00:25:13.308  		--rc genhtml_function_coverage=1
00:25:13.308  		--rc genhtml_legend=1
00:25:13.308  		--rc geninfo_all_blocks=1
00:25:13.308  		--rc geninfo_unexecuted_blocks=1
00:25:13.308  		
00:25:13.308  		'
00:25:13.308     17:09:06	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:25:13.309  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:25:13.309  		--rc genhtml_branch_coverage=1
00:25:13.309  		--rc genhtml_function_coverage=1
00:25:13.309  		--rc genhtml_legend=1
00:25:13.309  		--rc geninfo_all_blocks=1
00:25:13.309  		--rc geninfo_unexecuted_blocks=1
00:25:13.309  		
00:25:13.309  		'
00:25:13.309     17:09:06	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:25:13.309  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:25:13.309  		--rc genhtml_branch_coverage=1
00:25:13.309  		--rc genhtml_function_coverage=1
00:25:13.309  		--rc genhtml_legend=1
00:25:13.309  		--rc geninfo_all_blocks=1
00:25:13.309  		--rc geninfo_unexecuted_blocks=1
00:25:13.309  		
00:25:13.309  		'
00:25:13.309     17:09:06	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:25:13.309  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:25:13.309  		--rc genhtml_branch_coverage=1
00:25:13.309  		--rc genhtml_function_coverage=1
00:25:13.309  		--rc genhtml_legend=1
00:25:13.309  		--rc geninfo_all_blocks=1
00:25:13.309  		--rc geninfo_unexecuted_blocks=1
00:25:13.309  		
00:25:13.309  		'
00:25:13.309    17:09:06	-- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:25:13.309     17:09:06	-- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]]
00:25:13.309     17:09:06	-- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:25:13.309     17:09:06	-- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:25:13.309      17:09:06	-- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:25:13.309      17:09:06	-- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:25:13.309      17:09:06	-- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:25:13.309      17:09:06	-- paths/export.sh@5 -- # export PATH
00:25:13.309      17:09:06	-- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:25:13.309   17:09:06	-- dd/dd.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:25:13.878  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev
00:25:13.878  0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
00:25:14.815   17:09:07	-- dd/dd.sh@11 -- # nvmes=($(nvme_in_userspace))
00:25:14.815    17:09:07	-- dd/dd.sh@11 -- # nvme_in_userspace
00:25:14.815    17:09:07	-- scripts/common.sh@311 -- # local bdf bdfs
00:25:14.815    17:09:07	-- scripts/common.sh@312 -- # local nvmes
00:25:14.815    17:09:07	-- scripts/common.sh@314 -- # [[ -n '' ]]
00:25:14.815    17:09:07	-- scripts/common.sh@317 -- # nvmes=($(iter_pci_class_code 01 08 02))
00:25:14.815     17:09:07	-- scripts/common.sh@317 -- # iter_pci_class_code 01 08 02
00:25:14.815     17:09:07	-- scripts/common.sh@297 -- # local bdf=
00:25:14.815      17:09:07	-- scripts/common.sh@299 -- # iter_all_pci_class_code 01 08 02
00:25:14.815      17:09:07	-- scripts/common.sh@232 -- # local class
00:25:14.815      17:09:07	-- scripts/common.sh@233 -- # local subclass
00:25:14.815      17:09:07	-- scripts/common.sh@234 -- # local progif
00:25:14.815       17:09:07	-- scripts/common.sh@235 -- # printf %02x 1
00:25:14.815      17:09:07	-- scripts/common.sh@235 -- # class=01
00:25:14.815       17:09:07	-- scripts/common.sh@236 -- # printf %02x 8
00:25:14.815      17:09:07	-- scripts/common.sh@236 -- # subclass=08
00:25:14.815       17:09:07	-- scripts/common.sh@237 -- # printf %02x 2
00:25:14.815      17:09:07	-- scripts/common.sh@237 -- # progif=02
00:25:14.815      17:09:07	-- scripts/common.sh@239 -- # hash lspci
00:25:14.815      17:09:07	-- scripts/common.sh@240 -- # '[' 02 '!=' 00 ']'
00:25:14.815      17:09:07	-- scripts/common.sh@242 -- # grep -i -- -p02
00:25:14.815      17:09:07	-- scripts/common.sh@241 -- # lspci -mm -n -D
00:25:14.815      17:09:07	-- scripts/common.sh@243 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}'
00:25:14.815      17:09:07	-- scripts/common.sh@244 -- # tr -d '"'
00:25:14.815     17:09:07	-- scripts/common.sh@299 -- # for bdf in $(iter_all_pci_class_code "$@")
00:25:14.815     17:09:07	-- scripts/common.sh@300 -- # pci_can_use 0000:00:06.0
00:25:14.815     17:09:07	-- scripts/common.sh@15 -- # local i
00:25:14.815     17:09:07	-- scripts/common.sh@18 -- # [[    =~  0000:00:06.0  ]]
00:25:14.815     17:09:07	-- scripts/common.sh@22 -- # [[ -z '' ]]
00:25:14.815     17:09:07	-- scripts/common.sh@24 -- # return 0
00:25:14.815     17:09:07	-- scripts/common.sh@301 -- # echo 0000:00:06.0
00:25:14.815    17:09:07	-- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}"
00:25:14.815    17:09:07	-- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:06.0 ]]
00:25:14.815     17:09:07	-- scripts/common.sh@322 -- # uname -s
00:25:14.815    17:09:07	-- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]]
00:25:14.815    17:09:07	-- scripts/common.sh@325 -- # bdfs+=("$bdf")
00:25:14.815    17:09:07	-- scripts/common.sh@327 -- # (( 1 ))
00:25:14.815    17:09:07	-- scripts/common.sh@328 -- # printf '%s\n' 0000:00:06.0
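nvme_in_userspace above resolves to PCI functions whose class/subclass/progif is 01/08/02 (an NVM Express controller), skips addresses that are blocked or hold mounted filesystems, and here yields 0000:00:06.0. The core lspci filter, assembled from the commands traced above:

    # lspci -mm -n -D prints: <BDF> "<class>" "<vendor>" "<device>" ... -p<progif>
    lspci -mm -n -D | grep -i -- -p02 | \
        awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' | tr -d '"'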
00:25:14.815   17:09:07	-- dd/dd.sh@13 -- # check_liburing
00:25:14.815   17:09:07	-- dd/common.sh@139 -- # local lib so
00:25:14.815   17:09:07	-- dd/common.sh@140 -- # local -g liburing_in_use=0
00:25:14.815   17:09:07	-- dd/common.sh@142 -- # read -r lib _ so _
00:25:14.815    17:09:07	-- dd/common.sh@137 -- # LD_TRACE_LOADED_OBJECTS=1
00:25:14.815    17:09:07	-- dd/common.sh@137 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:25:14.815   17:09:07	-- dd/common.sh@143 -- # [[ linux-vdso.so.1 == liburing.so.* ]]
00:25:14.815   17:09:07	-- dd/common.sh@142 -- # read -r lib _ so _
00:25:14.815   17:09:07	-- dd/common.sh@143 -- # [[ libasan.so.6 == liburing.so.* ]]
00:25:14.815   17:09:07	-- dd/common.sh@142 -- # read -r lib _ so _
00:25:14.815   17:09:07	-- dd/common.sh@143 -- # [[ libnuma.so.1 == liburing.so.* ]]
00:25:14.815   17:09:07	-- dd/common.sh@142 -- # read -r lib _ so _
00:25:14.815   17:09:07	-- dd/common.sh@143 -- # [[ libibverbs.so.1 == liburing.so.* ]]
00:25:14.815   17:09:07	-- dd/common.sh@142 -- # read -r lib _ so _
00:25:14.815   17:09:07	-- dd/common.sh@143 -- # [[ librdmacm.so.1 == liburing.so.* ]]
00:25:14.815   17:09:07	-- dd/common.sh@142 -- # read -r lib _ so _
00:25:14.815   17:09:07	-- dd/common.sh@143 -- # [[ libuuid.so.1 == liburing.so.* ]]
00:25:14.815   17:09:07	-- dd/common.sh@142 -- # read -r lib _ so _
00:25:14.815   17:09:07	-- dd/common.sh@143 -- # [[ libssl.so.3 == liburing.so.* ]]
00:25:14.815   17:09:07	-- dd/common.sh@142 -- # read -r lib _ so _
00:25:14.815   17:09:07	-- dd/common.sh@143 -- # [[ libcrypto.so.3 == liburing.so.* ]]
00:25:14.815   17:09:07	-- dd/common.sh@142 -- # read -r lib _ so _
00:25:14.815   17:09:07	-- dd/common.sh@143 -- # [[ libm.so.6 == liburing.so.* ]]
00:25:14.815   17:09:07	-- dd/common.sh@142 -- # read -r lib _ so _
00:25:14.815   17:09:07	-- dd/common.sh@143 -- # [[ libfuse3.so.3 == liburing.so.* ]]
00:25:14.815   17:09:07	-- dd/common.sh@142 -- # read -r lib _ so _
00:25:14.815   17:09:07	-- dd/common.sh@143 -- # [[ libaio.so.1 == liburing.so.* ]]
00:25:14.815   17:09:07	-- dd/common.sh@142 -- # read -r lib _ so _
00:25:14.815   17:09:07	-- dd/common.sh@143 -- # [[ libiscsi.so.7 == liburing.so.* ]]
00:25:14.815   17:09:07	-- dd/common.sh@142 -- # read -r lib _ so _
00:25:14.815   17:09:07	-- dd/common.sh@143 -- # [[ libubsan.so.1 == liburing.so.* ]]
00:25:14.815   17:09:07	-- dd/common.sh@142 -- # read -r lib _ so _
00:25:14.815   17:09:07	-- dd/common.sh@143 -- # [[ libc.so.6 == liburing.so.* ]]
00:25:14.815   17:09:07	-- dd/common.sh@142 -- # read -r lib _ so _
00:25:14.815   17:09:07	-- dd/common.sh@143 -- # [[ libgcc_s.so.1 == liburing.so.* ]]
00:25:14.815   17:09:07	-- dd/common.sh@142 -- # read -r lib _ so _
00:25:14.815   17:09:07	-- dd/common.sh@143 -- # [[ /lib64/ld-linux-x86-64.so.2 == liburing.so.* ]]
00:25:14.815   17:09:07	-- dd/common.sh@142 -- # read -r lib _ so _
00:25:14.815   17:09:07	-- dd/common.sh@143 -- # [[ libnl-route-3.so.200 == liburing.so.* ]]
00:25:14.815   17:09:07	-- dd/common.sh@142 -- # read -r lib _ so _
00:25:14.815   17:09:07	-- dd/common.sh@143 -- # [[ libnl-3.so.200 == liburing.so.* ]]
00:25:14.815   17:09:07	-- dd/common.sh@142 -- # read -r lib _ so _
00:25:14.815   17:09:07	-- dd/common.sh@143 -- # [[ libstdc++.so.6 == liburing.so.* ]]
00:25:14.815   17:09:07	-- dd/common.sh@142 -- # read -r lib _ so _
00:25:14.815   17:09:07	-- dd/dd.sh@15 -- # (( liburing_in_use == 0 && SPDK_TEST_URING == 1 ))
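check_liburing above runs the spdk_dd binary with LD_TRACE_LOADED_OBJECTS=1, which makes the dynamic loader print the resolved shared objects and exit instead of executing the program; each "lib => path" line is matched against liburing.so.*, nothing matched, and with SPDK_TEST_URING=0 the uring branch is skipped either way. The detection idiom in isolation:

    binary=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
    liburing_in_use=0
    # ld.so prints lines like "libaio.so.1 => /lib/x86_64-linux-gnu/libaio.so.1 (0x...)".
    while read -r lib _ so _; do
        [[ $lib == liburing.so.* ]] && liburing_in_use=1
    done < <(LD_TRACE_LOADED_OBJECTS=1 "$binary")
    echo "liburing_in_use=$liburing_in_use"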
00:25:14.815   17:09:07	-- dd/dd.sh@20 -- # run_test spdk_dd_basic_rw /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:06.0
00:25:14.815   17:09:07	-- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']'
00:25:14.815   17:09:07	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:25:14.815   17:09:07	-- common/autotest_common.sh@10 -- # set +x
00:25:14.815  ************************************
00:25:14.815  START TEST spdk_dd_basic_rw
00:25:14.815  ************************************
00:25:14.815   17:09:07	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:06.0
00:25:14.815  * Looking for test storage...
00:25:14.815  * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd
00:25:14.815     17:09:07	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:25:14.816      17:09:07	-- common/autotest_common.sh@1690 -- # lcov --version
00:25:14.816      17:09:07	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:25:15.074     17:09:07	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:25:15.074     17:09:07	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:25:15.074     17:09:07	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:25:15.074     17:09:07	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:25:15.074     17:09:07	-- scripts/common.sh@335 -- # IFS=.-:
00:25:15.074     17:09:07	-- scripts/common.sh@335 -- # read -ra ver1
00:25:15.075     17:09:07	-- scripts/common.sh@336 -- # IFS=.-:
00:25:15.075     17:09:07	-- scripts/common.sh@336 -- # read -ra ver2
00:25:15.075     17:09:07	-- scripts/common.sh@337 -- # local 'op=<'
00:25:15.075     17:09:07	-- scripts/common.sh@339 -- # ver1_l=2
00:25:15.075     17:09:07	-- scripts/common.sh@340 -- # ver2_l=1
00:25:15.075     17:09:07	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:25:15.075     17:09:07	-- scripts/common.sh@343 -- # case "$op" in
00:25:15.075     17:09:07	-- scripts/common.sh@344 -- # : 1
00:25:15.075     17:09:07	-- scripts/common.sh@363 -- # (( v = 0 ))
00:25:15.075     17:09:07	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:25:15.075      17:09:07	-- scripts/common.sh@364 -- # decimal 1
00:25:15.075      17:09:07	-- scripts/common.sh@352 -- # local d=1
00:25:15.075      17:09:07	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:25:15.075      17:09:07	-- scripts/common.sh@354 -- # echo 1
00:25:15.075     17:09:07	-- scripts/common.sh@364 -- # ver1[v]=1
00:25:15.075      17:09:07	-- scripts/common.sh@365 -- # decimal 2
00:25:15.075      17:09:07	-- scripts/common.sh@352 -- # local d=2
00:25:15.075      17:09:07	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:25:15.075      17:09:07	-- scripts/common.sh@354 -- # echo 2
00:25:15.075     17:09:07	-- scripts/common.sh@365 -- # ver2[v]=2
00:25:15.075     17:09:07	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:25:15.075     17:09:07	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:25:15.075     17:09:07	-- scripts/common.sh@367 -- # return 0
00:25:15.075     17:09:07	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:25:15.075     17:09:07	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:25:15.075  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:25:15.075  		--rc genhtml_branch_coverage=1
00:25:15.075  		--rc genhtml_function_coverage=1
00:25:15.075  		--rc genhtml_legend=1
00:25:15.075  		--rc geninfo_all_blocks=1
00:25:15.075  		--rc geninfo_unexecuted_blocks=1
00:25:15.075  		
00:25:15.075  		'
00:25:15.075     17:09:07	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:25:15.075  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:25:15.075  		--rc genhtml_branch_coverage=1
00:25:15.075  		--rc genhtml_function_coverage=1
00:25:15.075  		--rc genhtml_legend=1
00:25:15.075  		--rc geninfo_all_blocks=1
00:25:15.075  		--rc geninfo_unexecuted_blocks=1
00:25:15.075  		
00:25:15.075  		'
00:25:15.075     17:09:07	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:25:15.075  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:25:15.075  		--rc genhtml_branch_coverage=1
00:25:15.075  		--rc genhtml_function_coverage=1
00:25:15.075  		--rc genhtml_legend=1
00:25:15.075  		--rc geninfo_all_blocks=1
00:25:15.075  		--rc geninfo_unexecuted_blocks=1
00:25:15.075  		
00:25:15.075  		'
00:25:15.075     17:09:07	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:25:15.075  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:25:15.075  		--rc genhtml_branch_coverage=1
00:25:15.075  		--rc genhtml_function_coverage=1
00:25:15.075  		--rc genhtml_legend=1
00:25:15.075  		--rc geninfo_all_blocks=1
00:25:15.075  		--rc geninfo_unexecuted_blocks=1
00:25:15.075  		
00:25:15.075  		'
00:25:15.075    17:09:07	-- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:25:15.075     17:09:07	-- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]]
00:25:15.075     17:09:07	-- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:25:15.075     17:09:07	-- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:25:15.075      17:09:07	-- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:25:15.075      17:09:07	-- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:25:15.075      17:09:07	-- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:25:15.075      17:09:07	-- paths/export.sh@5 -- # export PATH
00:25:15.075      17:09:07	-- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:25:15.075   17:09:07	-- dd/basic_rw.sh@80 -- # trap cleanup EXIT
00:25:15.075   17:09:07	-- dd/basic_rw.sh@82 -- # nvmes=("$@")
00:25:15.075   17:09:07	-- dd/basic_rw.sh@83 -- # nvme0=Nvme0
00:25:15.075   17:09:07	-- dd/basic_rw.sh@83 -- # nvme0_pci=0000:00:06.0
00:25:15.075   17:09:07	-- dd/basic_rw.sh@83 -- # bdev0=Nvme0n1
00:25:15.075   17:09:07	-- dd/basic_rw.sh@85 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:06.0' ['trtype']='pcie')
00:25:15.075   17:09:07	-- dd/basic_rw.sh@85 -- # declare -A method_bdev_nvme_attach_controller_0
00:25:15.075   17:09:07	-- dd/basic_rw.sh@91 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
00:25:15.075   17:09:07	-- dd/basic_rw.sh@92 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
00:25:15.075    17:09:07	-- dd/basic_rw.sh@93 -- # get_native_nvme_bs 0000:00:06.0
00:25:15.075    17:09:07	-- dd/common.sh@124 -- # local pci=0000:00:06.0 lbaf id
00:25:15.075    17:09:07	-- dd/common.sh@126 -- # mapfile -t id
00:25:15.075     17:09:07	-- dd/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:06.0'
00:25:15.336    17:09:08	-- dd/common.sh@129 -- # [[ =====================================================
00:25:15.336  NVMe Controller at 0000:00:06.0 [1b36:0010]
00:25:15.336  =====================================================
00:25:15.336  Controller Capabilities/Features
00:25:15.336  ================================
00:25:15.336  Vendor ID:                             1b36
00:25:15.336  Subsystem Vendor ID:                   1af4
00:25:15.336  Serial Number:                         12340
00:25:15.336  Model Number:                          QEMU NVMe Ctrl
00:25:15.336  Firmware Version:                      8.0.0
00:25:15.336  Recommended Arb Burst:                 6
00:25:15.336  IEEE OUI Identifier:                   00 54 52
00:25:15.336  Multi-path I/O
00:25:15.336    May have multiple subsystem ports:   No
00:25:15.336    May have multiple controllers:       No
00:25:15.336    Associated with SR-IOV VF:           No
00:25:15.336  Max Data Transfer Size:                524288
00:25:15.336  Max Number of Namespaces:              256
00:25:15.336  Max Number of I/O Queues:              64
00:25:15.336  NVMe Specification Version (VS):       1.4
00:25:15.336  NVMe Specification Version (Identify): 1.4
00:25:15.336  Maximum Queue Entries:                 2048
00:25:15.336  Contiguous Queues Required:            Yes
00:25:15.336  Arbitration Mechanisms Supported
00:25:15.336    Weighted Round Robin:                Not Supported
00:25:15.336    Vendor Specific:                     Not Supported
00:25:15.336  Reset Timeout:                         7500 ms
00:25:15.336  Doorbell Stride:                       4 bytes
00:25:15.336  NVM Subsystem Reset:                   Not Supported
00:25:15.336  Command Sets Supported
00:25:15.336    NVM Command Set:                     Supported
00:25:15.336  Boot Partition:                        Not Supported
00:25:15.336  Memory Page Size Minimum:              4096 bytes
00:25:15.336  Memory Page Size Maximum:              65536 bytes
00:25:15.336  Persistent Memory Region:              Not Supported
00:25:15.336  Optional Asynchronous Events Supported
00:25:15.336    Namespace Attribute Notices:         Supported
00:25:15.336    Firmware Activation Notices:         Not Supported
00:25:15.336    ANA Change Notices:                  Not Supported
00:25:15.336    PLE Aggregate Log Change Notices:    Not Supported
00:25:15.336    LBA Status Info Alert Notices:       Not Supported
00:25:15.336    EGE Aggregate Log Change Notices:    Not Supported
00:25:15.336    Normal NVM Subsystem Shutdown event: Not Supported
00:25:15.336    Zone Descriptor Change Notices:      Not Supported
00:25:15.336    Discovery Log Change Notices:        Not Supported
00:25:15.336  Controller Attributes
00:25:15.336    128-bit Host Identifier:             Not Supported
00:25:15.336    Non-Operational Permissive Mode:     Not Supported
00:25:15.336    NVM Sets:                            Not Supported
00:25:15.336    Read Recovery Levels:                Not Supported
00:25:15.336    Endurance Groups:                    Not Supported
00:25:15.336    Predictable Latency Mode:            Not Supported
00:25:15.336    Traffic Based Keep ALive:            Not Supported
00:25:15.336    Namespace Granularity:               Not Supported
00:25:15.336    SQ Associations:                     Not Supported
00:25:15.336    UUID List:                           Not Supported
00:25:15.336    Multi-Domain Subsystem:              Not Supported
00:25:15.336    Fixed Capacity Management:           Not Supported
00:25:15.336    Variable Capacity Management:        Not Supported
00:25:15.336    Delete Endurance Group:              Not Supported
00:25:15.336    Delete NVM Set:                      Not Supported
00:25:15.336    Extended LBA Formats Supported:      Supported
00:25:15.336    Flexible Data Placement Supported:   Not Supported
00:25:15.336  
00:25:15.336  Controller Memory Buffer Support
00:25:15.336  ================================
00:25:15.336  Supported:                             No
00:25:15.336  
00:25:15.336  Persistent Memory Region Support
00:25:15.336  ================================
00:25:15.336  Supported:                             No
00:25:15.336  
00:25:15.336  Admin Command Set Attributes
00:25:15.336  ============================
00:25:15.336  Security Send/Receive:                 Not Supported
00:25:15.336  Format NVM:                            Supported
00:25:15.336  Firmware Activate/Download:            Not Supported
00:25:15.336  Namespace Management:                  Supported
00:25:15.336  Device Self-Test:                      Not Supported
00:25:15.336  Directives:                            Supported
00:25:15.336  NVMe-MI:                               Not Supported
00:25:15.336  Virtualization Management:             Not Supported
00:25:15.336  Doorbell Buffer Config:                Supported
00:25:15.336  Get LBA Status Capability:             Not Supported
00:25:15.336  Command & Feature Lockdown Capability: Not Supported
00:25:15.336  Abort Command Limit:                   4
00:25:15.336  Async Event Request Limit:             4
00:25:15.336  Number of Firmware Slots:              N/A
00:25:15.336  Firmware Slot 1 Read-Only:             N/A
00:25:15.336  Firmware Activation Without Reset:     N/A
00:25:15.336  Multiple Update Detection Support:     N/A
00:25:15.336  Firmware Update Granularity:           No Information Provided
00:25:15.336  Per-Namespace SMART Log:               Yes
00:25:15.336  Asymmetric Namespace Access Log Page:  Not Supported
00:25:15.336  Subsystem NQN:                         nqn.2019-08.org.qemu:12340
00:25:15.336  Command Effects Log Page:              Supported
00:25:15.336  Get Log Page Extended Data:            Supported
00:25:15.336  Telemetry Log Pages:                   Not Supported
00:25:15.336  Persistent Event Log Pages:            Not Supported
00:25:15.336  Supported Log Pages Log Page:          May Support
00:25:15.336  Commands Supported & Effects Log Page: Not Supported
00:25:15.336  Feature Identifiers & Effects Log Page:May Support
00:25:15.336  NVMe-MI Commands & Effects Log Page:   May Support
00:25:15.336  Data Area 4 for Telemetry Log:         Not Supported
00:25:15.336  Error Log Page Entries Supported:      1
00:25:15.336  Keep Alive:                            Not Supported
00:25:15.336  
00:25:15.336  NVM Command Set Attributes
00:25:15.336  ==========================
00:25:15.336  Submission Queue Entry Size
00:25:15.336    Max:                       64
00:25:15.336    Min:                       64
00:25:15.336  Completion Queue Entry Size
00:25:15.336    Max:                       16
00:25:15.336    Min:                       16
00:25:15.336  Number of Namespaces:        256
00:25:15.336  Compare Command:             Supported
00:25:15.336  Write Uncorrectable Command: Not Supported
00:25:15.336  Dataset Management Command:  Supported
00:25:15.336  Write Zeroes Command:        Supported
00:25:15.336  Set Features Save Field:     Supported
00:25:15.336  Reservations:                Not Supported
00:25:15.336  Timestamp:                   Supported
00:25:15.336  Copy:                        Supported
00:25:15.336  Volatile Write Cache:        Present
00:25:15.336  Atomic Write Unit (Normal):  1
00:25:15.336  Atomic Write Unit (PFail):   1
00:25:15.336  Atomic Compare & Write Unit: 1
00:25:15.336  Fused Compare & Write:       Not Supported
00:25:15.336  Scatter-Gather List
00:25:15.336    SGL Command Set:           Supported
00:25:15.336    SGL Keyed:                 Not Supported
00:25:15.336    SGL Bit Bucket Descriptor: Not Supported
00:25:15.336    SGL Metadata Pointer:      Not Supported
00:25:15.336    Oversized SGL:             Not Supported
00:25:15.336    SGL Metadata Address:      Not Supported
00:25:15.336    SGL Offset:                Not Supported
00:25:15.336    Transport SGL Data Block:  Not Supported
00:25:15.336  Replay Protected Memory Block:  Not Supported
00:25:15.336  
00:25:15.336  Firmware Slot Information
00:25:15.336  =========================
00:25:15.336  Active slot:                 1
00:25:15.336  Slot 1 Firmware Revision:    1.0
00:25:15.336  
00:25:15.336  Commands Supported and Effects
00:25:15.336  ==============================
00:25:15.336  Admin Commands
00:25:15.336  --------------
00:25:15.336  Delete I/O Submission Queue (00h): Supported
00:25:15.336  Create I/O Submission Queue (01h): Supported
00:25:15.336  Get Log Page (02h): Supported
00:25:15.336  Delete I/O Completion Queue (04h): Supported
00:25:15.336  Create I/O Completion Queue (05h): Supported
00:25:15.336  Identify (06h): Supported
00:25:15.336  Abort (08h): Supported
00:25:15.336  Set Features (09h): Supported
00:25:15.336  Get Features (0Ah): Supported
00:25:15.336  Asynchronous Event Request (0Ch): Supported
00:25:15.336  Namespace Attachment (15h): Supported NS-Inventory-Change
00:25:15.336  Directive Send (19h): Supported
00:25:15.336  Directive Receive (1Ah): Supported
00:25:15.336  Virtualization Management (1Ch): Supported
00:25:15.336  Doorbell Buffer Config (7Ch): Supported
00:25:15.336  Format NVM (80h): Supported LBA-Change
00:25:15.336  I/O Commands
00:25:15.336  ------------
00:25:15.336  Flush (00h): Supported LBA-Change
00:25:15.336  Write (01h): Supported LBA-Change
00:25:15.336  Read (02h): Supported
00:25:15.336  Compare (05h): Supported
00:25:15.336  Write Zeroes (08h): Supported LBA-Change
00:25:15.336  Dataset Management (09h): Supported LBA-Change
00:25:15.336  Unknown (0Ch): Supported
00:25:15.336  Unknown (12h): Supported
00:25:15.336  Copy (19h): Supported LBA-Change
00:25:15.336  Unknown (1Dh): Supported LBA-Change
00:25:15.336  
00:25:15.336  Error Log
00:25:15.336  =========
00:25:15.336  
00:25:15.336  Arbitration
00:25:15.336  ===========
00:25:15.336  Arbitration Burst:           no limit
00:25:15.336  
00:25:15.336  Power Management
00:25:15.336  ================
00:25:15.336  Number of Power States:          1
00:25:15.336  Current Power State:             Power State #0
00:25:15.336  Power State #0:
00:25:15.336    Max Power:                     25.00 W
00:25:15.336    Non-Operational State:         Operational
00:25:15.336    Entry Latency:                 16 microseconds
00:25:15.336    Exit Latency:                  4 microseconds
00:25:15.336    Relative Read Throughput:      0
00:25:15.336    Relative Read Latency:         0
00:25:15.336    Relative Write Throughput:     0
00:25:15.336    Relative Write Latency:        0
00:25:15.336    Idle Power:                     Not Reported
00:25:15.336    Active Power:                   Not Reported
00:25:15.336  Non-Operational Permissive Mode: Not Supported
00:25:15.336  
00:25:15.336  Health Information
00:25:15.336  ==================
00:25:15.336  Critical Warnings:
00:25:15.336    Available Spare Space:     OK
00:25:15.336    Temperature:               OK
00:25:15.336    Device Reliability:        OK
00:25:15.336    Read Only:                 No
00:25:15.336    Volatile Memory Backup:    OK
00:25:15.336  Current Temperature:         323 Kelvin (50 Celsius)
00:25:15.336  Temperature Threshold:       343 Kelvin (70 Celsius)
00:25:15.336  Available Spare:             0%
00:25:15.336  Available Spare Threshold:   0%
00:25:15.336  Life Percentage Used:        0%
00:25:15.336  Data Units Read:             97
00:25:15.336  Data Units Written:          7
00:25:15.336  Host Read Commands:          2101
00:25:15.336  Host Write Commands:         110
00:25:15.336  Controller Busy Time:        0 minutes
00:25:15.336  Power Cycles:                0
00:25:15.336  Power On Hours:              0 hours
00:25:15.336  Unsafe Shutdowns:            0
00:25:15.336  Unrecoverable Media Errors:  0
00:25:15.336  Lifetime Error Log Entries:  0
00:25:15.336  Warning Temperature Time:    0 minutes
00:25:15.336  Critical Temperature Time:   0 minutes
00:25:15.336  
00:25:15.336  Number of Queues
00:25:15.336  ================
00:25:15.336  Number of I/O Submission Queues:      64
00:25:15.336  Number of I/O Completion Queues:      64
00:25:15.336  
00:25:15.336  ZNS Specific Controller Data
00:25:15.336  ============================
00:25:15.336  Zone Append Size Limit:      0
00:25:15.336  
00:25:15.336  Active Namespaces
00:25:15.336  =================
00:25:15.336  Namespace ID:1
00:25:15.336  Error Recovery Timeout:                Unlimited
00:25:15.336  Command Set Identifier:                NVM (00h)
00:25:15.336  Deallocate:                            Supported
00:25:15.336  Deallocated/Unwritten Error:           Supported
00:25:15.336  Deallocated Read Value:                All 0x00
00:25:15.336  Deallocate in Write Zeroes:            Not Supported
00:25:15.336  Deallocated Guard Field:               0xFFFF
00:25:15.336  Flush:                                 Supported
00:25:15.336  Reservation:                           Not Supported
00:25:15.336  Namespace Sharing Capabilities:        Private
00:25:15.336  Size (in LBAs):                        1310720 (5GiB)
00:25:15.336  Capacity (in LBAs):                    1310720 (5GiB)
00:25:15.336  Utilization (in LBAs):                 1310720 (5GiB)
00:25:15.336  Thin Provisioning:                     Not Supported
00:25:15.336  Per-NS Atomic Units:                   No
00:25:15.336  Maximum Single Source Range Length:    128
00:25:15.336  Maximum Copy Length:                   128
00:25:15.336  Maximum Source Range Count:            128
00:25:15.336  NGUID/EUI64 Never Reused:              No
00:25:15.336  Namespace Write Protected:             No
00:25:15.336  Number of LBA Formats:                 8
00:25:15.336  Current LBA Format:                    LBA Format #04
00:25:15.336  LBA Format #00: Data Size:   512  Metadata Size:     0
00:25:15.336  LBA Format #01: Data Size:   512  Metadata Size:     8
00:25:15.336  LBA Format #02: Data 
Size:   512  Metadata Size:    16 LBA Format #03: Data Size:   512  Metadata Size:    64 LBA Format #04: Data Size:  4096  Metadata Size:     0 LBA Format #05: Data Size:  4096  Metadata Size:     8 LBA Format #06: Data Size:  4096  Metadata Size:    16 LBA Format #07: Data Size:  4096  Metadata Size:    64  =~ Current LBA Format: *LBA Format #([0-9]+) ]]
00:25:15.336    17:09:08	-- dd/common.sh@130 -- # lbaf=04
00:25:15.336    17:09:08	-- dd/common.sh@131 -- # [[ ===================================================== NVMe Controller at 0000:00:06.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID:                             1b36 Subsystem Vendor ID:                   1af4 Serial Number:                         12340 Model Number:                          QEMU NVMe Ctrl Firmware Version:                      8.0.0 Recommended Arb Burst:                 6 IEEE OUI Identifier:                   00 54 52 Multi-path I/O   May have multiple subsystem ports:   No   May have multiple controllers:       No   Associated with SR-IOV VF:           No Max Data Transfer Size:                524288 Max Number of Namespaces:              256 Max Number of I/O Queues:              64 NVMe Specification Version (VS):       1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries:                 2048 Contiguous Queues Required:            Yes Arbitration Mechanisms Supported   Weighted Round Robin:                Not Supported   Vendor Specific:                     Not Supported Reset Timeout:                         7500 ms Doorbell Stride:                       4 bytes NVM Subsystem Reset:                   Not Supported Command Sets Supported   NVM Command Set:                     Supported Boot Partition:                        Not Supported Memory Page Size Minimum:              4096 bytes Memory Page Size Maximum:              65536 bytes Persistent Memory Region:              Not Supported Optional Asynchronous Events Supported   Namespace Attribute Notices:         Supported   Firmware Activation Notices:         Not Supported   ANA Change Notices:                  Not Supported   PLE Aggregate Log Change Notices:    Not Supported   LBA Status Info Alert Notices:       Not Supported   EGE Aggregate Log Change Notices:    Not Supported   Normal NVM Subsystem Shutdown event: Not Supported   Zone Descriptor Change Notices:      Not Supported   Discovery Log Change Notices:        Not Supported Controller Attributes   128-bit Host Identifier:             Not Supported   Non-Operational Permissive Mode:     Not Supported   NVM Sets:                            Not Supported   Read Recovery Levels:                Not Supported   Endurance Groups:                    Not Supported   Predictable Latency Mode:            Not Supported   Traffic Based Keep ALive:            Not Supported   Namespace Granularity:               Not Supported   SQ Associations:                     Not Supported   UUID List:                           Not Supported   Multi-Domain Subsystem:              Not Supported   Fixed Capacity Management:           Not Supported   Variable Capacity Management:        Not Supported   Delete Endurance Group:              Not Supported   Delete NVM Set:                      Not Supported   Extended LBA Formats Supported:      Supported   Flexible Data Placement Supported:   Not Supported  Controller Memory Buffer Support ================================ Supported:                             No  Persistent Memory Region Support ================================ Supported:                             No  Admin Command Set Attributes ============================ Security Send/Receive:                 Not Supported Format NVM:                            Supported Firmware Activate/Download:            Not Supported Namespace Management:                  Supported Device Self-Test:                      
Not Supported Directives:                            Supported NVMe-MI:                               Not Supported Virtualization Management:             Not Supported Doorbell Buffer Config:                Supported Get LBA Status Capability:             Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit:                   4 Async Event Request Limit:             4 Number of Firmware Slots:              N/A Firmware Slot 1 Read-Only:             N/A Firmware Activation Without Reset:     N/A Multiple Update Detection Support:     N/A Firmware Update Granularity:           No Information Provided Per-Namespace SMART Log:               Yes Asymmetric Namespace Access Log Page:  Not Supported Subsystem NQN:                         nqn.2019-08.org.qemu:12340 Command Effects Log Page:              Supported Get Log Page Extended Data:            Supported Telemetry Log Pages:                   Not Supported Persistent Event Log Pages:            Not Supported Supported Log Pages Log Page:          May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page:   May Support Data Area 4 for Telemetry Log:         Not Supported Error Log Page Entries Supported:      1 Keep Alive:                            Not Supported  NVM Command Set Attributes ========================== Submission Queue Entry Size   Max:                       64   Min:                       64 Completion Queue Entry Size   Max:                       16   Min:                       16 Number of Namespaces:        256 Compare Command:             Supported Write Uncorrectable Command: Not Supported Dataset Management Command:  Supported Write Zeroes Command:        Supported Set Features Save Field:     Supported Reservations:                Not Supported Timestamp:                   Supported Copy:                        Supported Volatile Write Cache:        Present Atomic Write Unit (Normal):  1 Atomic Write Unit (PFail):   1 Atomic Compare & Write Unit: 1 Fused Compare & Write:       Not Supported Scatter-Gather List   SGL Command Set:           Supported   SGL Keyed:                 Not Supported   SGL Bit Bucket Descriptor: Not Supported   SGL Metadata Pointer:      Not Supported   Oversized SGL:             Not Supported   SGL Metadata Address:      Not Supported   SGL Offset:                Not Supported   Transport SGL Data Block:  Not Supported Replay Protected Memory Block:  Not Supported  Firmware Slot Information ========================= Active slot:                 1 Slot 1 Firmware Revision:    1.0   Commands Supported and Effects ============================== Admin Commands --------------    Delete I/O Submission Queue (00h): Supported     Create I/O Submission Queue (01h): Supported                    Get Log Page (02h): Supported     Delete I/O Completion Queue (04h): Supported     Create I/O Completion Queue (05h): Supported                        Identify (06h): Supported                           Abort (08h): Supported                    Set Features (09h): Supported                    Get Features (0Ah): Supported      Asynchronous Event Request (0Ch): Supported            Namespace Attachment (15h): Supported NS-Inventory-Change                  Directive Send (19h): Supported               Directive Receive (1Ah): Supported       Virtualization Management (1Ch): Supported          Doorbell Buffer Config (7Ch): Supported                      Format NVM (80h): Supported 
LBA-Change  I/O Commands ------------                          Flush (00h): Supported LBA-Change                           Write (01h): Supported LBA-Change                            Read (02h): Supported                         Compare (05h): Supported                    Write Zeroes (08h): Supported LBA-Change              Dataset Management (09h): Supported LBA-Change                         Unknown (0Ch): Supported                         Unknown (12h): Supported                            Copy (19h): Supported LBA-Change                         Unknown (1Dh): Supported LBA-Change   Error Log =========  Arbitration =========== Arbitration Burst:           no limit  Power Management ================ Number of Power States:          1 Current Power State:             Power State #0 Power State #0:   Max Power:                     25.00 W   Non-Operational State:         Operational   Entry Latency:                 16 microseconds   Exit Latency:                  4 microseconds   Relative Read Throughput:      0   Relative Read Latency:         0   Relative Write Throughput:     0   Relative Write Latency:        0   Idle Power:                     Not Reported   Active Power:                   Not Reported Non-Operational Permissive Mode: Not Supported  Health Information ================== Critical Warnings:   Available Spare Space:     OK   Temperature:               OK   Device Reliability:        OK   Read Only:                 No   Volatile Memory Backup:    OK Current Temperature:         323 Kelvin (50 Celsius) Temperature Threshold:       343 Kelvin (70 Celsius) Available Spare:             0% Available Spare Threshold:   0% Life Percentage Used:        0% Data Units Read:             97 Data Units Written:          7 Host Read Commands:          2101 Host Write Commands:         110 Controller Busy Time:        0 minutes Power Cycles:                0 Power On Hours:              0 hours Unsafe Shutdowns:            0 Unrecoverable Media Errors:  0 Lifetime Error Log Entries:  0 Warning Temperature Time:    0 minutes Critical Temperature Time:   0 minutes  Number of Queues ================ Number of I/O Submission Queues:      64 Number of I/O Completion Queues:      64  ZNS Specific Controller Data ============================ Zone Append Size Limit:      0   Active Namespaces ================= Namespace ID:1 Error Recovery Timeout:                Unlimited Command Set Identifier:                NVM (00h) Deallocate:                            Supported Deallocated/Unwritten Error:           Supported Deallocated Read Value:                All 0x00 Deallocate in Write Zeroes:            Not Supported Deallocated Guard Field:               0xFFFF Flush:                                 Supported Reservation:                           Not Supported Namespace Sharing Capabilities:        Private Size (in LBAs):                        1310720 (5GiB) Capacity (in LBAs):                    1310720 (5GiB) Utilization (in LBAs):                 1310720 (5GiB) Thin Provisioning:                     Not Supported Per-NS Atomic Units:                   No Maximum Single Source Range Length:    128 Maximum Copy Length:                   128 Maximum Source Range Count:            128 NGUID/EUI64 Never Reused:              No Namespace Write Protected:             No Number of LBA Formats:                 8 Current LBA Format:                    LBA Format #04 LBA Format #00: Data Size:   512  Metadata Size:     0 LBA Format #01: Data Size:   512  Metadata Size:     8 LBA Format #02: Data 
Size:   512  Metadata Size:    16 LBA Format #03: Data Size:   512  Metadata Size:    64 LBA Format #04: Data Size:  4096  Metadata Size:     0 LBA Format #05: Data Size:  4096  Metadata Size:     8 LBA Format #06: Data Size:  4096  Metadata Size:    16 LBA Format #07: Data Size:  4096  Metadata Size:    64  =~ LBA Format #04: Data Size: *([0-9]+) ]]
00:25:15.336    17:09:08	-- dd/common.sh@132 -- # lbaf=4096
00:25:15.336    17:09:08	-- dd/common.sh@134 -- # echo 4096
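[editor's note] The dd/common.sh@126-134 trace above derives the drive's native block size in two regex passes over the spdk_nvme_identify dump: first it captures which LBA format is current (#04 here), then the data size of that format (4096). A minimal standalone sketch of that logic; the helper name get_native_bs is hypothetical, only the binary path and the two regexes come from the trace:

    # Hypothetical helper reconstructing the dd/common.sh@126-134 steps above.
    get_native_bs() {
        local traddr=$1
        local id re lbaf
        id=$(/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r "trtype:pcie traddr:$traddr")
        re='Current LBA Format: *LBA Format #([0-9]+)'
        [[ $id =~ $re ]] || return 1        # no current-format line found
        lbaf=${BASH_REMATCH[1]}             # "04" in this run
        re="LBA Format #$lbaf: Data Size: *([0-9]+)"
        [[ $id =~ $re ]] || return 1
        echo "${BASH_REMATCH[1]}"           # 4096 here
    }

Usage matching this run: native_bs=$(get_native_bs 0000:00:06.0) yields 4096, which dd/basic_rw.sh@93 stores as native_bs below.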
00:25:15.336   17:09:08	-- dd/basic_rw.sh@93 -- # native_bs=4096
00:25:15.336   17:09:08	-- dd/basic_rw.sh@96 -- # run_test dd_bs_lt_native_bs NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61
00:25:15.336   17:09:08	-- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']'
00:25:15.336   17:09:08	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:25:15.336   17:09:08	-- common/autotest_common.sh@10 -- # set +x
00:25:15.336    17:09:08	-- dd/basic_rw.sh@96 -- # gen_conf
00:25:15.336    17:09:08	-- dd/basic_rw.sh@96 -- # :
00:25:15.336    17:09:08	-- dd/common.sh@31 -- # xtrace_disable
00:25:15.336    17:09:08	-- common/autotest_common.sh@10 -- # set +x
00:25:15.336  ************************************
00:25:15.336  START TEST dd_bs_lt_native_bs
00:25:15.336  ************************************
00:25:15.336   17:09:08	-- common/autotest_common.sh@1114 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61
00:25:15.336   17:09:08	-- common/autotest_common.sh@650 -- # local es=0
00:25:15.336   17:09:08	-- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61
00:25:15.336   17:09:08	-- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:25:15.336   17:09:08	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:25:15.336    17:09:08	-- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:25:15.336   17:09:08	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:25:15.336    17:09:08	-- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:25:15.336   17:09:08	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:25:15.337   17:09:08	-- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:25:15.337   17:09:08	-- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]]
00:25:15.337   17:09:08	-- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61
00:25:15.337  {
00:25:15.337    "subsystems": [
00:25:15.337      {
00:25:15.337        "subsystem": "bdev",
00:25:15.337        "config": [
00:25:15.337          {
00:25:15.337            "params": {
00:25:15.337              "trtype": "pcie",
00:25:15.337              "traddr": "0000:00:06.0",
00:25:15.337              "name": "Nvme0"
00:25:15.337            },
00:25:15.337            "method": "bdev_nvme_attach_controller"
00:25:15.337          },
00:25:15.337          {
00:25:15.337            "method": "bdev_wait_for_examine"
00:25:15.337          }
00:25:15.337        ]
00:25:15.337      }
00:25:15.337    ]
00:25:15.337  }
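[editor's note] The JSON block just printed is the bdev configuration that gen_conf emits and spdk_dd reads through --json /dev/fd/61: it attaches the PCIe controller at 0000:00:06.0 as bdev "Nvme0" (whose namespace appears as Nvme0n1) and then waits for bdev examination before any I/O. A hedged sketch of feeding that same config over a file descriptor with process substitution, which is exactly what produces the /dev/fd/NN paths seen in the commands; SPDK_DD is shorthand for the binary path in the trace:

    # Sketch only: pass an inline bdev config to spdk_dd without a temp file.
    SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
    "$SPDK_DD" --if=/dev/zero --ob=Nvme0n1 --bs=4096 --count=1 --json <(
    cat <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "params": { "trtype": "pcie", "traddr": "0000:00:06.0", "name": "Nvme0" },
              "method": "bdev_nvme_attach_controller"
            },
            { "method": "bdev_wait_for_examine" }
          ]
        }
      ]
    }
    EOF
    )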
00:25:15.337  [2024-11-19 17:09:08.118233] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:25:15.337  [2024-11-19 17:09:08.118464] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143679 ]
00:25:15.595  [2024-11-19 17:09:08.272465] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:15.595  [2024-11-19 17:09:08.320591] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:25:15.853  [2024-11-19 17:09:08.465598] spdk_dd.c:1145:dd_run: *ERROR*: --bs value cannot be less than input (1) neither output (4096) native block size
00:25:15.853  [2024-11-19 17:09:08.465743] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:25:15.853  [2024-11-19 17:09:08.580612] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy
00:25:16.112  ************************************
00:25:16.112  END TEST dd_bs_lt_native_bs
00:25:16.112  ************************************
00:25:16.112   17:09:08	-- common/autotest_common.sh@653 -- # es=234
00:25:16.112   17:09:08	-- common/autotest_common.sh@661 -- # (( es > 128 ))
00:25:16.112   17:09:08	-- common/autotest_common.sh@662 -- # es=106
00:25:16.112   17:09:08	-- common/autotest_common.sh@663 -- # case "$es" in
00:25:16.112   17:09:08	-- common/autotest_common.sh@670 -- # es=1
00:25:16.112   17:09:08	-- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:25:16.112  
00:25:16.112  real	0m0.689s
00:25:16.112  user	0m0.418s
00:25:16.112  sys	0m0.235s
00:25:16.112   17:09:08	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:25:16.112   17:09:08	-- common/autotest_common.sh@10 -- # set +x
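[editor's note] dd_bs_lt_native_bs is a negative test: --bs=2048 is below the 4096-byte native block size, so spdk_dd must refuse, and the NOT wrapper turns that failure into a pass. The es=234 -> es=106 -> es=1 sequence traced above is the wrapper normalizing the exit status; the subtraction and the inversion below are inferred from those logged values, so treat this as a sketch, not the real common/autotest_common.sh body:

    # Assumed reconstruction of the exit-status handling traced above.
    NOT() {
        local es=0
        "$@" || es=$?
        # Statuses above 128 usually mean "terminated by signal"; fold them
        # back (234 -> 106 in the trace, consistent with es - 128).
        (( es > 128 )) && es=$(( es - 128 ))
        case "$es" in
            0) ;;          # wrapped command succeeded: NOT must fail
            *) es=1 ;;     # any failure collapses to 1 (106 -> 1 above)
        esac
        # Succeed only if the command did NOT succeed.
        (( !es == 0 ))
    }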
00:25:16.112   17:09:08	-- dd/basic_rw.sh@103 -- # run_test dd_rw basic_rw 4096
00:25:16.112   17:09:08	-- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']'
00:25:16.112   17:09:08	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:25:16.112   17:09:08	-- common/autotest_common.sh@10 -- # set +x
00:25:16.112  ************************************
00:25:16.112  START TEST dd_rw
00:25:16.112  ************************************
00:25:16.112   17:09:08	-- common/autotest_common.sh@1114 -- # basic_rw 4096
00:25:16.112   17:09:08	-- dd/basic_rw.sh@11 -- # local native_bs=4096
00:25:16.112   17:09:08	-- dd/basic_rw.sh@12 -- # local count size
00:25:16.112   17:09:08	-- dd/basic_rw.sh@13 -- # local qds bss
00:25:16.112   17:09:08	-- dd/basic_rw.sh@15 -- # qds=(1 64)
00:25:16.112   17:09:08	-- dd/basic_rw.sh@17 -- # for bs in {0..2}
00:25:16.112   17:09:08	-- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs)))
00:25:16.112   17:09:08	-- dd/basic_rw.sh@17 -- # for bs in {0..2}
00:25:16.112   17:09:08	-- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs)))
00:25:16.112   17:09:08	-- dd/basic_rw.sh@17 -- # for bs in {0..2}
00:25:16.112   17:09:08	-- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs)))
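[editor's note] dd_rw now builds its parameter matrix: qds=(1 64), and from the detected native_bs of 4096 the three left shifts above yield bss=(4096 8192 16384). Each (bs, qd) pair then gets one write pass, one read pass, and a diff. The count formula below is an inference from the logged values (15*4096=61440, 7*8192=57344, 3*16384=49152), not a quote of the script:

    # Sketch of the sweep visible in the trace.
    native_bs=4096
    qds=(1 64)
    bss=()
    for i in {0..2}; do
        bss+=($(( native_bs << i )))        # 4096, 8192, 16384
    done
    for bs in "${bss[@]}"; do
        for qd in "${qds[@]}"; do
            count=$(( 61440 / bs ))          # 15, 7, 3 in the trace
            size=$(( count * bs ))           # 61440, 57344, 49152
            echo "bs=$bs qd=$qd count=$count size=$size"
        done
    done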
00:25:16.112   17:09:08	-- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}"
00:25:16.112   17:09:08	-- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}"
00:25:16.112   17:09:08	-- dd/basic_rw.sh@23 -- # count=15
00:25:16.112   17:09:08	-- dd/basic_rw.sh@24 -- # count=15
00:25:16.112   17:09:08	-- dd/basic_rw.sh@25 -- # size=61440
00:25:16.112   17:09:08	-- dd/basic_rw.sh@27 -- # gen_bytes 61440
00:25:16.112   17:09:08	-- dd/common.sh@98 -- # xtrace_disable
00:25:16.112   17:09:08	-- common/autotest_common.sh@10 -- # set +x
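[editor's note] gen_bytes 61440 prepares the input file for the first pass. The trace never shows its body, so this stand-in (filling the dump file that spdk_dd later reads with --if) is purely an assumption about its effect:

    # Assumed stand-in for gen_bytes: write N random bytes into the dd input
    # file. The real helper lives in dd/common.sh; only its effect is inferred.
    test_dir=/home/vagrant/spdk_repo/spdk/test/dd
    gen_bytes() {
        local n=$1
        dd if=/dev/urandom of="$test_dir/dd.dump0" bs="$n" count=1 status=none
    }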
00:25:16.677   17:09:09	-- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json /dev/fd/62
00:25:16.677    17:09:09	-- dd/basic_rw.sh@30 -- # gen_conf
00:25:16.677    17:09:09	-- dd/common.sh@31 -- # xtrace_disable
00:25:16.677    17:09:09	-- common/autotest_common.sh@10 -- # set +x
00:25:16.677  [2024-11-19 17:09:09.445437] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:25:16.677  [2024-11-19 17:09:09.445764] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143718 ]
00:25:16.677  {
00:25:16.677    "subsystems": [
00:25:16.677      {
00:25:16.677        "subsystem": "bdev",
00:25:16.677        "config": [
00:25:16.677          {
00:25:16.677            "params": {
00:25:16.677              "trtype": "pcie",
00:25:16.677              "traddr": "0000:00:06.0",
00:25:16.677              "name": "Nvme0"
00:25:16.677            },
00:25:16.677            "method": "bdev_nvme_attach_controller"
00:25:16.677          },
00:25:16.677          {
00:25:16.677            "method": "bdev_wait_for_examine"
00:25:16.677          }
00:25:16.677        ]
00:25:16.677      }
00:25:16.677    ]
00:25:16.677  }
00:25:16.934  [2024-11-19 17:09:09.587635] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:16.934  [2024-11-19 17:09:09.628890] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:25:16.934  
[2024-11-19T17:09:10.056Z] Copying: 60/60 [kB] (average 19 MBps)
00:25:17.192  
00:25:17.450   17:09:10	-- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json /dev/fd/62
00:25:17.450    17:09:10	-- dd/basic_rw.sh@37 -- # gen_conf
00:25:17.450    17:09:10	-- dd/common.sh@31 -- # xtrace_disable
00:25:17.450    17:09:10	-- common/autotest_common.sh@10 -- # set +x
00:25:17.450  [2024-11-19 17:09:10.106998] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:25:17.450  [2024-11-19 17:09:10.107718] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143736 ]
00:25:17.450  {
00:25:17.450    "subsystems": [
00:25:17.450      {
00:25:17.450        "subsystem": "bdev",
00:25:17.450        "config": [
00:25:17.450          {
00:25:17.450            "params": {
00:25:17.450              "trtype": "pcie",
00:25:17.450              "traddr": "0000:00:06.0",
00:25:17.450              "name": "Nvme0"
00:25:17.450            },
00:25:17.450            "method": "bdev_nvme_attach_controller"
00:25:17.450          },
00:25:17.450          {
00:25:17.450            "method": "bdev_wait_for_examine"
00:25:17.450          }
00:25:17.450        ]
00:25:17.450      }
00:25:17.450    ]
00:25:17.450  }
00:25:17.450  [2024-11-19 17:09:10.250493] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:17.450  [2024-11-19 17:09:10.295528] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:25:17.709  
[2024-11-19T17:09:10.831Z] Copying: 60/60 [kB] (average 29 MBps)
00:25:17.967  
00:25:17.967   17:09:10	-- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
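[editor's note] That diff -q closes the first round trip: dd.dump0 was written to Nvme0n1 at bs=4096/qd=1, read back into dd.dump1, and the two files must be byte-identical or the test fails. Stripped of the gen_conf plumbing, the cycle for one (bs, qd) pair is the three commands below; SPDK_DD, test_dir, and gen_conf are the shorthands introduced in the sketches above:

    # Write / read / verify, as run in the trace for bs=4096, qd=1, count=15.
    "$SPDK_DD" --if="$test_dir/dd.dump0" --ob=Nvme0n1 --bs=4096 --qd=1 --json <(gen_conf)
    "$SPDK_DD" --ib=Nvme0n1 --of="$test_dir/dd.dump1" --bs=4096 --qd=1 --count=15 --json <(gen_conf)
    diff -q "$test_dir/dd.dump0" "$test_dir/dd.dump1"   # must match byte for byte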
00:25:17.967   17:09:10	-- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440
00:25:17.967   17:09:10	-- dd/common.sh@10 -- # local bdev=Nvme0n1
00:25:17.967   17:09:10	-- dd/common.sh@11 -- # local nvme_ref=
00:25:17.967   17:09:10	-- dd/common.sh@12 -- # local size=61440
00:25:17.967   17:09:10	-- dd/common.sh@14 -- # local bs=1048576
00:25:17.967   17:09:10	-- dd/common.sh@15 -- # local count=1
00:25:17.967   17:09:10	-- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62
00:25:17.967    17:09:10	-- dd/common.sh@18 -- # gen_conf
00:25:17.967    17:09:10	-- dd/common.sh@31 -- # xtrace_disable
00:25:17.967    17:09:10	-- common/autotest_common.sh@10 -- # set +x
00:25:17.967  {
00:25:17.967    "subsystems": [
00:25:17.967      {
00:25:17.967        "subsystem": "bdev",
00:25:17.967        "config": [
00:25:17.967          {
00:25:17.967            "params": {
00:25:17.967              "trtype": "pcie",
00:25:17.967              "traddr": "0000:00:06.0",
00:25:17.967              "name": "Nvme0"
00:25:17.967            },
00:25:17.967            "method": "bdev_nvme_attach_controller"
00:25:17.967          },
00:25:17.967          {
00:25:17.967            "method": "bdev_wait_for_examine"
00:25:17.967          }
00:25:17.967        ]
00:25:17.967      }
00:25:17.967    ]
00:25:17.967  }
00:25:17.967  [2024-11-19 17:09:10.807846] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:25:17.967  [2024-11-19 17:09:10.808074] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143750 ]
00:25:18.225  [2024-11-19 17:09:10.963073] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:18.225  [2024-11-19 17:09:11.005014] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:25:18.489  
[2024-11-19T17:09:11.623Z] Copying: 1024/1024 [kB] (average 500 MBps)
00:25:18.759  
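[editor's note] Between pairs, clear_nvme wipes what the test wrote: dd/common.sh@10-18 above set bs=1048576 and count=1, then push zeros from /dev/zero onto Nvme0n1 (the "Copying: 1024/1024 [kB]" line). A minimal sketch assuming those traced locals are the whole story; the count formula is an inference (one 1 MiB block covers every size used here):

    # Sketch of clear_nvme as traced: overwrite the touched region with zeros.
    clear_nvme() {
        local bdev=$1 nvme_ref=$2 size=$3
        local bs=1048576                          # 1 MiB blocks
        local count=$(( (size + bs - 1) / bs ))   # 1 for all sizes in this run
        "$SPDK_DD" --if=/dev/zero --bs=$bs --ob="$bdev" --count=$count --json <(gen_conf)
    }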
00:25:18.759   17:09:11	-- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}"
00:25:18.759   17:09:11	-- dd/basic_rw.sh@23 -- # count=15
00:25:18.759   17:09:11	-- dd/basic_rw.sh@24 -- # count=15
00:25:18.759   17:09:11	-- dd/basic_rw.sh@25 -- # size=61440
00:25:18.759   17:09:11	-- dd/basic_rw.sh@27 -- # gen_bytes 61440
00:25:18.759   17:09:11	-- dd/common.sh@98 -- # xtrace_disable
00:25:18.760   17:09:11	-- common/autotest_common.sh@10 -- # set +x
00:25:19.326   17:09:11	-- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=64 --json /dev/fd/62
00:25:19.326    17:09:11	-- dd/basic_rw.sh@30 -- # gen_conf
00:25:19.326    17:09:11	-- dd/common.sh@31 -- # xtrace_disable
00:25:19.326    17:09:11	-- common/autotest_common.sh@10 -- # set +x
00:25:19.326  [2024-11-19 17:09:12.002057] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:25:19.326  [2024-11-19 17:09:12.002401] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143777 ]
00:25:19.326  {
00:25:19.326    "subsystems": [
00:25:19.326      {
00:25:19.326        "subsystem": "bdev",
00:25:19.326        "config": [
00:25:19.326          {
00:25:19.326            "params": {
00:25:19.326              "trtype": "pcie",
00:25:19.326              "traddr": "0000:00:06.0",
00:25:19.326              "name": "Nvme0"
00:25:19.326            },
00:25:19.326            "method": "bdev_nvme_attach_controller"
00:25:19.326          },
00:25:19.326          {
00:25:19.326            "method": "bdev_wait_for_examine"
00:25:19.326          }
00:25:19.326        ]
00:25:19.326      }
00:25:19.326    ]
00:25:19.326  }
00:25:19.326  [2024-11-19 17:09:12.140763] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:19.585  [2024-11-19 17:09:12.183295] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:25:19.585  
[2024-11-19T17:09:12.708Z] Copying: 60/60 [kB] (average 58 MBps)
00:25:19.844  
00:25:19.844   17:09:12	-- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=64 --count=15 --json /dev/fd/62
00:25:19.844    17:09:12	-- dd/basic_rw.sh@37 -- # gen_conf
00:25:19.844    17:09:12	-- dd/common.sh@31 -- # xtrace_disable
00:25:19.844    17:09:12	-- common/autotest_common.sh@10 -- # set +x
00:25:19.844  {
00:25:19.844    "subsystems": [
00:25:19.844      {
00:25:19.844        "subsystem": "bdev",
00:25:19.844        "config": [
00:25:19.844          {
00:25:19.844            "params": {
00:25:19.844              "trtype": "pcie",
00:25:19.844              "traddr": "0000:00:06.0",
00:25:19.844              "name": "Nvme0"
00:25:19.844            },
00:25:19.844            "method": "bdev_nvme_attach_controller"
00:25:19.844          },
00:25:19.844          {
00:25:19.844            "method": "bdev_wait_for_examine"
00:25:19.844          }
00:25:19.844        ]
00:25:19.844      }
00:25:19.844    ]
00:25:19.844  }
00:25:19.844  [2024-11-19 17:09:12.677353] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:25:19.844  [2024-11-19 17:09:12.678222] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143785 ]
00:25:20.102  [2024-11-19 17:09:12.834818] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:20.102  [2024-11-19 17:09:12.884373] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:25:20.361  
[2024-11-19T17:09:13.483Z] Copying: 60/60 [kB] (average 58 MBps)
00:25:20.620  
00:25:20.620   17:09:13	-- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
00:25:20.620   17:09:13	-- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440
00:25:20.620   17:09:13	-- dd/common.sh@10 -- # local bdev=Nvme0n1
00:25:20.620   17:09:13	-- dd/common.sh@11 -- # local nvme_ref=
00:25:20.620   17:09:13	-- dd/common.sh@12 -- # local size=61440
00:25:20.620   17:09:13	-- dd/common.sh@14 -- # local bs=1048576
00:25:20.620   17:09:13	-- dd/common.sh@15 -- # local count=1
00:25:20.620   17:09:13	-- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62
00:25:20.620    17:09:13	-- dd/common.sh@18 -- # gen_conf
00:25:20.620    17:09:13	-- dd/common.sh@31 -- # xtrace_disable
00:25:20.620    17:09:13	-- common/autotest_common.sh@10 -- # set +x
00:25:20.620  [2024-11-19 17:09:13.362918] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:25:20.620  [2024-11-19 17:09:13.363297] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143806 ]
00:25:20.620  {
00:25:20.620    "subsystems": [
00:25:20.620      {
00:25:20.620        "subsystem": "bdev",
00:25:20.620        "config": [
00:25:20.620          {
00:25:20.620            "params": {
00:25:20.620              "trtype": "pcie",
00:25:20.620              "traddr": "0000:00:06.0",
00:25:20.620              "name": "Nvme0"
00:25:20.620            },
00:25:20.620            "method": "bdev_nvme_attach_controller"
00:25:20.620          },
00:25:20.620          {
00:25:20.620            "method": "bdev_wait_for_examine"
00:25:20.620          }
00:25:20.620        ]
00:25:20.620      }
00:25:20.620    ]
00:25:20.620  }
00:25:20.878  [2024-11-19 17:09:13.506218] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:20.878  [2024-11-19 17:09:13.552194] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:25:20.878  
[2024-11-19T17:09:14.000Z] Copying: 1024/1024 [kB] (average 1000 MBps)
00:25:21.136  
00:25:21.136   17:09:13	-- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}"
00:25:21.136   17:09:13	-- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}"
00:25:21.136   17:09:13	-- dd/basic_rw.sh@23 -- # count=7
00:25:21.136   17:09:13	-- dd/basic_rw.sh@24 -- # count=7
00:25:21.136   17:09:13	-- dd/basic_rw.sh@25 -- # size=57344
00:25:21.136   17:09:13	-- dd/basic_rw.sh@27 -- # gen_bytes 57344
00:25:21.136   17:09:13	-- dd/common.sh@98 -- # xtrace_disable
00:25:21.136   17:09:13	-- common/autotest_common.sh@10 -- # set +x
00:25:21.702   17:09:14	-- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=1 --json /dev/fd/62
00:25:21.702    17:09:14	-- dd/basic_rw.sh@30 -- # gen_conf
00:25:21.702    17:09:14	-- dd/common.sh@31 -- # xtrace_disable
00:25:21.702    17:09:14	-- common/autotest_common.sh@10 -- # set +x
00:25:21.961  {
00:25:21.961    "subsystems": [
00:25:21.961      {
00:25:21.961        "subsystem": "bdev",
00:25:21.961        "config": [
00:25:21.961          {
00:25:21.961            "params": {
00:25:21.961              "trtype": "pcie",
00:25:21.961              "traddr": "0000:00:06.0",
00:25:21.961              "name": "Nvme0"
00:25:21.961            },
00:25:21.961            "method": "bdev_nvme_attach_controller"
00:25:21.961          },
00:25:21.961          {
00:25:21.961            "method": "bdev_wait_for_examine"
00:25:21.961          }
00:25:21.961        ]
00:25:21.961      }
00:25:21.961    ]
00:25:21.961  }
00:25:21.961  [2024-11-19 17:09:14.570347] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:25:21.961  [2024-11-19 17:09:14.570819] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143826 ]
00:25:21.961  [2024-11-19 17:09:14.726981] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:21.961  [2024-11-19 17:09:14.771131] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:25:22.219  
[2024-11-19T17:09:15.342Z] Copying: 56/56 [kB] (average 27 MBps)
00:25:22.478  
00:25:22.478   17:09:15	-- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=1 --count=7 --json /dev/fd/62
00:25:22.478    17:09:15	-- dd/basic_rw.sh@37 -- # gen_conf
00:25:22.478    17:09:15	-- dd/common.sh@31 -- # xtrace_disable
00:25:22.478    17:09:15	-- common/autotest_common.sh@10 -- # set +x
00:25:22.478  {
00:25:22.478    "subsystems": [
00:25:22.478      {
00:25:22.478        "subsystem": "bdev",
00:25:22.478        "config": [
00:25:22.478          {
00:25:22.478            "params": {
00:25:22.478              "trtype": "pcie",
00:25:22.478              "traddr": "0000:00:06.0",
00:25:22.478              "name": "Nvme0"
00:25:22.478            },
00:25:22.478            "method": "bdev_nvme_attach_controller"
00:25:22.478          },
00:25:22.478          {
00:25:22.478            "method": "bdev_wait_for_examine"
00:25:22.478          }
00:25:22.478        ]
00:25:22.478      }
00:25:22.478    ]
00:25:22.478  }
00:25:22.478  [2024-11-19 17:09:15.245855] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:25:22.478  [2024-11-19 17:09:15.246494] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143841 ]
00:25:22.736  [2024-11-19 17:09:15.387840] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:22.736  [2024-11-19 17:09:15.432204] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:25:22.736  
[2024-11-19T17:09:16.166Z] Copying: 56/56 [kB] (average 27 MBps)
00:25:23.302  
00:25:23.302   17:09:15	-- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
00:25:23.302   17:09:15	-- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344
00:25:23.302   17:09:15	-- dd/common.sh@10 -- # local bdev=Nvme0n1
00:25:23.302   17:09:15	-- dd/common.sh@11 -- # local nvme_ref=
00:25:23.302   17:09:15	-- dd/common.sh@12 -- # local size=57344
00:25:23.302   17:09:15	-- dd/common.sh@14 -- # local bs=1048576
00:25:23.302   17:09:15	-- dd/common.sh@15 -- # local count=1
00:25:23.302   17:09:15	-- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62
00:25:23.302    17:09:15	-- dd/common.sh@18 -- # gen_conf
00:25:23.302    17:09:15	-- dd/common.sh@31 -- # xtrace_disable
00:25:23.302    17:09:15	-- common/autotest_common.sh@10 -- # set +x
00:25:23.302  {
00:25:23.302    "subsystems": [
00:25:23.302      {
00:25:23.302        "subsystem": "bdev",
00:25:23.302        "config": [
00:25:23.302          {
00:25:23.302            "params": {
00:25:23.302              "trtype": "pcie",
00:25:23.302              "traddr": "0000:00:06.0",
00:25:23.302              "name": "Nvme0"
00:25:23.302            },
00:25:23.302            "method": "bdev_nvme_attach_controller"
00:25:23.302          },
00:25:23.302          {
00:25:23.302            "method": "bdev_wait_for_examine"
00:25:23.302          }
00:25:23.302        ]
00:25:23.302      }
00:25:23.302    ]
00:25:23.302  }
00:25:23.303  [2024-11-19 17:09:15.926352] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:25:23.303  [2024-11-19 17:09:15.927071] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143855 ]
00:25:23.303  [2024-11-19 17:09:16.066527] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:23.303  [2024-11-19 17:09:16.119376] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:25:23.561  
[2024-11-19T17:09:16.683Z] Copying: 1024/1024 [kB] (average 1000 MBps)
00:25:23.819  
00:25:23.819   17:09:16	-- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}"
00:25:23.819   17:09:16	-- dd/basic_rw.sh@23 -- # count=7
00:25:23.819   17:09:16	-- dd/basic_rw.sh@24 -- # count=7
00:25:23.819   17:09:16	-- dd/basic_rw.sh@25 -- # size=57344
00:25:23.819   17:09:16	-- dd/basic_rw.sh@27 -- # gen_bytes 57344
00:25:23.819   17:09:16	-- dd/common.sh@98 -- # xtrace_disable
00:25:23.819   17:09:16	-- common/autotest_common.sh@10 -- # set +x
00:25:24.384   17:09:17	-- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=64 --json /dev/fd/62
00:25:24.384    17:09:17	-- dd/basic_rw.sh@30 -- # gen_conf
00:25:24.384    17:09:17	-- dd/common.sh@31 -- # xtrace_disable
00:25:24.384    17:09:17	-- common/autotest_common.sh@10 -- # set +x
00:25:24.384  {
00:25:24.384    "subsystems": [
00:25:24.384      {
00:25:24.384        "subsystem": "bdev",
00:25:24.384        "config": [
00:25:24.384          {
00:25:24.384            "params": {
00:25:24.384              "trtype": "pcie",
00:25:24.384              "traddr": "0000:00:06.0",
00:25:24.384              "name": "Nvme0"
00:25:24.384            },
00:25:24.384            "method": "bdev_nvme_attach_controller"
00:25:24.384          },
00:25:24.384          {
00:25:24.384            "method": "bdev_wait_for_examine"
00:25:24.384          }
00:25:24.384        ]
00:25:24.384      }
00:25:24.384    ]
00:25:24.384  }
00:25:24.384  [2024-11-19 17:09:17.104957] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:25:24.384  [2024-11-19 17:09:17.105387] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143882 ]
00:25:24.642  [2024-11-19 17:09:17.259594] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:24.642  [2024-11-19 17:09:17.308229] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:25:24.642  
[2024-11-19T17:09:17.764Z] Copying: 56/56 [kB] (average 54 MBps)
00:25:24.900  
00:25:24.900   17:09:17	-- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=64 --count=7 --json /dev/fd/62
00:25:24.900    17:09:17	-- dd/basic_rw.sh@37 -- # gen_conf
00:25:24.900    17:09:17	-- dd/common.sh@31 -- # xtrace_disable
00:25:24.900    17:09:17	-- common/autotest_common.sh@10 -- # set +x
00:25:25.159  {
00:25:25.159    "subsystems": [
00:25:25.159      {
00:25:25.159        "subsystem": "bdev",
00:25:25.159        "config": [
00:25:25.159          {
00:25:25.159            "params": {
00:25:25.159              "trtype": "pcie",
00:25:25.159              "traddr": "0000:00:06.0",
00:25:25.159              "name": "Nvme0"
00:25:25.159            },
00:25:25.159            "method": "bdev_nvme_attach_controller"
00:25:25.159          },
00:25:25.159          {
00:25:25.159            "method": "bdev_wait_for_examine"
00:25:25.159          }
00:25:25.159        ]
00:25:25.159      }
00:25:25.159    ]
00:25:25.159  }
00:25:25.159  [2024-11-19 17:09:17.796747] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:25:25.160  [2024-11-19 17:09:17.797171] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143894 ]
00:25:25.160  [2024-11-19 17:09:17.949661] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:25.160  [2024-11-19 17:09:17.998532] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:25:25.418  
[2024-11-19T17:09:18.541Z] Copying: 56/56 [kB] (average 54 MBps)
00:25:25.677  
00:25:25.677   17:09:18	-- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
00:25:25.677   17:09:18	-- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344
00:25:25.677   17:09:18	-- dd/common.sh@10 -- # local bdev=Nvme0n1
00:25:25.677   17:09:18	-- dd/common.sh@11 -- # local nvme_ref=
00:25:25.677   17:09:18	-- dd/common.sh@12 -- # local size=57344
00:25:25.677   17:09:18	-- dd/common.sh@14 -- # local bs=1048576
00:25:25.677   17:09:18	-- dd/common.sh@15 -- # local count=1
00:25:25.677   17:09:18	-- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62
00:25:25.677    17:09:18	-- dd/common.sh@18 -- # gen_conf
00:25:25.677    17:09:18	-- dd/common.sh@31 -- # xtrace_disable
00:25:25.677    17:09:18	-- common/autotest_common.sh@10 -- # set +x
00:25:25.677  {
00:25:25.677    "subsystems": [
00:25:25.677      {
00:25:25.677        "subsystem": "bdev",
00:25:25.677        "config": [
00:25:25.677          {
00:25:25.677            "params": {
00:25:25.677              "trtype": "pcie",
00:25:25.677              "traddr": "0000:00:06.0",
00:25:25.677              "name": "Nvme0"
00:25:25.677            },
00:25:25.677            "method": "bdev_nvme_attach_controller"
00:25:25.677          },
00:25:25.677          {
00:25:25.677            "method": "bdev_wait_for_examine"
00:25:25.677          }
00:25:25.677        ]
00:25:25.677      }
00:25:25.677    ]
00:25:25.677  }
00:25:25.677  [2024-11-19 17:09:18.486698] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:25:25.677  [2024-11-19 17:09:18.487111] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143913 ]
00:25:25.936  [2024-11-19 17:09:18.642996] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:25.936  [2024-11-19 17:09:18.688000] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:25:26.194  
[2024-11-19T17:09:19.317Z] Copying: 1024/1024 [kB] (average 500 MBps)
00:25:26.453  
00:25:26.453   17:09:19	-- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}"
00:25:26.453   17:09:19	-- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}"
00:25:26.453   17:09:19	-- dd/basic_rw.sh@23 -- # count=3
00:25:26.453   17:09:19	-- dd/basic_rw.sh@24 -- # count=3
00:25:26.453   17:09:19	-- dd/basic_rw.sh@25 -- # size=49152
00:25:26.453   17:09:19	-- dd/basic_rw.sh@27 -- # gen_bytes 49152
00:25:26.453   17:09:19	-- dd/common.sh@98 -- # xtrace_disable
00:25:26.453   17:09:19	-- common/autotest_common.sh@10 -- # set +x
00:25:27.021   17:09:19	-- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=1 --json /dev/fd/62
00:25:27.021    17:09:19	-- dd/basic_rw.sh@30 -- # gen_conf
00:25:27.021    17:09:19	-- dd/common.sh@31 -- # xtrace_disable
00:25:27.021    17:09:19	-- common/autotest_common.sh@10 -- # set +x
00:25:27.021  [2024-11-19 17:09:19.655703] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:25:27.021  [2024-11-19 17:09:19.656664] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143933 ]
00:25:27.021  {
00:25:27.021    "subsystems": [
00:25:27.021      {
00:25:27.021        "subsystem": "bdev",
00:25:27.021        "config": [
00:25:27.021          {
00:25:27.021            "params": {
00:25:27.021              "trtype": "pcie",
00:25:27.021              "traddr": "0000:00:06.0",
00:25:27.021              "name": "Nvme0"
00:25:27.021            },
00:25:27.021            "method": "bdev_nvme_attach_controller"
00:25:27.021          },
00:25:27.021          {
00:25:27.021            "method": "bdev_wait_for_examine"
00:25:27.021          }
00:25:27.021        ]
00:25:27.021      }
00:25:27.021    ]
00:25:27.021  }
00:25:27.021  [2024-11-19 17:09:19.799594] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:27.021  [2024-11-19 17:09:19.844145] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:25:27.279  
[2024-11-19T17:09:20.401Z] Copying: 48/48 [kB] (average 46 MBps)
00:25:27.537  
00:25:27.537   17:09:20	-- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=1 --count=3 --json /dev/fd/62
00:25:27.537    17:09:20	-- dd/basic_rw.sh@37 -- # gen_conf
00:25:27.537    17:09:20	-- dd/common.sh@31 -- # xtrace_disable
00:25:27.537    17:09:20	-- common/autotest_common.sh@10 -- # set +x
00:25:27.537  {
00:25:27.537    "subsystems": [
00:25:27.537      {
00:25:27.537        "subsystem": "bdev",
00:25:27.537        "config": [
00:25:27.537          {
00:25:27.537            "params": {
00:25:27.537              "trtype": "pcie",
00:25:27.537              "traddr": "0000:00:06.0",
00:25:27.537              "name": "Nvme0"
00:25:27.537            },
00:25:27.537            "method": "bdev_nvme_attach_controller"
00:25:27.537          },
00:25:27.537          {
00:25:27.537            "method": "bdev_wait_for_examine"
00:25:27.537          }
00:25:27.537        ]
00:25:27.537      }
00:25:27.537    ]
00:25:27.537  }
00:25:27.537  [2024-11-19 17:09:20.325136] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:25:27.537  [2024-11-19 17:09:20.325621] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143952 ]
00:25:27.796  [2024-11-19 17:09:20.479467] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:27.796  [2024-11-19 17:09:20.531688] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:25:28.055  
[2024-11-19T17:09:21.178Z] Copying: 48/48 [kB] (average 46 MBps)
00:25:28.314  
00:25:28.314   17:09:20	-- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
00:25:28.314   17:09:20	-- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152
00:25:28.314   17:09:20	-- dd/common.sh@10 -- # local bdev=Nvme0n1
00:25:28.314   17:09:20	-- dd/common.sh@11 -- # local nvme_ref=
00:25:28.314   17:09:20	-- dd/common.sh@12 -- # local size=49152
00:25:28.314   17:09:20	-- dd/common.sh@14 -- # local bs=1048576
00:25:28.314   17:09:20	-- dd/common.sh@15 -- # local count=1
00:25:28.314   17:09:20	-- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62
00:25:28.314    17:09:20	-- dd/common.sh@18 -- # gen_conf
00:25:28.314    17:09:20	-- dd/common.sh@31 -- # xtrace_disable
00:25:28.314    17:09:20	-- common/autotest_common.sh@10 -- # set +x
00:25:28.314  {
00:25:28.314    "subsystems": [
00:25:28.314      {
00:25:28.314        "subsystem": "bdev",
00:25:28.314        "config": [
00:25:28.314          {
00:25:28.314            "params": {
00:25:28.314              "trtype": "pcie",
00:25:28.314              "traddr": "0000:00:06.0",
00:25:28.314              "name": "Nvme0"
00:25:28.314            },
00:25:28.314            "method": "bdev_nvme_attach_controller"
00:25:28.314          },
00:25:28.314          {
00:25:28.314            "method": "bdev_wait_for_examine"
00:25:28.314          }
00:25:28.314        ]
00:25:28.314      }
00:25:28.314    ]
00:25:28.314  }
00:25:28.314  [2024-11-19 17:09:21.044936] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:25:28.314  [2024-11-19 17:09:21.045359] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143962 ]
00:25:28.574  [2024-11-19 17:09:21.200607] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:28.574  [2024-11-19 17:09:21.251670] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:25:28.574  
[2024-11-19T17:09:21.697Z] Copying: 1024/1024 [kB] (average 1000 MBps)
00:25:28.833  
00:25:28.833   17:09:21	-- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}"
00:25:28.833   17:09:21	-- dd/basic_rw.sh@23 -- # count=3
00:25:28.833   17:09:21	-- dd/basic_rw.sh@24 -- # count=3
00:25:28.833   17:09:21	-- dd/basic_rw.sh@25 -- # size=49152
00:25:28.833   17:09:21	-- dd/basic_rw.sh@27 -- # gen_bytes 49152
00:25:28.833   17:09:21	-- dd/common.sh@98 -- # xtrace_disable
00:25:28.833   17:09:21	-- common/autotest_common.sh@10 -- # set +x
00:25:29.401   17:09:22	-- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=64 --json /dev/fd/62
00:25:29.401    17:09:22	-- dd/basic_rw.sh@30 -- # gen_conf
00:25:29.401    17:09:22	-- dd/common.sh@31 -- # xtrace_disable
00:25:29.401    17:09:22	-- common/autotest_common.sh@10 -- # set +x
00:25:29.401  [2024-11-19 17:09:22.196284] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:25:29.401  [2024-11-19 17:09:22.196961] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143989 ]
00:25:29.401  {
00:25:29.401    "subsystems": [
00:25:29.401      {
00:25:29.401        "subsystem": "bdev",
00:25:29.401        "config": [
00:25:29.401          {
00:25:29.401            "params": {
00:25:29.401              "trtype": "pcie",
00:25:29.401              "traddr": "0000:00:06.0",
00:25:29.401              "name": "Nvme0"
00:25:29.401            },
00:25:29.401            "method": "bdev_nvme_attach_controller"
00:25:29.401          },
00:25:29.401          {
00:25:29.401            "method": "bdev_wait_for_examine"
00:25:29.401          }
00:25:29.401        ]
00:25:29.401      }
00:25:29.401    ]
00:25:29.401  }
00:25:29.660  [2024-11-19 17:09:22.337170] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:29.660  [2024-11-19 17:09:22.382608] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:25:29.919  
[2024-11-19T17:09:23.042Z] Copying: 48/48 [kB] (average 46 MBps)
00:25:30.178  
00:25:30.178   17:09:22	-- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=64 --count=3 --json /dev/fd/62
00:25:30.178    17:09:22	-- dd/basic_rw.sh@37 -- # gen_conf
00:25:30.178    17:09:22	-- dd/common.sh@31 -- # xtrace_disable
00:25:30.178    17:09:22	-- common/autotest_common.sh@10 -- # set +x
00:25:30.178  {
00:25:30.178    "subsystems": [
00:25:30.178      {
00:25:30.178        "subsystem": "bdev",
00:25:30.178        "config": [
00:25:30.178          {
00:25:30.178            "params": {
00:25:30.178              "trtype": "pcie",
00:25:30.178              "traddr": "0000:00:06.0",
00:25:30.178              "name": "Nvme0"
00:25:30.178            },
00:25:30.178            "method": "bdev_nvme_attach_controller"
00:25:30.178          },
00:25:30.178          {
00:25:30.179            "method": "bdev_wait_for_examine"
00:25:30.179          }
00:25:30.179        ]
00:25:30.179      }
00:25:30.179    ]
00:25:30.179  }
00:25:30.179  [2024-11-19 17:09:22.903470] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:25:30.179  [2024-11-19 17:09:22.903896] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144002 ]
00:25:30.438  [2024-11-19 17:09:23.050691] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:30.438  [2024-11-19 17:09:23.111878] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:25:30.438  
[2024-11-19T17:09:23.561Z] Copying: 48/48 [kB] (average 46 MBps)
00:25:30.697  
00:25:30.697   17:09:23	-- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
00:25:30.957   17:09:23	-- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152
00:25:30.957   17:09:23	-- dd/common.sh@10 -- # local bdev=Nvme0n1
00:25:30.958   17:09:23	-- dd/common.sh@11 -- # local nvme_ref=
00:25:30.958   17:09:23	-- dd/common.sh@12 -- # local size=49152
00:25:30.958   17:09:23	-- dd/common.sh@14 -- # local bs=1048576
00:25:30.958   17:09:23	-- dd/common.sh@15 -- # local count=1
00:25:30.958   17:09:23	-- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62
00:25:30.958    17:09:23	-- dd/common.sh@18 -- # gen_conf
00:25:30.958    17:09:23	-- dd/common.sh@31 -- # xtrace_disable
00:25:30.958    17:09:23	-- common/autotest_common.sh@10 -- # set +x
00:25:30.958  {
00:25:30.958    "subsystems": [
00:25:30.958      {
00:25:30.958        "subsystem": "bdev",
00:25:30.958        "config": [
00:25:30.958          {
00:25:30.958            "params": {
00:25:30.958              "trtype": "pcie",
00:25:30.958              "traddr": "0000:00:06.0",
00:25:30.958              "name": "Nvme0"
00:25:30.958            },
00:25:30.958            "method": "bdev_nvme_attach_controller"
00:25:30.958          },
00:25:30.958          {
00:25:30.958            "method": "bdev_wait_for_examine"
00:25:30.958          }
00:25:30.958        ]
00:25:30.958      }
00:25:30.958    ]
00:25:30.958  }
00:25:30.958  [2024-11-19 17:09:23.615913] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:25:30.958  [2024-11-19 17:09:23.616336] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144018 ]
00:25:30.958  [2024-11-19 17:09:23.771209] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:31.217  [2024-11-19 17:09:23.814818] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:25:31.217  
[2024-11-19T17:09:24.340Z] Copying: 1024/1024 [kB] (average 1000 MBps)
00:25:31.476  
00:25:31.476  
00:25:31.476  real	0m15.449s
00:25:31.476  user	0m10.117s
00:25:31.476  sys	0m3.935s
00:25:31.476   17:09:24	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:25:31.476  ************************************
00:25:31.476  END TEST dd_rw
00:25:31.476  ************************************
00:25:31.476   17:09:24	-- common/autotest_common.sh@10 -- # set +x
00:25:31.476   17:09:24	-- dd/basic_rw.sh@104 -- # run_test dd_rw_offset basic_offset
00:25:31.476   17:09:24	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:25:31.476   17:09:24	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:25:31.476   17:09:24	-- common/autotest_common.sh@10 -- # set +x
00:25:31.476  ************************************
00:25:31.476  START TEST dd_rw_offset
00:25:31.476  ************************************
00:25:31.476   17:09:24	-- common/autotest_common.sh@1114 -- # basic_offset
00:25:31.476   17:09:24	-- dd/basic_rw.sh@52 -- # local count seek skip data data_check
00:25:31.476   17:09:24	-- dd/basic_rw.sh@54 -- # gen_bytes 4096
00:25:31.476   17:09:24	-- dd/common.sh@98 -- # xtrace_disable
00:25:31.476   17:09:24	-- common/autotest_common.sh@10 -- # set +x
00:25:31.736   17:09:24	-- dd/basic_rw.sh@55 -- # (( count = seek = skip = 1 ))
00:25:31.737   17:09:24	-- dd/basic_rw.sh@56 -- # data=z9midb8pe1nu0ib88jx39z3bhy3s5qc0bj3cbya0zh0l2hq2z9e7j61c8er6jbsg1o8aoh12n5hv0ywds78h25rzznixh2su2jjean8hb2n4542ahz20f7ntcwxl63ziads0fqoxljfbdjj2hi35hzusjs3ffcxvljl3xmwey88kq1yxhryg32x6bk4sjn47z6ye4c25rlxgxr920f5h1wsd61h1mwywhm9j49ej0dlijown6z95ume186w68gs5eh3arok3cc2kkvtve6ett6rz34jfzvybf17267hth3b1yzw5e1bvxucsonu5m1y7ogx50y8vhfu7df5ioqf3q9wgkvk8hm1funbgiodiucjmu28p8soes1ufiearzs5hhuxbk71ng7iw3972iw9yuld2juscsvlugdxzfxqw6eymfnclchy06xxjwa36fdj66o7zu2b8lgnhv2krlp9pbux827h9dfzc6dtmenpzvy9hjo6tnu9x3reysyfu18ca394ru5wh7fx0cerh5u90rwoilzqa4xshvcq75d4q00ne32entbwimm4l4yds1to9chpqdclnl7w89z623j5oyaow5t21d7zdi4kh7u70ijza90uzs5awanmz5wh22jpxcbff2tb62jetthzvlaj97ey827ckg39drpvt1w9xi8u9sg43rm7fz2h6tw4365pyhuu7g4sspt5xyrsh9ttpc1upzo54ee19y425sqnubnoqyyxgkh21uyibf49e07ob5pkrs0m5j86zsehghpxvm9k8kyj7z8csvdaoe77bzca2lmrpdp5172fpva7nu1qveg1pd5mrsgnv5pu30d7zi209dw13k7ups9x8o7xz9uz24904icnxxb84eywqld3etn4fymi1g0xd05ayymudg1miqom24axo9q3olm00bd86bxqt1q0bp6nq7auvzb5n6eats89t1uhbydm5y9j7kbhbri945izqtkcmxsd1wqgao9oflkzjptx2gcwggosjstck5rql3w6a0fkzdkj86luyhcjj3wa9twd8yn60jd8jdde2qxnyvbyrnr2dv9eix5t7rt4eroslpnuoc2frt667sgaumrn9a052ogedyajkgmskyjdieq9i3q71q2f3dyuoonagewjrzseildvxdhh3tn1gcc3cakw56ag06zvx2a6vwqb95zhdep2bi3fv195or59c0ybr3i3r3ytwa26k7ppwv6moij0l62xjmnx946zcmausvw9g8kvk3sjli5j586awx7yq7fhjoj30srliszlsq6o6cpyasnfn31ey45j7e0rrlok5676kjo739zg62lneqdz5gtc1mpoyxngbq4rz9ux0es37vb7e549flrhj7vqkpi7wjzz0por5rhd89nie0xzoio1nc90baurp2hq9pchhberta9zhyfg7bnp4od7e8x6l4n6lfbap0pleixnzlbq6dstmui12td1xc6839b4uuneqlrg6lsnfrvpaqdeik5yji4vlrx1o5za113siex0z4wscsbb6vat610nfjzhb8bkzrp1uuh0m3vzgevvqpnwsp9hylgy8ne387q5i7bwt0nj3otstrif6t99oelvlpaa45xu0g239228jcb4gvqcnmh973spg7nyzrtljs48x0jm9xekauu9m9r2ai1spdp5jz2ldkv80fbuxovnurdx7x7h3kjq68vc7xpoej5ru0m7lun9fbvr76r9ble89vmnjmtsao2jq6jtmk3iig2xi8c1ezcr2lmesw8rsklmln2is0rl822zzjmp3z2te44p6vl8lpqlmmqamt8nwwh5zy2nqw1m0heczhhv4kepdh8jqt0n8owf3zl8bxh2wf17fwnrgjuz1jo2fdub0olevbsmqe54z2zljtw4xhkl3giwa9wzfj0kxe2nutchp2h24n8zecsu01wxvay3gry70rjc7e257z0ry2b4nonw23ar80gcxgbhto21y2itkmapcf07l4scc9otw1tjqvi3nd0baxy173y8raazbl0o95osajmpt97eovlbnrmrwmr6fk1eoxdf18smeiu7tbhufnvo10evlodttlej8z8y843nsnz43hjegqsvs3offro8cgv36swzodqpkp4kqppaxra9kx0l58a58stzctzmrf9brvltugga75olgk045s00o0ak6fewr0fed1kbenjdbr5aj0w3nbofeibyjtmio0hci1df5hp4vsz8mqbc5o993b8p563x8kjiqpjvt1jv3yd3ap9fmtc6kjlgxd603fjs91dxdilzsd9k6n3hhsw2bf1opjlntjwq5ci4knogplhx0uklmnq8ae3f65v7u1in1l5bgir0zoazbz57qcqko76d1u8j8yippnk666qeuv3dywzjku4y4veuog7f4m4qgl5r4fv6g63ec0k7yryvr7fqa5b0f59h6uw7sdj3a8wqcu4xmuxk48qop5q01xu1y6i1761071ox7laozbn8wurxaf21byjqlg8ff21j7y15vlfcqz0nhglqemderzglah69hw6jl8o9iph4bjhwqfqsz7tlbft89nsxxnpz85ug1hzf9zfqz9ijvgdpqaaxq3cp1jqjibglw3gbgnffrilra5lkzt94iphv8b3lsr6hv16lbh6v5lsddduv1dx21b2i1c8saxztvtoq0ybvdu8gv6xezdrefbc8ftmf5wts5y30utr1pwojtibzqrrvd0sxbupw5gjvetpp7ttguceobqsum3omnvmtauiey7kp6vgpt1vpd81jye4s44pxv3pcl7acwu00384lli80y3p8iq08dl3mg912cofs08e9amqd4l0i7xvgi46trccb3w353jlm0dg0onjmb95ckz2yno9va8yieia5v9nwfttjss9egezg0hgkw2aicig714y4zuoeag8ffzcl7ozb40pdvdauitnydpoq4jb126yiulohy9m22055ri2gao4967ux6ziac78ke0aftfkas2cxmlxhggueatca5vp7e278ssff7trzb2087f86dx5zjbcbc2gc9amxpi9bajgosqakw188wir3m6ft5we34ycr1koo69j4ueodpbheuvlpcjjxf0ucrwnlmp31kcsnumk7liosn987pzrmawjj4tldn73pu4mmqpg2n2au9izie8ia1aku8c02xsxtfxfjmrp4nwv8cl872nj55tddz6vknq0om5e6mw3zxqwd54wz457rukflnld646gjqk31clwpq8xztexmvhyjzsq6hjwc09f7npa58pk3k9x4hnzr8bmj7j2o0ywfwr7a9rbb4puc6ayb9dh6jitgppig2p7uischtloay9p5kwhmxlbnl7ccj5kp8y3b0epmrjg2xms561tvknewamngs31rq732t6mkq4yiol8pyg3h3wzunxaowxrfg5bferwzj418h74cly95r1j16jq7azgqfqxoeh22js3dnftdrz34zti4nigofjkswlypi5m2q9tkh8427rv3uwxhot9nf2fmazz3saesof1idc1vbw1d9uyw6lrc4fat65r7k7gxeldlegdsa86sa08xyqx0f3qvuy6wfiizb27f0lvh0fp78j9dn04xbwgn0mntneu17ijow7ickat2vzgevb4udpp592q07pztuxoooydsohvnbbazhllybeneqlpj1qtp6eb07xlc1ecdo9g1rphemocg1p4r5figys1c7zyn5ov31yvv6qrhqpz4tml89rwkgwoui4a570ms6qc62t9d4u9o3wevgjlh50wiapoiessrxkwfywhesbad0siqc3vzr89qb8uescy7jty5vy5ze1beqwfv8viwpaw1ma90n0py9w7l1kz8ndyn0n684skyaorqju1rjl9lrbwtrjkw0oyq0m5ebfm5ew420j88amjnxucwk98vz4tj0vwc710z7dahmss6e1hrxtdz5cz9lmxzvsf0ocue535yvt2n3kx72mo22ondtbgy16t7t85fpg4u36lfqg86ao3jckvns3s50flzc2z0y
00:25:31.737   17:09:24	-- dd/basic_rw.sh@59 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json /dev/fd/62
00:25:31.737    17:09:24	-- dd/basic_rw.sh@59 -- # gen_conf
00:25:31.737    17:09:24	-- dd/common.sh@31 -- # xtrace_disable
00:25:31.737    17:09:24	-- common/autotest_common.sh@10 -- # set +x
00:25:31.737  [2024-11-19 17:09:24.397169] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:25:31.737  [2024-11-19 17:09:24.397541] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144058 ]
00:25:31.737  {
00:25:31.737    "subsystems": [
00:25:31.737      {
00:25:31.737        "subsystem": "bdev",
00:25:31.737        "config": [
00:25:31.737          {
00:25:31.737            "params": {
00:25:31.737              "trtype": "pcie",
00:25:31.737              "traddr": "0000:00:06.0",
00:25:31.737              "name": "Nvme0"
00:25:31.737            },
00:25:31.737            "method": "bdev_nvme_attach_controller"
00:25:31.737          },
00:25:31.737          {
00:25:31.737            "method": "bdev_wait_for_examine"
00:25:31.737          }
00:25:31.737        ]
00:25:31.737      }
00:25:31.737    ]
00:25:31.737  }
00:25:31.737  [2024-11-19 17:09:24.541012] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:31.737  [2024-11-19 17:09:24.585724] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:25:31.996  
[2024-11-19T17:09:25.119Z] Copying: 4096/4096 [B] (average 4000 kBps)
00:25:32.255  
00:25:32.255   17:09:25	-- dd/basic_rw.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --skip=1 --count=1 --json /dev/fd/62
00:25:32.255    17:09:25	-- dd/basic_rw.sh@65 -- # gen_conf
00:25:32.255    17:09:25	-- dd/common.sh@31 -- # xtrace_disable
00:25:32.255    17:09:25	-- common/autotest_common.sh@10 -- # set +x
00:25:32.255  {
00:25:32.255    "subsystems": [
00:25:32.255      {
00:25:32.255        "subsystem": "bdev",
00:25:32.255        "config": [
00:25:32.255          {
00:25:32.255            "params": {
00:25:32.255              "trtype": "pcie",
00:25:32.255              "traddr": "0000:00:06.0",
00:25:32.255              "name": "Nvme0"
00:25:32.255            },
00:25:32.255            "method": "bdev_nvme_attach_controller"
00:25:32.255          },
00:25:32.255          {
00:25:32.255            "method": "bdev_wait_for_examine"
00:25:32.255          }
00:25:32.255        ]
00:25:32.255      }
00:25:32.255    ]
00:25:32.255  }
00:25:32.255  [2024-11-19 17:09:25.067756] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:25:32.255  [2024-11-19 17:09:25.068187] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144066 ]
00:25:32.513  [2024-11-19 17:09:25.222649] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:32.513  [2024-11-19 17:09:25.266369] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:25:32.772  
[2024-11-19T17:09:25.896Z] Copying: 4096/4096 [B] (average 4000 kBps)
00:25:33.032  
00:25:33.032   17:09:25	-- dd/basic_rw.sh@71 -- # read -rn4096 data_check
00:25:33.033   17:09:25	-- dd/basic_rw.sh@72 -- # [[ z9midb8pe1nu0ib88jx39z3bhy3s5qc0bj3cbya0zh0l2hq2z9e7j61c8er6jbsg1o8aoh12n5hv0ywds78h25rzznixh2su2jjean8hb2n4542ahz20f7ntcwxl63ziads0fqoxljfbdjj2hi35hzusjs3ffcxvljl3xmwey88kq1yxhryg32x6bk4sjn47z6ye4c25rlxgxr920f5h1wsd61h1mwywhm9j49ej0dlijown6z95ume186w68gs5eh3arok3cc2kkvtve6ett6rz34jfzvybf17267hth3b1yzw5e1bvxucsonu5m1y7ogx50y8vhfu7df5ioqf3q9wgkvk8hm1funbgiodiucjmu28p8soes1ufiearzs5hhuxbk71ng7iw3972iw9yuld2juscsvlugdxzfxqw6eymfnclchy06xxjwa36fdj66o7zu2b8lgnhv2krlp9pbux827h9dfzc6dtmenpzvy9hjo6tnu9x3reysyfu18ca394ru5wh7fx0cerh5u90rwoilzqa4xshvcq75d4q00ne32entbwimm4l4yds1to9chpqdclnl7w89z623j5oyaow5t21d7zdi4kh7u70ijza90uzs5awanmz5wh22jpxcbff2tb62jetthzvlaj97ey827ckg39drpvt1w9xi8u9sg43rm7fz2h6tw4365pyhuu7g4sspt5xyrsh9ttpc1upzo54ee19y425sqnubnoqyyxgkh21uyibf49e07ob5pkrs0m5j86zsehghpxvm9k8kyj7z8csvdaoe77bzca2lmrpdp5172fpva7nu1qveg1pd5mrsgnv5pu30d7zi209dw13k7ups9x8o7xz9uz24904icnxxb84eywqld3etn4fymi1g0xd05ayymudg1miqom24axo9q3olm00bd86bxqt1q0bp6nq7auvzb5n6eats89t1uhbydm5y9j7kbhbri945izqtkcmxsd1wqgao9oflkzjptx2gcwggosjstck5rql3w6a0fkzdkj86luyhcjj3wa9twd8yn60jd8jdde2qxnyvbyrnr2dv9eix5t7rt4eroslpnuoc2frt667sgaumrn9a052ogedyajkgmskyjdieq9i3q71q2f3dyuoonagewjrzseildvxdhh3tn1gcc3cakw56ag06zvx2a6vwqb95zhdep2bi3fv195or59c0ybr3i3r3ytwa26k7ppwv6moij0l62xjmnx946zcmausvw9g8kvk3sjli5j586awx7yq7fhjoj30srliszlsq6o6cpyasnfn31ey45j7e0rrlok5676kjo739zg62lneqdz5gtc1mpoyxngbq4rz9ux0es37vb7e549flrhj7vqkpi7wjzz0por5rhd89nie0xzoio1nc90baurp2hq9pchhberta9zhyfg7bnp4od7e8x6l4n6lfbap0pleixnzlbq6dstmui12td1xc6839b4uuneqlrg6lsnfrvpaqdeik5yji4vlrx1o5za113siex0z4wscsbb6vat610nfjzhb8bkzrp1uuh0m3vzgevvqpnwsp9hylgy8ne387q5i7bwt0nj3otstrif6t99oelvlpaa45xu0g239228jcb4gvqcnmh973spg7nyzrtljs48x0jm9xekauu9m9r2ai1spdp5jz2ldkv80fbuxovnurdx7x7h3kjq68vc7xpoej5ru0m7lun9fbvr76r9ble89vmnjmtsao2jq6jtmk3iig2xi8c1ezcr2lmesw8rsklmln2is0rl822zzjmp3z2te44p6vl8lpqlmmqamt8nwwh5zy2nqw1m0heczhhv4kepdh8jqt0n8owf3zl8bxh2wf17fwnrgjuz1jo2fdub0olevbsmqe54z2zljtw4xhkl3giwa9wzfj0kxe2nutchp2h24n8zecsu01wxvay3gry70rjc7e257z0ry2b4nonw23ar80gcxgbhto21y2itkmapcf07l4scc9otw1tjqvi3nd0baxy173y8raazbl0o95osajmpt97eovlbnrmrwmr6fk1eoxdf18smeiu7tbhufnvo10evlodttlej8z8y843nsnz43hjegqsvs3offro8cgv36swzodqpkp4kqppaxra9kx0l58a58stzctzmrf9brvltugga75olgk045s00o0ak6fewr0fed1kbenjdbr5aj0w3nbofeibyjtmio0hci1df5hp4vsz8mqbc5o993b8p563x8kjiqpjvt1jv3yd3ap9fmtc6kjlgxd603fjs91dxdilzsd9k6n3hhsw2bf1opjlntjwq5ci4knogplhx0uklmnq8ae3f65v7u1in1l5bgir0zoazbz57qcqko76d1u8j8yippnk666qeuv3dywzjku4y4veuog7f4m4qgl5r4fv6g63ec0k7yryvr7fqa5b0f59h6uw7sdj3a8wqcu4xmuxk48qop5q01xu1y6i1761071ox7laozbn8wurxaf21byjqlg8ff21j7y15vlfcqz0nhglqemderzglah69hw6jl8o9iph4bjhwqfqsz7tlbft89nsxxnpz85ug1hzf9zfqz9ijvgdpqaaxq3cp1jqjibglw3gbgnffrilra5lkzt94iphv8b3lsr6hv16lbh6v5lsddduv1dx21b2i1c8saxztvtoq0ybvdu8gv6xezdrefbc8ftmf5wts5y30utr1pwojtibzqrrvd0sxbupw5gjvetpp7ttguceobqsum3omnvmtauiey7kp6vgpt1vpd81jye4s44pxv3pcl7acwu00384lli80y3p8iq08dl3mg912cofs08e9amqd4l0i7xvgi46trccb3w353jlm0dg0onjmb95ckz2yno9va8yieia5v9nwfttjss9egezg0hgkw2aicig714y4zuoeag8ffzcl7ozb40pdvdauitnydpoq4jb126yiulohy9m22055ri2gao4967ux6ziac78ke0aftfkas2cxmlxhggueatca5vp7e278ssff7trzb2087f86dx5zjbcbc2gc9amxpi9bajgosqakw188wir3m6ft5we34ycr1koo69j4ueodpbheuvlpcjjxf0ucrwnlmp31kcsnumk7liosn987pzrmawjj4tldn73pu4mmqpg2n2au9izie8ia1aku8c02xsxtfxfjmrp4nwv8cl872nj55tddz6vknq0om5e6mw3zxqwd54wz457rukflnld646gjqk31clwpq8xztexmvhyjzsq6hjwc09f7npa58pk3k9x4hnzr8bmj7j2o0ywfwr7a9rbb4puc6ayb9dh6jitgppig2p7uischtloay9p5kwhmxlbnl7ccj5kp8y3b0epmrjg2xms561tvknewamngs31rq732t6mkq4yiol8pyg3h3wzunxaowxrfg5bferwzj418h74cly95r1j16jq7azgqfqxoeh22js3dnftdrz34zti4nigofjkswlypi5m2q9tkh8427rv3uwxhot9nf2fmazz3saesof1idc1vbw1d9uyw6lrc4fat65r7k7gxeldlegdsa86sa08xyqx0f3qvuy6wfiizb27f0lvh0fp78j9dn04xbwgn0mntneu17ijow7ickat2vzgevb4udpp592q07pztuxoooydsohvnbbazhllybeneqlpj1qtp6eb07xlc1ecdo9g1rphemocg1p4r5figys1c7zyn5ov31yvv6qrhqpz4tml89rwkgwoui4a570ms6qc62t9d4u9o3wevgjlh50wiapoiessrxkwfywhesbad0siqc3vzr89qb8uescy7jty5vy5ze1beqwfv8viwpaw1ma90n0py9w7l1kz8ndyn0n684skyaorqju1rjl9lrbwtrjkw0oyq0m5ebfm5ew420j88amjnxucwk98vz4tj0vwc710z7dahmss6e1hrxtdz5cz9lmxzvsf0ocue535yvt2n3kx72mo22ondtbgy16t7t85fpg4u36lfqg86ao3jckvns3s50flzc2z0y == \z\9\m\i\d\b\8\p\e\1\n\u\0\i\b\8\8\j\x\3\9\z\3\b\h\y\3\s\5\q\c\0\b\j\3\c\b\y\a\0\z\h\0\l\2\h\q\2\z\9\e\7\j\6\1\c\8\e\r\6\j\b\s\g\1\o\8\a\o\h\1\2\n\5\h\v\0\y\w\d\s\7\8\h\2\5\r\z\z\n\i\x\h\2\s\u\2\j\j\e\a\n\8\h\b\2\n\4\5\4\2\a\h\z\2\0\f\7\n\t\c\w\x\l\6\3\z\i\a\d\s\0\f\q\o\x\l\j\f\b\d\j\j\2\h\i\3\5\h\z\u\s\j\s\3\f\f\c\x\v\l\j\l\3\x\m\w\e\y\8\8\k\q\1\y\x\h\r\y\g\3\2\x\6\b\k\4\s\j\n\4\7\z\6\y\e\4\c\2\5\r\l\x\g\x\r\9\2\0\f\5\h\1\w\s\d\6\1\h\1\m\w\y\w\h\m\9\j\4\9\e\j\0\d\l\i\j\o\w\n\6\z\9\5\u\m\e\1\8\6\w\6\8\g\s\5\e\h\3\a\r\o\k\3\c\c\2\k\k\v\t\v\e\6\e\t\t\6\r\z\3\4\j\f\z\v\y\b\f\1\7\2\6\7\h\t\h\3\b\1\y\z\w\5\e\1\b\v\x\u\c\s\o\n\u\5\m\1\y\7\o\g\x\5\0\y\8\v\h\f\u\7\d\f\5\i\o\q\f\3\q\9\w\g\k\v\k\8\h\m\1\f\u\n\b\g\i\o\d\i\u\c\j\m\u\2\8\p\8\s\o\e\s\1\u\f\i\e\a\r\z\s\5\h\h\u\x\b\k\7\1\n\g\7\i\w\3\9\7\2\i\w\9\y\u\l\d\2\j\u\s\c\s\v\l\u\g\d\x\z\f\x\q\w\6\e\y\m\f\n\c\l\c\h\y\0\6\x\x\j\w\a\3\6\f\d\j\6\6\o\7\z\u\2\b\8\l\g\n\h\v\2\k\r\l\p\9\p\b\u\x\8\2\7\h\9\d\f\z\c\6\d\t\m\e\n\p\z\v\y\9\h\j\o\6\t\n\u\9\x\3\r\e\y\s\y\f\u\1\8\c\a\3\9\4\r\u\5\w\h\7\f\x\0\c\e\r\h\5\u\9\0\r\w\o\i\l\z\q\a\4\x\s\h\v\c\q\7\5\d\4\q\0\0\n\e\3\2\e\n\t\b\w\i\m\m\4\l\4\y\d\s\1\t\o\9\c\h\p\q\d\c\l\n\l\7\w\8\9\z\6\2\3\j\5\o\y\a\o\w\5\t\2\1\d\7\z\d\i\4\k\h\7\u\7\0\i\j\z\a\9\0\u\z\s\5\a\w\a\n\m\z\5\w\h\2\2\j\p\x\c\b\f\f\2\t\b\6\2\j\e\t\t\h\z\v\l\a\j\9\7\e\y\8\2\7\c\k\g\3\9\d\r\p\v\t\1\w\9\x\i\8\u\9\s\g\4\3\r\m\7\f\z\2\h\6\t\w\4\3\6\5\p\y\h\u\u\7\g\4\s\s\p\t\5\x\y\r\s\h\9\t\t\p\c\1\u\p\z\o\5\4\e\e\1\9\y\4\2\5\s\q\n\u\b\n\o\q\y\y\x\g\k\h\2\1\u\y\i\b\f\4\9\e\0\7\o\b\5\p\k\r\s\0\m\5\j\8\6\z\s\e\h\g\h\p\x\v\m\9\k\8\k\y\j\7\z\8\c\s\v\d\a\o\e\7\7\b\z\c\a\2\l\m\r\p\d\p\5\1\7\2\f\p\v\a\7\n\u\1\q\v\e\g\1\p\d\5\m\r\s\g\n\v\5\p\u\3\0\d\7\z\i\2\0\9\d\w\1\3\k\7\u\p\s\9\x\8\o\7\x\z\9\u\z\2\4\9\0\4\i\c\n\x\x\b\8\4\e\y\w\q\l\d\3\e\t\n\4\f\y\m\i\1\g\0\x\d\0\5\a\y\y\m\u\d\g\1\m\i\q\o\m\2\4\a\x\o\9\q\3\o\l\m\0\0\b\d\8\6\b\x\q\t\1\q\0\b\p\6\n\q\7\a\u\v\z\b\5\n\6\e\a\t\s\8\9\t\1\u\h\b\y\d\m\5\y\9\j\7\k\b\h\b\r\i\9\4\5\i\z\q\t\k\c\m\x\s\d\1\w\q\g\a\o\9\o\f\l\k\z\j\p\t\x\2\g\c\w\g\g\o\s\j\s\t\c\k\5\r\q\l\3\w\6\a\0\f\k\z\d\k\j\8\6\l\u\y\h\c\j\j\3\w\a\9\t\w\d\8\y\n\6\0\j\d\8\j\d\d\e\2\q\x\n\y\v\b\y\r\n\r\2\d\v\9\e\i\x\5\t\7\r\t\4\e\r\o\s\l\p\n\u\o\c\2\f\r\t\6\6\7\s\g\a\u\m\r\n\9\a\0\5\2\o\g\e\d\y\a\j\k\g\m\s\k\y\j\d\i\e\q\9\i\3\q\7\1\q\2\f\3\d\y\u\o\o\n\a\g\e\w\j\r\z\s\e\i\l\d\v\x\d\h\h\3\t\n\1\g\c\c\3\c\a\k\w\5\6\a\g\0\6\z\v\x\2\a\6\v\w\q\b\9\5\z\h\d\e\p\2\b\i\3\f\v\1\9\5\o\r\5\9\c\0\y\b\r\3\i\3\r\3\y\t\w\a\2\6\k\7\p\p\w\v\6\m\o\i\j\0\l\6\2\x\j\m\n\x\9\4\6\z\c\m\a\u\s\v\w\9\g\8\k\v\k\3\s\j\l\i\5\j\5\8\6\a\w\x\7\y\q\7\f\h\j\o\j\3\0\s\r\l\i\s\z\l\s\q\6\o\6\c\p\y\a\s\n\f\n\3\1\e\y\4\5\j\7\e\0\r\r\l\o\k\5\6\7\6\k\j\o\7\3\9\z\g\6\2\l\n\e\q\d\z\5\g\t\c\1\m\p\o\y\x\n\g\b\q\4\r\z\9\u\x\0\e\s\3\7\v\b\7\e\5\4\9\f\l\r\h\j\7\v\q\k\p\i\7\w\j\z\z\0\p\o\r\5\r\h\d\8\9\n\i\e\0\x\z\o\i\o\1\n\c\9\0\b\a\u\r\p\2\h\q\9\p\c\h\h\b\e\r\t\a\9\z\h\y\f\g\7\b\n\p\4\o\d\7\e\8\x\6\l\4\n\6\l\f\b\a\p\0\p\l\e\i\x\n\z\l\b\q\6\d\s\t\m\u\i\1\2\t\d\1\x\c\6\8\3\9\b\4\u\u\n\e\q\l\r\g\6\l\s\n\f\r\v\p\a\q\d\e\i\k\5\y\j\i\4\v\l\r\x\1\o\5\z\a\1\1\3\s\i\e\x\0\z\4\w\s\c\s\b\b\6\v\a\t\6\1\0\n\f\j\z\h\b\8\b\k\z\r\p\1\u\u\h\0\m\3\v\z\g\e\v\v\q\p\n\w\s\p\9\h\y\l\g\y\8\n\e\3\8\7\q\5\i\7\b\w\t\0\n\j\3\o\t\s\t\r\i\f\6\t\9\9\o\e\l\v\l\p\a\a\4\5\x\u\0\g\2\3\9\2\2\8\j\c\b\4\g\v\q\c\n\m\h\9\7\3\s\p\g\7\n\y\z\r\t\l\j\s\4\8\x\0\j\m\9\x\e\k\a\u\u\9\m\9\r\2\a\i\1\s\p\d\p\5\j\z\2\l\d\k\v\8\0\f\b\u\x\o\v\n\u\r\d\x\7\x\7\h\3\k\j\q\6\8\v\c\7\x\p\o\e\j\5\r\u\0\m\7\l\u\n\9\f\b\v\r\7\6\r\9\b\l\e\8\9\v\m\n\j\m\t\s\a\o\2\j\q\6\j\t\m\k\3\i\i\g\2\x\i\8\c\1\e\z\c\r\2\l\m\e\s\w\8\r\s\k\l\m\l\n\2\i\s\0\r\l\8\2\2\z\z\j\m\p\3\z\2\t\e\4\4\p\6\v\l\8\l\p\q\l\m\m\q\a\m\t\8\n\w\w\h\5\z\y\2\n\q\w\1\m\0\h\e\c\z\h\h\v\4\k\e\p\d\h\8\j\q\t\0\n\8\o\w\f\3\z\l\8\b\x\h\2\w\f\1\7\f\w\n\r\g\j\u\z\1\j\o\2\f\d\u\b\0\o\l\e\v\b\s\m\q\e\5\4\z\2\z\l\j\t\w\4\x\h\k\l\3\g\i\w\a\9\w\z\f\j\0\k\x\e\2\n\u\t\c\h\p\2\h\2\4\n\8\z\e\c\s\u\0\1\w\x\v\a\y\3\g\r\y\7\0\r\j\c\7\e\2\5\7\z\0\r\y\2\b\4\n\o\n\w\2\3\a\r\8\0\g\c\x\g\b\h\t\o\2\1\y\2\i\t\k\m\a\p\c\f\0\7\l\4\s\c\c\9\o\t\w\1\t\j\q\v\i\3\n\d\0\b\a\x\y\1\7\3\y\8\r\a\a\z\b\l\0\o\9\5\o\s\a\j\m\p\t\9\7\e\o\v\l\b\n\r\m\r\w\m\r\6\f\k\1\e\o\x\d\f\1\8\s\m\e\i\u\7\t\b\h\u\f\n\v\o\1\0\e\v\l\o\d\t\t\l\e\j\8\z\8\y\8\4\3\n\s\n\z\4\3\h\j\e\g\q\s\v\s\3\o\f\f\r\o\8\c\g\v\3\6\s\w\z\o\d\q\p\k\p\4\k\q\p\p\a\x\r\a\9\k\x\0\l\5\8\a\5\8\s\t\z\c\t\z\m\r\f\9\b\r\v\l\t\u\g\g\a\7\5\o\l\g\k\0\4\5\s\0\0\o\0\a\k\6\f\e\w\r\0\f\e\d\1\k\b\e\n\j\d\b\r\5\a\j\0\w\3\n\b\o\f\e\i\b\y\j\t\m\i\o\0\h\c\i\1\d\f\5\h\p\4\v\s\z\8\m\q\b\c\5\o\9\9\3\b\8\p\5\6\3\x\8\k\j\i\q\p\j\v\t\1\j\v\3\y\d\3\a\p\9\f\m\t\c\6\k\j\l\g\x\d\6\0\3\f\j\s\9\1\d\x\d\i\l\z\s\d\9\k\6\n\3\h\h\s\w\2\b\f\1\o\p\j\l\n\t\j\w\q\5\c\i\4\k\n\o\g\p\l\h\x\0\u\k\l\m\n\q\8\a\e\3\f\6\5\v\7\u\1\i\n\1\l\5\b\g\i\r\0\z\o\a\z\b\z\5\7\q\c\q\k\o\7\6\d\1\u\8\j\8\y\i\p\p\n\k\6\6\6\q\e\u\v\3\d\y\w\z\j\k\u\4\y\4\v\e\u\o\g\7\f\4\m\4\q\g\l\5\r\4\f\v\6\g\6\3\e\c\0\k\7\y\r\y\v\r\7\f\q\a\5\b\0\f\5\9\h\6\u\w\7\s\d\j\3\a\8\w\q\c\u\4\x\m\u\x\k\4\8\q\o\p\5\q\0\1\x\u\1\y\6\i\1\7\6\1\0\7\1\o\x\7\l\a\o\z\b\n\8\w\u\r\x\a\f\2\1\b\y\j\q\l\g\8\f\f\2\1\j\7\y\1\5\v\l\f\c\q\z\0\n\h\g\l\q\e\m\d\e\r\z\g\l\a\h\6\9\h\w\6\j\l\8\o\9\i\p\h\4\b\j\h\w\q\f\q\s\z\7\t\l\b\f\t\8\9\n\s\x\x\n\p\z\8\5\u\g\1\h\z\f\9\z\f\q\z\9\i\j\v\g\d\p\q\a\a\x\q\3\c\p\1\j\q\j\i\b\g\l\w\3\g\b\g\n\f\f\r\i\l\r\a\5\l\k\z\t\9\4\i\p\h\v\8\b\3\l\s\r\6\h\v\1\6\l\b\h\6\v\5\l\s\d\d\d\u\v\1\d\x\2\1\b\2\i\1\c\8\s\a\x\z\t\v\t\o\q\0\y\b\v\d\u\8\g\v\6\x\e\z\d\r\e\f\b\c\8\f\t\m\f\5\w\t\s\5\y\3\0\u\t\r\1\p\w\o\j\t\i\b\z\q\r\r\v\d\0\s\x\b\u\p\w\5\g\j\v\e\t\p\p\7\t\t\g\u\c\e\o\b\q\s\u\m\3\o\m\n\v\m\t\a\u\i\e\y\7\k\p\6\v\g\p\t\1\v\p\d\8\1\j\y\e\4\s\4\4\p\x\v\3\p\c\l\7\a\c\w\u\0\0\3\8\4\l\l\i\8\0\y\3\p\8\i\q\0\8\d\l\3\m\g\9\1\2\c\o\f\s\0\8\e\9\a\m\q\d\4\l\0\i\7\x\v\g\i\4\6\t\r\c\c\b\3\w\3\5\3\j\l\m\0\d\g\0\o\n\j\m\b\9\5\c\k\z\2\y\n\o\9\v\a\8\y\i\e\i\a\5\v\9\n\w\f\t\t\j\s\s\9\e\g\e\z\g\0\h\g\k\w\2\a\i\c\i\g\7\1\4\y\4\z\u\o\e\a\g\8\f\f\z\c\l\7\o\z\b\4\0\p\d\v\d\a\u\i\t\n\y\d\p\o\q\4\j\b\1\2\6\y\i\u\l\o\h\y\9\m\2\2\0\5\5\r\i\2\g\a\o\4\9\6\7\u\x\6\z\i\a\c\7\8\k\e\0\a\f\t\f\k\a\s\2\c\x\m\l\x\h\g\g\u\e\a\t\c\a\5\v\p\7\e\2\7\8\s\s\f\f\7\t\r\z\b\2\0\8\7\f\8\6\d\x\5\z\j\b\c\b\c\2\g\c\9\a\m\x\p\i\9\b\a\j\g\o\s\q\a\k\w\1\8\8\w\i\r\3\m\6\f\t\5\w\e\3\4\y\c\r\1\k\o\o\6\9\j\4\u\e\o\d\p\b\h\e\u\v\l\p\c\j\j\x\f\0\u\c\r\w\n\l\m\p\3\1\k\c\s\n\u\m\k\7\l\i\o\s\n\9\8\7\p\z\r\m\a\w\j\j\4\t\l\d\n\7\3\p\u\4\m\m\q\p\g\2\n\2\a\u\9\i\z\i\e\8\i\a\1\a\k\u\8\c\0\2\x\s\x\t\f\x\f\j\m\r\p\4\n\w\v\8\c\l\8\7\2\n\j\5\5\t\d\d\z\6\v\k\n\q\0\o\m\5\e\6\m\w\3\z\x\q\w\d\5\4\w\z\4\5\7\r\u\k\f\l\n\l\d\6\4\6\g\j\q\k\3\1\c\l\w\p\q\8\x\z\t\e\x\m\v\h\y\j\z\s\q\6\h\j\w\c\0\9\f\7\n\p\a\5\8\p\k\3\k\9\x\4\h\n\z\r\8\b\m\j\7\j\2\o\0\y\w\f\w\r\7\a\9\r\b\b\4\p\u\c\6\a\y\b\9\d\h\6\j\i\t\g\p\p\i\g\2\p\7\u\i\s\c\h\t\l\o\a\y\9\p\5\k\w\h\m\x\l\b\n\l\7\c\c\j\5\k\p\8\y\3\b\0\e\p\m\r\j\g\2\x\m\s\5\6\1\t\v\k\n\e\w\a\m\n\g\s\3\1\r\q\7\3\2\t\6\m\k\q\4\y\i\o\l\8\p\y\g\3\h\3\w\z\u\n\x\a\o\w\x\r\f\g\5\b\f\e\r\w\z\j\4\1\8\h\7\4\c\l\y\9\5\r\1\j\1\6\j\q\7\a\z\g\q\f\q\x\o\e\h\2\2\j\s\3\d\n\f\t\d\r\z\3\4\z\t\i\4\n\i\g\o\f\j\k\s\w\l\y\p\i\5\m\2\q\9\t\k\h\8\4\2\7\r\v\3\u\w\x\h\o\t\9\n\f\2\f\m\a\z\z\3\s\a\e\s\o\f\1\i\d\c\1\v\b\w\1\d\9\u\y\w\6\l\r\c\4\f\a\t\6\5\r\7\k\7\g\x\e\l\d\l\e\g\d\s\a\8\6\s\a\0\8\x\y\q\x\0\f\3\q\v\u\y\6\w\f\i\i\z\b\2\7\f\0\l\v\h\0\f\p\7\8\j\9\d\n\0\4\x\b\w\g\n\0\m\n\t\n\e\u\1\7\i\j\o\w\7\i\c\k\a\t\2\v\z\g\e\v\b\4\u\d\p\p\5\9\2\q\0\7\p\z\t\u\x\o\o\o\y\d\s\o\h\v\n\b\b\a\z\h\l\l\y\b\e\n\e\q\l\p\j\1\q\t\p\6\e\b\0\7\x\l\c\1\e\c\d\o\9\g\1\r\p\h\e\m\o\c\g\1\p\4\r\5\f\i\g\y\s\1\c\7\z\y\n\5\o\v\3\1\y\v\v\6\q\r\h\q\p\z\4\t\m\l\8\9\r\w\k\g\w\o\u\i\4\a\5\7\0\m\s\6\q\c\6\2\t\9\d\4\u\9\o\3\w\e\v\g\j\l\h\5\0\w\i\a\p\o\i\e\s\s\r\x\k\w\f\y\w\h\e\s\b\a\d\0\s\i\q\c\3\v\z\r\8\9\q\b\8\u\e\s\c\y\7\j\t\y\5\v\y\5\z\e\1\b\e\q\w\f\v\8\v\i\w\p\a\w\1\m\a\9\0\n\0\p\y\9\w\7\l\1\k\z\8\n\d\y\n\0\n\6\8\4\s\k\y\a\o\r\q\j\u\1\r\j\l\9\l\r\b\w\t\r\j\k\w\0\o\y\q\0\m\5\e\b\f\m\5\e\w\4\2\0\j\8\8\a\m\j\n\x\u\c\w\k\9\8\v\z\4\t\j\0\v\w\c\7\1\0\z\7\d\a\h\m\s\s\6\e\1\h\r\x\t\d\z\5\c\z\9\l\m\x\z\v\s\f\0\o\c\u\e\5\3\5\y\v\t\2\n\3\k\x\7\2\m\o\2\2\o\n\d\t\b\g\y\1\6\t\7\t\8\5\f\p\g\4\u\3\6\l\f\q\g\8\6\a\o\3\j\c\k\v\n\s\3\s\5\0\f\l\z\c\2\z\0\y ]]
00:25:33.033  
00:25:33.033  real	0m1.393s
00:25:33.033  user	0m0.844s
00:25:33.033  sys	0m0.397s
00:25:33.033   17:09:25	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:25:33.033   17:09:25	-- common/autotest_common.sh@10 -- # set +x
00:25:33.033  ************************************
00:25:33.033  END TEST dd_rw_offset
00:25:33.033  ************************************
00:25:33.033   17:09:25	-- dd/basic_rw.sh@1 -- # cleanup
00:25:33.033   17:09:25	-- dd/basic_rw.sh@76 -- # clear_nvme Nvme0n1
00:25:33.033   17:09:25	-- dd/common.sh@10 -- # local bdev=Nvme0n1
00:25:33.033   17:09:25	-- dd/common.sh@11 -- # local nvme_ref=
00:25:33.033   17:09:25	-- dd/common.sh@12 -- # local size=0xffff
00:25:33.033   17:09:25	-- dd/common.sh@14 -- # local bs=1048576
00:25:33.033   17:09:25	-- dd/common.sh@15 -- # local count=1
00:25:33.033    17:09:25	-- dd/common.sh@18 -- # gen_conf
00:25:33.034   17:09:25	-- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62
00:25:33.034    17:09:25	-- dd/common.sh@31 -- # xtrace_disable
00:25:33.034    17:09:25	-- common/autotest_common.sh@10 -- # set +x
00:25:33.034  [2024-11-19 17:09:25.809508] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:25:33.034  [2024-11-19 17:09:25.809797] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144105 ]
00:25:33.034  {
00:25:33.034    "subsystems": [
00:25:33.034      {
00:25:33.034        "subsystem": "bdev",
00:25:33.034        "config": [
00:25:33.034          {
00:25:33.034            "params": {
00:25:33.034              "trtype": "pcie",
00:25:33.034              "traddr": "0000:00:06.0",
00:25:33.034              "name": "Nvme0"
00:25:33.034            },
00:25:33.034            "method": "bdev_nvme_attach_controller"
00:25:33.034          },
00:25:33.034          {
00:25:33.034            "method": "bdev_wait_for_examine"
00:25:33.034          }
00:25:33.034        ]
00:25:33.034      }
00:25:33.034    ]
00:25:33.034  }
00:25:33.292  [2024-11-19 17:09:25.951090] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:33.292  [2024-11-19 17:09:25.996554] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:25:33.292  
[2024-11-19T17:09:26.724Z] Copying: 1024/1024 [kB] (average 1000 MBps)
00:25:33.860  
00:25:33.860   17:09:26	-- dd/basic_rw.sh@77 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
00:25:33.860  ************************************
00:25:33.860  END TEST spdk_dd_basic_rw
00:25:33.860  ************************************
00:25:33.860  
00:25:33.860  real	0m18.881s
00:25:33.860  user	0m12.090s
00:25:33.860  sys	0m5.067s
00:25:33.860   17:09:26	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:25:33.860   17:09:26	-- common/autotest_common.sh@10 -- # set +x
00:25:33.860   17:09:26	-- dd/dd.sh@21 -- # run_test spdk_dd_posix /home/vagrant/spdk_repo/spdk/test/dd/posix.sh
00:25:33.860   17:09:26	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:25:33.860   17:09:26	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:25:33.860   17:09:26	-- common/autotest_common.sh@10 -- # set +x
00:25:33.860  ************************************
00:25:33.860  START TEST spdk_dd_posix
00:25:33.860  ************************************
00:25:33.860   17:09:26	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dd/posix.sh
00:25:33.860  * Looking for test storage...
00:25:33.860  * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd
00:25:33.860     17:09:26	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:25:33.860      17:09:26	-- common/autotest_common.sh@1690 -- # lcov --version
00:25:33.860      17:09:26	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:25:33.860     17:09:26	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:25:33.860     17:09:26	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:25:33.860     17:09:26	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:25:33.860     17:09:26	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:25:33.860     17:09:26	-- scripts/common.sh@335 -- # IFS=.-:
00:25:33.860     17:09:26	-- scripts/common.sh@335 -- # read -ra ver1
00:25:33.860     17:09:26	-- scripts/common.sh@336 -- # IFS=.-:
00:25:33.860     17:09:26	-- scripts/common.sh@336 -- # read -ra ver2
00:25:33.860     17:09:26	-- scripts/common.sh@337 -- # local 'op=<'
00:25:33.860     17:09:26	-- scripts/common.sh@339 -- # ver1_l=2
00:25:33.860     17:09:26	-- scripts/common.sh@340 -- # ver2_l=1
00:25:33.860     17:09:26	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:25:33.860     17:09:26	-- scripts/common.sh@343 -- # case "$op" in
00:25:33.860     17:09:26	-- scripts/common.sh@344 -- # : 1
00:25:33.860     17:09:26	-- scripts/common.sh@363 -- # (( v = 0 ))
00:25:33.860     17:09:26	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:25:33.860      17:09:26	-- scripts/common.sh@364 -- # decimal 1
00:25:33.860      17:09:26	-- scripts/common.sh@352 -- # local d=1
00:25:33.860      17:09:26	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:25:33.860      17:09:26	-- scripts/common.sh@354 -- # echo 1
00:25:33.860     17:09:26	-- scripts/common.sh@364 -- # ver1[v]=1
00:25:33.860      17:09:26	-- scripts/common.sh@365 -- # decimal 2
00:25:33.860      17:09:26	-- scripts/common.sh@352 -- # local d=2
00:25:33.860      17:09:26	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:25:33.860      17:09:26	-- scripts/common.sh@354 -- # echo 2
00:25:33.860     17:09:26	-- scripts/common.sh@365 -- # ver2[v]=2
00:25:33.860     17:09:26	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:25:33.860     17:09:26	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:25:33.860     17:09:26	-- scripts/common.sh@367 -- # return 0
00:25:33.860     17:09:26	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:25:33.860     17:09:26	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:25:33.860  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:25:33.860  		--rc genhtml_branch_coverage=1
00:25:33.860  		--rc genhtml_function_coverage=1
00:25:33.860  		--rc genhtml_legend=1
00:25:33.860  		--rc geninfo_all_blocks=1
00:25:33.860  		--rc geninfo_unexecuted_blocks=1
00:25:33.860  		
00:25:33.860  		'
00:25:33.860     17:09:26	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:25:33.861  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:25:33.861  		--rc genhtml_branch_coverage=1
00:25:33.861  		--rc genhtml_function_coverage=1
00:25:33.861  		--rc genhtml_legend=1
00:25:33.861  		--rc geninfo_all_blocks=1
00:25:33.861  		--rc geninfo_unexecuted_blocks=1
00:25:33.861  		
00:25:33.861  		'
00:25:33.861     17:09:26	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:25:33.861  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:25:33.861  		--rc genhtml_branch_coverage=1
00:25:33.861  		--rc genhtml_function_coverage=1
00:25:33.861  		--rc genhtml_legend=1
00:25:33.861  		--rc geninfo_all_blocks=1
00:25:33.861  		--rc geninfo_unexecuted_blocks=1
00:25:33.861  		
00:25:33.861  		'
00:25:33.861     17:09:26	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:25:33.861  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:25:33.861  		--rc genhtml_branch_coverage=1
00:25:33.861  		--rc genhtml_function_coverage=1
00:25:33.861  		--rc genhtml_legend=1
00:25:33.861  		--rc geninfo_all_blocks=1
00:25:33.861  		--rc geninfo_unexecuted_blocks=1
00:25:33.861  		
00:25:33.861  		'
00:25:33.861    17:09:26	-- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:25:33.861     17:09:26	-- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]]
00:25:33.861     17:09:26	-- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:25:33.861     17:09:26	-- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:25:33.861      17:09:26	-- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:25:33.861      17:09:26	-- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:25:33.861      17:09:26	-- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:25:33.861      17:09:26	-- paths/export.sh@5 -- # export PATH
00:25:33.861      17:09:26	-- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:25:33.861   17:09:26	-- dd/posix.sh@121 -- # msg[0]=', using AIO'
00:25:33.861   17:09:26	-- dd/posix.sh@122 -- # msg[1]=', liburing in use'
00:25:33.861   17:09:26	-- dd/posix.sh@123 -- # msg[2]=', disabling liburing, forcing AIO'
00:25:33.861   17:09:26	-- dd/posix.sh@125 -- # trap cleanup EXIT
00:25:33.861   17:09:26	-- dd/posix.sh@127 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
00:25:33.861   17:09:26	-- dd/posix.sh@128 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
00:25:33.861   17:09:26	-- dd/posix.sh@130 -- # tests
00:25:33.861   17:09:26	-- dd/posix.sh@99 -- # printf '* First test run%s\n' ', using AIO'
00:25:33.861  * First test run, using AIO
00:25:33.861   17:09:26	-- dd/posix.sh@102 -- # run_test dd_flag_append append
00:25:33.861   17:09:26	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:25:33.861   17:09:26	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:25:33.861   17:09:26	-- common/autotest_common.sh@10 -- # set +x
00:25:33.861  ************************************
00:25:33.861  START TEST dd_flag_append
00:25:33.861  ************************************
00:25:33.861   17:09:26	-- common/autotest_common.sh@1114 -- # append
00:25:33.861   17:09:26	-- dd/posix.sh@16 -- # local dump0
00:25:33.861   17:09:26	-- dd/posix.sh@17 -- # local dump1
00:25:33.861    17:09:26	-- dd/posix.sh@19 -- # gen_bytes 32
00:25:33.861    17:09:26	-- dd/common.sh@98 -- # xtrace_disable
00:25:33.861    17:09:26	-- common/autotest_common.sh@10 -- # set +x
00:25:33.861   17:09:26	-- dd/posix.sh@19 -- # dump0=4g6yrndwikqhgy3g15vdhr5m7y1niei3
00:25:33.861    17:09:26	-- dd/posix.sh@20 -- # gen_bytes 32
00:25:33.861    17:09:26	-- dd/common.sh@98 -- # xtrace_disable
00:25:33.861    17:09:26	-- common/autotest_common.sh@10 -- # set +x
00:25:34.120   17:09:26	-- dd/posix.sh@20 -- # dump1=6cqcq7zv0yniwqqmb1nyft5as4p9dctb
00:25:34.120   17:09:26	-- dd/posix.sh@22 -- # printf %s 4g6yrndwikqhgy3g15vdhr5m7y1niei3
00:25:34.120   17:09:26	-- dd/posix.sh@23 -- # printf %s 6cqcq7zv0yniwqqmb1nyft5as4p9dctb
00:25:34.120   17:09:26	-- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append
00:25:34.120  [2024-11-19 17:09:26.765654] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:25:34.120  [2024-11-19 17:09:26.765964] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144184 ]
00:25:34.120  [2024-11-19 17:09:26.906482] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:34.120  [2024-11-19 17:09:26.952591] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:25:34.378  
[2024-11-19T17:09:27.501Z] Copying: 32/32 [B] (average 31 kBps)
00:25:34.637  
00:25:34.637  ************************************
00:25:34.637  END TEST dd_flag_append
00:25:34.637  ************************************
00:25:34.637   17:09:27	-- dd/posix.sh@27 -- # [[ 6cqcq7zv0yniwqqmb1nyft5as4p9dctb4g6yrndwikqhgy3g15vdhr5m7y1niei3 == \6\c\q\c\q\7\z\v\0\y\n\i\w\q\q\m\b\1\n\y\f\t\5\a\s\4\p\9\d\c\t\b\4\g\6\y\r\n\d\w\i\k\q\h\g\y\3\g\1\5\v\d\h\r\5\m\7\y\1\n\i\e\i\3 ]]
00:25:34.637  
00:25:34.637  real	0m0.602s
00:25:34.637  user	0m0.286s
00:25:34.637  sys	0m0.172s
00:25:34.637   17:09:27	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:25:34.637   17:09:27	-- common/autotest_common.sh@10 -- # set +x
00:25:34.637   17:09:27	-- dd/posix.sh@103 -- # run_test dd_flag_directory directory
00:25:34.637   17:09:27	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:25:34.637   17:09:27	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:25:34.637   17:09:27	-- common/autotest_common.sh@10 -- # set +x
00:25:34.637  ************************************
00:25:34.637  START TEST dd_flag_directory
00:25:34.637  ************************************
00:25:34.637   17:09:27	-- common/autotest_common.sh@1114 -- # directory
00:25:34.637   17:09:27	-- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
00:25:34.637   17:09:27	-- common/autotest_common.sh@650 -- # local es=0
00:25:34.637   17:09:27	-- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
00:25:34.637   17:09:27	-- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:25:34.637   17:09:27	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:25:34.637    17:09:27	-- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:25:34.637   17:09:27	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:25:34.637    17:09:27	-- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:25:34.638   17:09:27	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:25:34.638   17:09:27	-- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:25:34.638   17:09:27	-- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]]
00:25:34.638   17:09:27	-- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
00:25:34.638  [2024-11-19 17:09:27.434443] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:25:34.638  [2024-11-19 17:09:27.434875] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144218 ]
00:25:34.897  [2024-11-19 17:09:27.587684] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:34.897  [2024-11-19 17:09:27.633841] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:25:34.897  [2024-11-19 17:09:27.697052] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory
00:25:34.897  [2024-11-19 17:09:27.697245] spdk_dd.c:1067:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory
00:25:34.897  [2024-11-19 17:09:27.697311] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:25:35.156  [2024-11-19 17:09:27.804044] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy
00:25:35.156   17:09:27	-- common/autotest_common.sh@653 -- # es=236
00:25:35.156   17:09:27	-- common/autotest_common.sh@661 -- # (( es > 128 ))
00:25:35.156   17:09:27	-- common/autotest_common.sh@662 -- # es=108
00:25:35.156   17:09:27	-- common/autotest_common.sh@663 -- # case "$es" in
00:25:35.156   17:09:27	-- common/autotest_common.sh@670 -- # es=1
00:25:35.156   17:09:27	-- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:25:35.156   17:09:27	-- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory
00:25:35.156   17:09:27	-- common/autotest_common.sh@650 -- # local es=0
00:25:35.156   17:09:27	-- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory
00:25:35.156   17:09:27	-- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:25:35.156   17:09:27	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:25:35.156    17:09:27	-- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:25:35.156   17:09:27	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:25:35.156    17:09:27	-- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:25:35.156   17:09:27	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:25:35.156   17:09:27	-- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:25:35.156   17:09:27	-- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]]
00:25:35.156   17:09:27	-- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory
00:25:35.156  [2024-11-19 17:09:28.003231] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:25:35.156  [2024-11-19 17:09:28.003751] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144230 ]
00:25:35.416  [2024-11-19 17:09:28.156018] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:35.416  [2024-11-19 17:09:28.200572] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:25:35.416  [2024-11-19 17:09:28.265228] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory
00:25:35.416  [2024-11-19 17:09:28.265508] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory
00:25:35.416  [2024-11-19 17:09:28.265576] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:25:35.673  [2024-11-19 17:09:28.372741] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy
00:25:35.673   17:09:28	-- common/autotest_common.sh@653 -- # es=236
00:25:35.674   17:09:28	-- common/autotest_common.sh@661 -- # (( es > 128 ))
00:25:35.674   17:09:28	-- common/autotest_common.sh@662 -- # es=108
00:25:35.674   17:09:28	-- common/autotest_common.sh@663 -- # case "$es" in
00:25:35.674   17:09:28	-- common/autotest_common.sh@670 -- # es=1
00:25:35.674   17:09:28	-- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:25:35.674  
00:25:35.674  real	0m1.152s
00:25:35.674  user	0m0.583s
00:25:35.674  sys	0m0.362s
00:25:35.674   17:09:28	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:25:35.674   17:09:28	-- common/autotest_common.sh@10 -- # set +x
00:25:35.674  ************************************
00:25:35.674  END TEST dd_flag_directory
00:25:35.674  ************************************
00:25:35.932   17:09:28	-- dd/posix.sh@104 -- # run_test dd_flag_nofollow nofollow
00:25:35.932   17:09:28	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:25:35.932   17:09:28	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:25:35.932   17:09:28	-- common/autotest_common.sh@10 -- # set +x
00:25:35.932  ************************************
00:25:35.932  START TEST dd_flag_nofollow
00:25:35.932  ************************************
00:25:35.932   17:09:28	-- common/autotest_common.sh@1114 -- # nofollow
00:25:35.933   17:09:28	-- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link
00:25:35.933   17:09:28	-- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link
00:25:35.933   17:09:28	-- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link
00:25:35.933   17:09:28	-- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link
00:25:35.933   17:09:28	-- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
00:25:35.933   17:09:28	-- common/autotest_common.sh@650 -- # local es=0
00:25:35.933   17:09:28	-- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
00:25:35.933   17:09:28	-- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:25:35.933   17:09:28	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:25:35.933    17:09:28	-- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:25:35.933   17:09:28	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:25:35.933    17:09:28	-- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:25:35.933   17:09:28	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:25:35.933   17:09:28	-- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:25:35.933   17:09:28	-- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]]
00:25:35.933   17:09:28	-- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
00:25:35.933  [2024-11-19 17:09:28.641697] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:25:35.933  [2024-11-19 17:09:28.642073] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144267 ]
00:25:36.222  [2024-11-19 17:09:28.787433] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:36.222  [2024-11-19 17:09:28.840165] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:25:36.222  [2024-11-19 17:09:28.904868] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links
00:25:36.222  [2024-11-19 17:09:28.905120] spdk_dd.c:1067:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links
00:25:36.222  [2024-11-19 17:09:28.905222] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:25:36.222  [2024-11-19 17:09:29.014994] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy
00:25:36.483   17:09:29	-- common/autotest_common.sh@653 -- # es=216
00:25:36.483   17:09:29	-- common/autotest_common.sh@661 -- # (( es > 128 ))
00:25:36.483   17:09:29	-- common/autotest_common.sh@662 -- # es=88
00:25:36.483   17:09:29	-- common/autotest_common.sh@663 -- # case "$es" in
00:25:36.483   17:09:29	-- common/autotest_common.sh@670 -- # es=1
00:25:36.483   17:09:29	-- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:25:36.483   17:09:29	-- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow
00:25:36.483   17:09:29	-- common/autotest_common.sh@650 -- # local es=0
00:25:36.483   17:09:29	-- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow
00:25:36.483   17:09:29	-- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:25:36.483   17:09:29	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:25:36.483    17:09:29	-- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:25:36.483   17:09:29	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:25:36.483    17:09:29	-- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:25:36.483   17:09:29	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:25:36.483   17:09:29	-- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:25:36.483   17:09:29	-- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]]
00:25:36.483   17:09:29	-- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow
00:25:36.483  [2024-11-19 17:09:29.221188] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:25:36.483  [2024-11-19 17:09:29.221555] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144279 ]
00:25:36.741  [2024-11-19 17:09:29.363685] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:36.741  [2024-11-19 17:09:29.408655] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:25:36.741  [2024-11-19 17:09:29.471376] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links
00:25:36.741  [2024-11-19 17:09:29.471639] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links
00:25:36.741  [2024-11-19 17:09:29.471724] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:25:36.741  [2024-11-19 17:09:29.576490] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy
00:25:36.999   17:09:29	-- common/autotest_common.sh@653 -- # es=216
00:25:36.999   17:09:29	-- common/autotest_common.sh@661 -- # (( es > 128 ))
00:25:36.999   17:09:29	-- common/autotest_common.sh@662 -- # es=88
00:25:36.999   17:09:29	-- common/autotest_common.sh@663 -- # case "$es" in
00:25:36.999   17:09:29	-- common/autotest_common.sh@670 -- # es=1
00:25:36.999   17:09:29	-- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:25:36.999   17:09:29	-- dd/posix.sh@46 -- # gen_bytes 512
00:25:36.999   17:09:29	-- dd/common.sh@98 -- # xtrace_disable
00:25:36.999   17:09:29	-- common/autotest_common.sh@10 -- # set +x
00:25:36.999   17:09:29	-- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
00:25:36.999  [2024-11-19 17:09:29.797488] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:25:36.999  [2024-11-19 17:09:29.797992] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144291 ]
00:25:37.258  [2024-11-19 17:09:29.951544] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:37.258  [2024-11-19 17:09:30.002038] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:25:37.258  
[2024-11-19T17:09:30.380Z] Copying: 512/512 [B] (average 500 kBps)
00:25:37.516  
00:25:37.516  ************************************
00:25:37.516  END TEST dd_flag_nofollow
00:25:37.516  ************************************
00:25:37.516   17:09:30	-- dd/posix.sh@49 -- # [[ zz51j0c9hlfn4yxcfmnr513d9fmsu8cboapaql1z65tztqyk7bylj66qlaq0wvfhz9rmyjx68lp01fyi5iukv07jnacb6stuhdzpy266f6ztmfv72xrxhutyrsslb95pz9x8olisfjmus1kwyiub956dkviydx1v4o2x96xig3g5zgkimijzfv4p4me4a4zl76h08usxl9enmlm71hnnmvfuxub0km0qecsxgrhvdfsbt8uxtofjszyupwubmtm9ame3ffds9u9vanlq1r4ymjvrmyjr0f5hfbjzyfah9nj7s2h942zh2z9xd77xldals64euoio07923xeyagvqu2n7am9race98hx44kmlxv1ebs2fplf9z2oa2igvn864jry170srkdebv3cilwlo8pn1nc1z4vue3iib4h8uabahcx2ws4u7t4q7qwfqtthj3hfz6w92vxwfngy4g1e6w0klnbcaq20s241ye4yhbjrxdsjddewny8gz7r8rs64t == \z\z\5\1\j\0\c\9\h\l\f\n\4\y\x\c\f\m\n\r\5\1\3\d\9\f\m\s\u\8\c\b\o\a\p\a\q\l\1\z\6\5\t\z\t\q\y\k\7\b\y\l\j\6\6\q\l\a\q\0\w\v\f\h\z\9\r\m\y\j\x\6\8\l\p\0\1\f\y\i\5\i\u\k\v\0\7\j\n\a\c\b\6\s\t\u\h\d\z\p\y\2\6\6\f\6\z\t\m\f\v\7\2\x\r\x\h\u\t\y\r\s\s\l\b\9\5\p\z\9\x\8\o\l\i\s\f\j\m\u\s\1\k\w\y\i\u\b\9\5\6\d\k\v\i\y\d\x\1\v\4\o\2\x\9\6\x\i\g\3\g\5\z\g\k\i\m\i\j\z\f\v\4\p\4\m\e\4\a\4\z\l\7\6\h\0\8\u\s\x\l\9\e\n\m\l\m\7\1\h\n\n\m\v\f\u\x\u\b\0\k\m\0\q\e\c\s\x\g\r\h\v\d\f\s\b\t\8\u\x\t\o\f\j\s\z\y\u\p\w\u\b\m\t\m\9\a\m\e\3\f\f\d\s\9\u\9\v\a\n\l\q\1\r\4\y\m\j\v\r\m\y\j\r\0\f\5\h\f\b\j\z\y\f\a\h\9\n\j\7\s\2\h\9\4\2\z\h\2\z\9\x\d\7\7\x\l\d\a\l\s\6\4\e\u\o\i\o\0\7\9\2\3\x\e\y\a\g\v\q\u\2\n\7\a\m\9\r\a\c\e\9\8\h\x\4\4\k\m\l\x\v\1\e\b\s\2\f\p\l\f\9\z\2\o\a\2\i\g\v\n\8\6\4\j\r\y\1\7\0\s\r\k\d\e\b\v\3\c\i\l\w\l\o\8\p\n\1\n\c\1\z\4\v\u\e\3\i\i\b\4\h\8\u\a\b\a\h\c\x\2\w\s\4\u\7\t\4\q\7\q\w\f\q\t\t\h\j\3\h\f\z\6\w\9\2\v\x\w\f\n\g\y\4\g\1\e\6\w\0\k\l\n\b\c\a\q\2\0\s\2\4\1\y\e\4\y\h\b\j\r\x\d\s\j\d\d\e\w\n\y\8\g\z\7\r\8\r\s\6\4\t ]]
00:25:37.516  
00:25:37.516  real	0m1.779s
00:25:37.516  user	0m0.901s
00:25:37.516  sys	0m0.531s
00:25:37.516   17:09:30	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:25:37.516   17:09:30	-- common/autotest_common.sh@10 -- # set +x
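For reference, the shape of the nofollow scenario traced above, with repository paths shortened; the suite wraps the two expected-to-fail invocations in its NOT helper:

    ln -fs dd.dump0 dd.dump0.link
    ln -fs dd.dump1 dd.dump1.link

    # O_NOFOLLOW on a final path component that is a symlink fails with
    # ELOOP, which strerror() renders as "Too many levels of symbolic links"
    NOT spdk_dd --if=dd.dump0.link --iflag=nofollow --of=dd.dump1
    NOT spdk_dd --if=dd.dump0 --of=dd.dump1.link --oflag=nofollow

    # without the flag the link is dereferenced and the 512 B copy succeeds
    spdk_dd --if=dd.dump0.link --of=dd.dump1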
00:25:37.775   17:09:30	-- dd/posix.sh@105 -- # run_test dd_flag_noatime noatime
00:25:37.775   17:09:30	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:25:37.775   17:09:30	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:25:37.775   17:09:30	-- common/autotest_common.sh@10 -- # set +x
00:25:37.775  ************************************
00:25:37.775  START TEST dd_flag_noatime
00:25:37.775  ************************************
00:25:37.775   17:09:30	-- common/autotest_common.sh@1114 -- # noatime
00:25:37.775   17:09:30	-- dd/posix.sh@53 -- # local atime_if
00:25:37.775   17:09:30	-- dd/posix.sh@54 -- # local atime_of
00:25:37.775   17:09:30	-- dd/posix.sh@58 -- # gen_bytes 512
00:25:37.775   17:09:30	-- dd/common.sh@98 -- # xtrace_disable
00:25:37.775   17:09:30	-- common/autotest_common.sh@10 -- # set +x
00:25:37.775    17:09:30	-- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
00:25:37.775   17:09:30	-- dd/posix.sh@60 -- # atime_if=1732036170
00:25:37.775    17:09:30	-- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
00:25:37.775   17:09:30	-- dd/posix.sh@61 -- # atime_of=1732036170
00:25:37.775   17:09:30	-- dd/posix.sh@66 -- # sleep 1
00:25:38.713   17:09:31	-- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
00:25:38.713  [2024-11-19 17:09:31.511457] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:25:38.713  [2024-11-19 17:09:31.511920] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144347 ]
00:25:38.970  [2024-11-19 17:09:31.672039] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:38.970  [2024-11-19 17:09:31.724101] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:25:38.970  
[2024-11-19T17:09:32.092Z] Copying: 512/512 [B] (average 500 kBps)
00:25:39.228  
00:25:39.486    17:09:32	-- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
00:25:39.486   17:09:32	-- dd/posix.sh@69 -- # (( atime_if == 1732036170 ))
00:25:39.486    17:09:32	-- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
00:25:39.486   17:09:32	-- dd/posix.sh@70 -- # (( atime_of == 1732036170 ))
00:25:39.486   17:09:32	-- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
00:25:39.486  [2024-11-19 17:09:32.171199] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:25:39.486  [2024-11-19 17:09:32.171661] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144355 ]
00:25:39.486  [2024-11-19 17:09:32.327011] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:39.744  [2024-11-19 17:09:32.376675] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:25:39.744  
[2024-11-19T17:09:32.865Z] Copying: 512/512 [B] (average 500 kBps)
00:25:40.001  
00:25:40.001    17:09:32	-- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
00:25:40.001   17:09:32	-- dd/posix.sh@73 -- # (( atime_if < 1732036172 ))
00:25:40.001  
00:25:40.001  real	0m2.321s
00:25:40.001  user	0m0.639s
00:25:40.001  sys	0m0.394s
00:25:40.001   17:09:32	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:25:40.001   17:09:32	-- common/autotest_common.sh@10 -- # set +x
00:25:40.001  ************************************
00:25:40.001  END TEST dd_flag_noatime
00:25:40.001  ************************************
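The noatime test above reduces to comparing the input file's access time before and after each copy; a sketch with paths shortened:

    atime_if=$(stat --printf=%X dd.dump0)   # access time before any copy
    sleep 1                                 # make a later atime observable

    # with noatime the read must leave the access time untouched
    spdk_dd --if=dd.dump0 --iflag=noatime --of=dd.dump1
    (( $(stat --printf=%X dd.dump0) == atime_if ))

    # a plain copy reads normally, so the atime moves past the saved value
    spdk_dd --if=dd.dump0 --of=dd.dump1
    (( atime_if < $(stat --printf=%X dd.dump0) ))

The final assertion depends on the filesystem updating atime on read at all, so it would not hold on a noatime mount.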
00:25:40.001   17:09:32	-- dd/posix.sh@106 -- # run_test dd_flags_misc io
00:25:40.001   17:09:32	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:25:40.001   17:09:32	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:25:40.001   17:09:32	-- common/autotest_common.sh@10 -- # set +x
00:25:40.001  ************************************
00:25:40.001  START TEST dd_flags_misc
00:25:40.001  ************************************
00:25:40.001   17:09:32	-- common/autotest_common.sh@1114 -- # io
00:25:40.001   17:09:32	-- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw
00:25:40.001   17:09:32	-- dd/posix.sh@81 -- # flags_ro=(direct nonblock)
00:25:40.001   17:09:32	-- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync)
00:25:40.001   17:09:32	-- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}"
00:25:40.001   17:09:32	-- dd/posix.sh@86 -- # gen_bytes 512
00:25:40.001   17:09:32	-- dd/common.sh@98 -- # xtrace_disable
00:25:40.001   17:09:32	-- common/autotest_common.sh@10 -- # set +x
00:25:40.001   17:09:32	-- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}"
00:25:40.001   17:09:32	-- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct
00:25:40.259  [2024-11-19 17:09:32.880423] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:25:40.259  [2024-11-19 17:09:32.880870] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144391 ]
00:25:40.259  [2024-11-19 17:09:33.035140] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:40.259  [2024-11-19 17:09:33.084839] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:25:40.517  
[2024-11-19T17:09:33.640Z] Copying: 512/512 [B] (average 500 kBps)
00:25:40.776  
00:25:40.776   17:09:33	-- dd/posix.sh@93 -- # [[ 4h4jwuzd0zn1n6mbi4cvbiomnbbrxjcijmkbyymrl63pxnq6jezwx0k16sl3tcmls6cjc7fzygqjobj46zk6nppezhih48ef9ell0gmps0yv2w8z1lh1yd3mzk2t9yt17idg4a51l7p86zsz1zkw3mojl2v8jp23tomu7jgkq05fp27ex3sbq06gh19eo6hcvabkr5r463p7h9a3e2zy0e2dyevyc2c605ebb6iag4sfekngwzn3qg4we6ln520mrcg1k7ewb53wnkmdhevp0qbzpq9tx0yigwp689x5lf09e8v2zgtc3uttzk2m2ht0ssydna76x974a013xh4ultedy7ogzo16u1ktr97vuvg54qe5vo557128uflr69ieb0y0kirwjmjrxk95tunk6e0qmwp07tsh44kdiv9f605ht07gu5ih7yig8oqrqs2uzdyfyhv5yxfbjsq0pqhin87fdyrlrogo32grrslk1mqy8pjauozcadxyugzjy17y == \4\h\4\j\w\u\z\d\0\z\n\1\n\6\m\b\i\4\c\v\b\i\o\m\n\b\b\r\x\j\c\i\j\m\k\b\y\y\m\r\l\6\3\p\x\n\q\6\j\e\z\w\x\0\k\1\6\s\l\3\t\c\m\l\s\6\c\j\c\7\f\z\y\g\q\j\o\b\j\4\6\z\k\6\n\p\p\e\z\h\i\h\4\8\e\f\9\e\l\l\0\g\m\p\s\0\y\v\2\w\8\z\1\l\h\1\y\d\3\m\z\k\2\t\9\y\t\1\7\i\d\g\4\a\5\1\l\7\p\8\6\z\s\z\1\z\k\w\3\m\o\j\l\2\v\8\j\p\2\3\t\o\m\u\7\j\g\k\q\0\5\f\p\2\7\e\x\3\s\b\q\0\6\g\h\1\9\e\o\6\h\c\v\a\b\k\r\5\r\4\6\3\p\7\h\9\a\3\e\2\z\y\0\e\2\d\y\e\v\y\c\2\c\6\0\5\e\b\b\6\i\a\g\4\s\f\e\k\n\g\w\z\n\3\q\g\4\w\e\6\l\n\5\2\0\m\r\c\g\1\k\7\e\w\b\5\3\w\n\k\m\d\h\e\v\p\0\q\b\z\p\q\9\t\x\0\y\i\g\w\p\6\8\9\x\5\l\f\0\9\e\8\v\2\z\g\t\c\3\u\t\t\z\k\2\m\2\h\t\0\s\s\y\d\n\a\7\6\x\9\7\4\a\0\1\3\x\h\4\u\l\t\e\d\y\7\o\g\z\o\1\6\u\1\k\t\r\9\7\v\u\v\g\5\4\q\e\5\v\o\5\5\7\1\2\8\u\f\l\r\6\9\i\e\b\0\y\0\k\i\r\w\j\m\j\r\x\k\9\5\t\u\n\k\6\e\0\q\m\w\p\0\7\t\s\h\4\4\k\d\i\v\9\f\6\0\5\h\t\0\7\g\u\5\i\h\7\y\i\g\8\o\q\r\q\s\2\u\z\d\y\f\y\h\v\5\y\x\f\b\j\s\q\0\p\q\h\i\n\8\7\f\d\y\r\l\r\o\g\o\3\2\g\r\r\s\l\k\1\m\q\y\8\p\j\a\u\o\z\c\a\d\x\y\u\g\z\j\y\1\7\y ]]
00:25:40.776   17:09:33	-- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}"
00:25:40.776   17:09:33	-- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock
00:25:40.776  [2024-11-19 17:09:33.479919] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:25:40.776  [2024-11-19 17:09:33.480376] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144405 ]
00:25:40.776  [2024-11-19 17:09:33.622114] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:41.033  [2024-11-19 17:09:33.671944] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:25:41.033  
[2024-11-19T17:09:34.155Z] Copying: 512/512 [B] (average 500 kBps)
00:25:41.291  
00:25:41.291   17:09:34	-- dd/posix.sh@93 -- # [[ 4h4jwuzd0zn1n6mbi4cvbiomnbbrxjcijmkbyymrl63pxnq6jezwx0k16sl3tcmls6cjc7fzygqjobj46zk6nppezhih48ef9ell0gmps0yv2w8z1lh1yd3mzk2t9yt17idg4a51l7p86zsz1zkw3mojl2v8jp23tomu7jgkq05fp27ex3sbq06gh19eo6hcvabkr5r463p7h9a3e2zy0e2dyevyc2c605ebb6iag4sfekngwzn3qg4we6ln520mrcg1k7ewb53wnkmdhevp0qbzpq9tx0yigwp689x5lf09e8v2zgtc3uttzk2m2ht0ssydna76x974a013xh4ultedy7ogzo16u1ktr97vuvg54qe5vo557128uflr69ieb0y0kirwjmjrxk95tunk6e0qmwp07tsh44kdiv9f605ht07gu5ih7yig8oqrqs2uzdyfyhv5yxfbjsq0pqhin87fdyrlrogo32grrslk1mqy8pjauozcadxyugzjy17y == \4\h\4\j\w\u\z\d\0\z\n\1\n\6\m\b\i\4\c\v\b\i\o\m\n\b\b\r\x\j\c\i\j\m\k\b\y\y\m\r\l\6\3\p\x\n\q\6\j\e\z\w\x\0\k\1\6\s\l\3\t\c\m\l\s\6\c\j\c\7\f\z\y\g\q\j\o\b\j\4\6\z\k\6\n\p\p\e\z\h\i\h\4\8\e\f\9\e\l\l\0\g\m\p\s\0\y\v\2\w\8\z\1\l\h\1\y\d\3\m\z\k\2\t\9\y\t\1\7\i\d\g\4\a\5\1\l\7\p\8\6\z\s\z\1\z\k\w\3\m\o\j\l\2\v\8\j\p\2\3\t\o\m\u\7\j\g\k\q\0\5\f\p\2\7\e\x\3\s\b\q\0\6\g\h\1\9\e\o\6\h\c\v\a\b\k\r\5\r\4\6\3\p\7\h\9\a\3\e\2\z\y\0\e\2\d\y\e\v\y\c\2\c\6\0\5\e\b\b\6\i\a\g\4\s\f\e\k\n\g\w\z\n\3\q\g\4\w\e\6\l\n\5\2\0\m\r\c\g\1\k\7\e\w\b\5\3\w\n\k\m\d\h\e\v\p\0\q\b\z\p\q\9\t\x\0\y\i\g\w\p\6\8\9\x\5\l\f\0\9\e\8\v\2\z\g\t\c\3\u\t\t\z\k\2\m\2\h\t\0\s\s\y\d\n\a\7\6\x\9\7\4\a\0\1\3\x\h\4\u\l\t\e\d\y\7\o\g\z\o\1\6\u\1\k\t\r\9\7\v\u\v\g\5\4\q\e\5\v\o\5\5\7\1\2\8\u\f\l\r\6\9\i\e\b\0\y\0\k\i\r\w\j\m\j\r\x\k\9\5\t\u\n\k\6\e\0\q\m\w\p\0\7\t\s\h\4\4\k\d\i\v\9\f\6\0\5\h\t\0\7\g\u\5\i\h\7\y\i\g\8\o\q\r\q\s\2\u\z\d\y\f\y\h\v\5\y\x\f\b\j\s\q\0\p\q\h\i\n\8\7\f\d\y\r\l\r\o\g\o\3\2\g\r\r\s\l\k\1\m\q\y\8\p\j\a\u\o\z\c\a\d\x\y\u\g\z\j\y\1\7\y ]]
00:25:41.291   17:09:34	-- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}"
00:25:41.291   17:09:34	-- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync
00:25:41.291  [2024-11-19 17:09:34.075620] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:25:41.291  [2024-11-19 17:09:34.075979] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144417 ]
00:25:41.550  [2024-11-19 17:09:34.215318] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:41.550  [2024-11-19 17:09:34.261400] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:25:41.550  
[2024-11-19T17:09:34.671Z] Copying: 512/512 [B] (average 100 kBps)
00:25:41.807  
00:25:41.807   17:09:34	-- dd/posix.sh@93 -- # [[ 4h4jwuzd0zn1n6mbi4cvbiomnbbrxjcijmkbyymrl63pxnq6jezwx0k16sl3tcmls6cjc7fzygqjobj46zk6nppezhih48ef9ell0gmps0yv2w8z1lh1yd3mzk2t9yt17idg4a51l7p86zsz1zkw3mojl2v8jp23tomu7jgkq05fp27ex3sbq06gh19eo6hcvabkr5r463p7h9a3e2zy0e2dyevyc2c605ebb6iag4sfekngwzn3qg4we6ln520mrcg1k7ewb53wnkmdhevp0qbzpq9tx0yigwp689x5lf09e8v2zgtc3uttzk2m2ht0ssydna76x974a013xh4ultedy7ogzo16u1ktr97vuvg54qe5vo557128uflr69ieb0y0kirwjmjrxk95tunk6e0qmwp07tsh44kdiv9f605ht07gu5ih7yig8oqrqs2uzdyfyhv5yxfbjsq0pqhin87fdyrlrogo32grrslk1mqy8pjauozcadxyugzjy17y == \4\h\4\j\w\u\z\d\0\z\n\1\n\6\m\b\i\4\c\v\b\i\o\m\n\b\b\r\x\j\c\i\j\m\k\b\y\y\m\r\l\6\3\p\x\n\q\6\j\e\z\w\x\0\k\1\6\s\l\3\t\c\m\l\s\6\c\j\c\7\f\z\y\g\q\j\o\b\j\4\6\z\k\6\n\p\p\e\z\h\i\h\4\8\e\f\9\e\l\l\0\g\m\p\s\0\y\v\2\w\8\z\1\l\h\1\y\d\3\m\z\k\2\t\9\y\t\1\7\i\d\g\4\a\5\1\l\7\p\8\6\z\s\z\1\z\k\w\3\m\o\j\l\2\v\8\j\p\2\3\t\o\m\u\7\j\g\k\q\0\5\f\p\2\7\e\x\3\s\b\q\0\6\g\h\1\9\e\o\6\h\c\v\a\b\k\r\5\r\4\6\3\p\7\h\9\a\3\e\2\z\y\0\e\2\d\y\e\v\y\c\2\c\6\0\5\e\b\b\6\i\a\g\4\s\f\e\k\n\g\w\z\n\3\q\g\4\w\e\6\l\n\5\2\0\m\r\c\g\1\k\7\e\w\b\5\3\w\n\k\m\d\h\e\v\p\0\q\b\z\p\q\9\t\x\0\y\i\g\w\p\6\8\9\x\5\l\f\0\9\e\8\v\2\z\g\t\c\3\u\t\t\z\k\2\m\2\h\t\0\s\s\y\d\n\a\7\6\x\9\7\4\a\0\1\3\x\h\4\u\l\t\e\d\y\7\o\g\z\o\1\6\u\1\k\t\r\9\7\v\u\v\g\5\4\q\e\5\v\o\5\5\7\1\2\8\u\f\l\r\6\9\i\e\b\0\y\0\k\i\r\w\j\m\j\r\x\k\9\5\t\u\n\k\6\e\0\q\m\w\p\0\7\t\s\h\4\4\k\d\i\v\9\f\6\0\5\h\t\0\7\g\u\5\i\h\7\y\i\g\8\o\q\r\q\s\2\u\z\d\y\f\y\h\v\5\y\x\f\b\j\s\q\0\p\q\h\i\n\8\7\f\d\y\r\l\r\o\g\o\3\2\g\r\r\s\l\k\1\m\q\y\8\p\j\a\u\o\z\c\a\d\x\y\u\g\z\j\y\1\7\y ]]
00:25:41.807   17:09:34	-- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}"
00:25:41.807   17:09:34	-- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync
00:25:42.066  [2024-11-19 17:09:34.693265] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:25:42.066  [2024-11-19 17:09:34.693740] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144429 ]
00:25:42.066  [2024-11-19 17:09:34.841247] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:42.066  [2024-11-19 17:09:34.909033] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:25:42.324  
[2024-11-19T17:09:35.447Z] Copying: 512/512 [B] (average 166 kBps)
00:25:42.583  
00:25:42.583   17:09:35	-- dd/posix.sh@93 -- # [[ 4h4jwuzd0zn1n6mbi4cvbiomnbbrxjcijmkbyymrl63pxnq6jezwx0k16sl3tcmls6cjc7fzygqjobj46zk6nppezhih48ef9ell0gmps0yv2w8z1lh1yd3mzk2t9yt17idg4a51l7p86zsz1zkw3mojl2v8jp23tomu7jgkq05fp27ex3sbq06gh19eo6hcvabkr5r463p7h9a3e2zy0e2dyevyc2c605ebb6iag4sfekngwzn3qg4we6ln520mrcg1k7ewb53wnkmdhevp0qbzpq9tx0yigwp689x5lf09e8v2zgtc3uttzk2m2ht0ssydna76x974a013xh4ultedy7ogzo16u1ktr97vuvg54qe5vo557128uflr69ieb0y0kirwjmjrxk95tunk6e0qmwp07tsh44kdiv9f605ht07gu5ih7yig8oqrqs2uzdyfyhv5yxfbjsq0pqhin87fdyrlrogo32grrslk1mqy8pjauozcadxyugzjy17y == \4\h\4\j\w\u\z\d\0\z\n\1\n\6\m\b\i\4\c\v\b\i\o\m\n\b\b\r\x\j\c\i\j\m\k\b\y\y\m\r\l\6\3\p\x\n\q\6\j\e\z\w\x\0\k\1\6\s\l\3\t\c\m\l\s\6\c\j\c\7\f\z\y\g\q\j\o\b\j\4\6\z\k\6\n\p\p\e\z\h\i\h\4\8\e\f\9\e\l\l\0\g\m\p\s\0\y\v\2\w\8\z\1\l\h\1\y\d\3\m\z\k\2\t\9\y\t\1\7\i\d\g\4\a\5\1\l\7\p\8\6\z\s\z\1\z\k\w\3\m\o\j\l\2\v\8\j\p\2\3\t\o\m\u\7\j\g\k\q\0\5\f\p\2\7\e\x\3\s\b\q\0\6\g\h\1\9\e\o\6\h\c\v\a\b\k\r\5\r\4\6\3\p\7\h\9\a\3\e\2\z\y\0\e\2\d\y\e\v\y\c\2\c\6\0\5\e\b\b\6\i\a\g\4\s\f\e\k\n\g\w\z\n\3\q\g\4\w\e\6\l\n\5\2\0\m\r\c\g\1\k\7\e\w\b\5\3\w\n\k\m\d\h\e\v\p\0\q\b\z\p\q\9\t\x\0\y\i\g\w\p\6\8\9\x\5\l\f\0\9\e\8\v\2\z\g\t\c\3\u\t\t\z\k\2\m\2\h\t\0\s\s\y\d\n\a\7\6\x\9\7\4\a\0\1\3\x\h\4\u\l\t\e\d\y\7\o\g\z\o\1\6\u\1\k\t\r\9\7\v\u\v\g\5\4\q\e\5\v\o\5\5\7\1\2\8\u\f\l\r\6\9\i\e\b\0\y\0\k\i\r\w\j\m\j\r\x\k\9\5\t\u\n\k\6\e\0\q\m\w\p\0\7\t\s\h\4\4\k\d\i\v\9\f\6\0\5\h\t\0\7\g\u\5\i\h\7\y\i\g\8\o\q\r\q\s\2\u\z\d\y\f\y\h\v\5\y\x\f\b\j\s\q\0\p\q\h\i\n\8\7\f\d\y\r\l\r\o\g\o\3\2\g\r\r\s\l\k\1\m\q\y\8\p\j\a\u\o\z\c\a\d\x\y\u\g\z\j\y\1\7\y ]]
00:25:42.583   17:09:35	-- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}"
00:25:42.583   17:09:35	-- dd/posix.sh@86 -- # gen_bytes 512
00:25:42.583   17:09:35	-- dd/common.sh@98 -- # xtrace_disable
00:25:42.583   17:09:35	-- common/autotest_common.sh@10 -- # set +x
00:25:42.583   17:09:35	-- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}"
00:25:42.583   17:09:35	-- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct
00:25:42.583  [2024-11-19 17:09:35.401699] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:25:42.583  [2024-11-19 17:09:35.402221] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144439 ]
00:25:42.841  [2024-11-19 17:09:35.560113] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:42.841  [2024-11-19 17:09:35.611461] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:25:42.841  
[2024-11-19T17:09:36.272Z] Copying: 512/512 [B] (average 500 kBps)
00:25:43.408  
00:25:43.408   17:09:35	-- dd/posix.sh@93 -- # [[ 7b6laf55yqn4elf00ij5lu517xzjc6sl6ejvqyh3e0gtkkj0whbqfsq4stgh2vse5yqkkwo4owlcja0j9shqpb40zjv4a34ou9vv6uonk2g4594r0q6vfcd1se9ot7x80lw0hgl2a4cjwjn76jtus62fqgmpfky72haj88kia54lf0umvc5nemptmzhobgpt7ob8yequu5b2x78zcnxqbn6dc43i4zc6ifs1ff9u15f1j7dl91bmutzxb9y0brepho18yarqj64x3p1yykrykmm43g22vd47hlfbfbkhmpu4t0w5mu2fb9xm1ihpyhblxz2uptt7pd4ltdk3b8uqa9kcm9zhxk51a34dx8a599nf4wkc0fg9ua7nibxm7jrb2jd96ouxnjnw0fiwdzzeya7gtg7x6dzefoam0cfntbq591re25739vffptdcz5hq87agnxu444oihbgfhedrrurfq3r8j1pvtzj6gnrf4rc85jjzaudhymrirkq7w7uz == \7\b\6\l\a\f\5\5\y\q\n\4\e\l\f\0\0\i\j\5\l\u\5\1\7\x\z\j\c\6\s\l\6\e\j\v\q\y\h\3\e\0\g\t\k\k\j\0\w\h\b\q\f\s\q\4\s\t\g\h\2\v\s\e\5\y\q\k\k\w\o\4\o\w\l\c\j\a\0\j\9\s\h\q\p\b\4\0\z\j\v\4\a\3\4\o\u\9\v\v\6\u\o\n\k\2\g\4\5\9\4\r\0\q\6\v\f\c\d\1\s\e\9\o\t\7\x\8\0\l\w\0\h\g\l\2\a\4\c\j\w\j\n\7\6\j\t\u\s\6\2\f\q\g\m\p\f\k\y\7\2\h\a\j\8\8\k\i\a\5\4\l\f\0\u\m\v\c\5\n\e\m\p\t\m\z\h\o\b\g\p\t\7\o\b\8\y\e\q\u\u\5\b\2\x\7\8\z\c\n\x\q\b\n\6\d\c\4\3\i\4\z\c\6\i\f\s\1\f\f\9\u\1\5\f\1\j\7\d\l\9\1\b\m\u\t\z\x\b\9\y\0\b\r\e\p\h\o\1\8\y\a\r\q\j\6\4\x\3\p\1\y\y\k\r\y\k\m\m\4\3\g\2\2\v\d\4\7\h\l\f\b\f\b\k\h\m\p\u\4\t\0\w\5\m\u\2\f\b\9\x\m\1\i\h\p\y\h\b\l\x\z\2\u\p\t\t\7\p\d\4\l\t\d\k\3\b\8\u\q\a\9\k\c\m\9\z\h\x\k\5\1\a\3\4\d\x\8\a\5\9\9\n\f\4\w\k\c\0\f\g\9\u\a\7\n\i\b\x\m\7\j\r\b\2\j\d\9\6\o\u\x\n\j\n\w\0\f\i\w\d\z\z\e\y\a\7\g\t\g\7\x\6\d\z\e\f\o\a\m\0\c\f\n\t\b\q\5\9\1\r\e\2\5\7\3\9\v\f\f\p\t\d\c\z\5\h\q\8\7\a\g\n\x\u\4\4\4\o\i\h\b\g\f\h\e\d\r\r\u\r\f\q\3\r\8\j\1\p\v\t\z\j\6\g\n\r\f\4\r\c\8\5\j\j\z\a\u\d\h\y\m\r\i\r\k\q\7\w\7\u\z ]]
00:25:43.408   17:09:35	-- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}"
00:25:43.408   17:09:35	-- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock
00:25:43.408  [2024-11-19 17:09:36.044330] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:25:43.408  [2024-11-19 17:09:36.044813] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144451 ]
00:25:43.408  [2024-11-19 17:09:36.199472] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:43.408  [2024-11-19 17:09:36.243352] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:25:43.666  
[2024-11-19T17:09:36.788Z] Copying: 512/512 [B] (average 500 kBps)
00:25:43.924  
00:25:43.924   17:09:36	-- dd/posix.sh@93 -- # [[ 7b6laf55yqn4elf00ij5lu517xzjc6sl6ejvqyh3e0gtkkj0whbqfsq4stgh2vse5yqkkwo4owlcja0j9shqpb40zjv4a34ou9vv6uonk2g4594r0q6vfcd1se9ot7x80lw0hgl2a4cjwjn76jtus62fqgmpfky72haj88kia54lf0umvc5nemptmzhobgpt7ob8yequu5b2x78zcnxqbn6dc43i4zc6ifs1ff9u15f1j7dl91bmutzxb9y0brepho18yarqj64x3p1yykrykmm43g22vd47hlfbfbkhmpu4t0w5mu2fb9xm1ihpyhblxz2uptt7pd4ltdk3b8uqa9kcm9zhxk51a34dx8a599nf4wkc0fg9ua7nibxm7jrb2jd96ouxnjnw0fiwdzzeya7gtg7x6dzefoam0cfntbq591re25739vffptdcz5hq87agnxu444oihbgfhedrrurfq3r8j1pvtzj6gnrf4rc85jjzaudhymrirkq7w7uz == \7\b\6\l\a\f\5\5\y\q\n\4\e\l\f\0\0\i\j\5\l\u\5\1\7\x\z\j\c\6\s\l\6\e\j\v\q\y\h\3\e\0\g\t\k\k\j\0\w\h\b\q\f\s\q\4\s\t\g\h\2\v\s\e\5\y\q\k\k\w\o\4\o\w\l\c\j\a\0\j\9\s\h\q\p\b\4\0\z\j\v\4\a\3\4\o\u\9\v\v\6\u\o\n\k\2\g\4\5\9\4\r\0\q\6\v\f\c\d\1\s\e\9\o\t\7\x\8\0\l\w\0\h\g\l\2\a\4\c\j\w\j\n\7\6\j\t\u\s\6\2\f\q\g\m\p\f\k\y\7\2\h\a\j\8\8\k\i\a\5\4\l\f\0\u\m\v\c\5\n\e\m\p\t\m\z\h\o\b\g\p\t\7\o\b\8\y\e\q\u\u\5\b\2\x\7\8\z\c\n\x\q\b\n\6\d\c\4\3\i\4\z\c\6\i\f\s\1\f\f\9\u\1\5\f\1\j\7\d\l\9\1\b\m\u\t\z\x\b\9\y\0\b\r\e\p\h\o\1\8\y\a\r\q\j\6\4\x\3\p\1\y\y\k\r\y\k\m\m\4\3\g\2\2\v\d\4\7\h\l\f\b\f\b\k\h\m\p\u\4\t\0\w\5\m\u\2\f\b\9\x\m\1\i\h\p\y\h\b\l\x\z\2\u\p\t\t\7\p\d\4\l\t\d\k\3\b\8\u\q\a\9\k\c\m\9\z\h\x\k\5\1\a\3\4\d\x\8\a\5\9\9\n\f\4\w\k\c\0\f\g\9\u\a\7\n\i\b\x\m\7\j\r\b\2\j\d\9\6\o\u\x\n\j\n\w\0\f\i\w\d\z\z\e\y\a\7\g\t\g\7\x\6\d\z\e\f\o\a\m\0\c\f\n\t\b\q\5\9\1\r\e\2\5\7\3\9\v\f\f\p\t\d\c\z\5\h\q\8\7\a\g\n\x\u\4\4\4\o\i\h\b\g\f\h\e\d\r\r\u\r\f\q\3\r\8\j\1\p\v\t\z\j\6\g\n\r\f\4\r\c\8\5\j\j\z\a\u\d\h\y\m\r\i\r\k\q\7\w\7\u\z ]]
00:25:43.924   17:09:36	-- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}"
00:25:43.924   17:09:36	-- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync
00:25:43.925  [2024-11-19 17:09:36.656337] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:25:43.925  [2024-11-19 17:09:36.656816] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144468 ]
00:25:44.183  [2024-11-19 17:09:36.810253] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:44.183  [2024-11-19 17:09:36.856215] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:25:44.183  
[2024-11-19T17:09:37.311Z] Copying: 512/512 [B] (average 125 kBps)
00:25:44.447  
00:25:44.448   17:09:37	-- dd/posix.sh@93 -- # [[ 7b6laf55yqn4elf00ij5lu517xzjc6sl6ejvqyh3e0gtkkj0whbqfsq4stgh2vse5yqkkwo4owlcja0j9shqpb40zjv4a34ou9vv6uonk2g4594r0q6vfcd1se9ot7x80lw0hgl2a4cjwjn76jtus62fqgmpfky72haj88kia54lf0umvc5nemptmzhobgpt7ob8yequu5b2x78zcnxqbn6dc43i4zc6ifs1ff9u15f1j7dl91bmutzxb9y0brepho18yarqj64x3p1yykrykmm43g22vd47hlfbfbkhmpu4t0w5mu2fb9xm1ihpyhblxz2uptt7pd4ltdk3b8uqa9kcm9zhxk51a34dx8a599nf4wkc0fg9ua7nibxm7jrb2jd96ouxnjnw0fiwdzzeya7gtg7x6dzefoam0cfntbq591re25739vffptdcz5hq87agnxu444oihbgfhedrrurfq3r8j1pvtzj6gnrf4rc85jjzaudhymrirkq7w7uz == \7\b\6\l\a\f\5\5\y\q\n\4\e\l\f\0\0\i\j\5\l\u\5\1\7\x\z\j\c\6\s\l\6\e\j\v\q\y\h\3\e\0\g\t\k\k\j\0\w\h\b\q\f\s\q\4\s\t\g\h\2\v\s\e\5\y\q\k\k\w\o\4\o\w\l\c\j\a\0\j\9\s\h\q\p\b\4\0\z\j\v\4\a\3\4\o\u\9\v\v\6\u\o\n\k\2\g\4\5\9\4\r\0\q\6\v\f\c\d\1\s\e\9\o\t\7\x\8\0\l\w\0\h\g\l\2\a\4\c\j\w\j\n\7\6\j\t\u\s\6\2\f\q\g\m\p\f\k\y\7\2\h\a\j\8\8\k\i\a\5\4\l\f\0\u\m\v\c\5\n\e\m\p\t\m\z\h\o\b\g\p\t\7\o\b\8\y\e\q\u\u\5\b\2\x\7\8\z\c\n\x\q\b\n\6\d\c\4\3\i\4\z\c\6\i\f\s\1\f\f\9\u\1\5\f\1\j\7\d\l\9\1\b\m\u\t\z\x\b\9\y\0\b\r\e\p\h\o\1\8\y\a\r\q\j\6\4\x\3\p\1\y\y\k\r\y\k\m\m\4\3\g\2\2\v\d\4\7\h\l\f\b\f\b\k\h\m\p\u\4\t\0\w\5\m\u\2\f\b\9\x\m\1\i\h\p\y\h\b\l\x\z\2\u\p\t\t\7\p\d\4\l\t\d\k\3\b\8\u\q\a\9\k\c\m\9\z\h\x\k\5\1\a\3\4\d\x\8\a\5\9\9\n\f\4\w\k\c\0\f\g\9\u\a\7\n\i\b\x\m\7\j\r\b\2\j\d\9\6\o\u\x\n\j\n\w\0\f\i\w\d\z\z\e\y\a\7\g\t\g\7\x\6\d\z\e\f\o\a\m\0\c\f\n\t\b\q\5\9\1\r\e\2\5\7\3\9\v\f\f\p\t\d\c\z\5\h\q\8\7\a\g\n\x\u\4\4\4\o\i\h\b\g\f\h\e\d\r\r\u\r\f\q\3\r\8\j\1\p\v\t\z\j\6\g\n\r\f\4\r\c\8\5\j\j\z\a\u\d\h\y\m\r\i\r\k\q\7\w\7\u\z ]]
00:25:44.448   17:09:37	-- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}"
00:25:44.448   17:09:37	-- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync
00:25:44.448  [2024-11-19 17:09:37.260451] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:25:44.448  [2024-11-19 17:09:37.260904] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144473 ]
00:25:44.705  [2024-11-19 17:09:37.404118] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:44.705  [2024-11-19 17:09:37.451410] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:25:44.705  
[2024-11-19T17:09:37.827Z] Copying: 512/512 [B] (average 250 kBps)
00:25:44.963  
00:25:44.963   17:09:37	-- dd/posix.sh@93 -- # [[ 7b6laf55yqn4elf00ij5lu517xzjc6sl6ejvqyh3e0gtkkj0whbqfsq4stgh2vse5yqkkwo4owlcja0j9shqpb40zjv4a34ou9vv6uonk2g4594r0q6vfcd1se9ot7x80lw0hgl2a4cjwjn76jtus62fqgmpfky72haj88kia54lf0umvc5nemptmzhobgpt7ob8yequu5b2x78zcnxqbn6dc43i4zc6ifs1ff9u15f1j7dl91bmutzxb9y0brepho18yarqj64x3p1yykrykmm43g22vd47hlfbfbkhmpu4t0w5mu2fb9xm1ihpyhblxz2uptt7pd4ltdk3b8uqa9kcm9zhxk51a34dx8a599nf4wkc0fg9ua7nibxm7jrb2jd96ouxnjnw0fiwdzzeya7gtg7x6dzefoam0cfntbq591re25739vffptdcz5hq87agnxu444oihbgfhedrrurfq3r8j1pvtzj6gnrf4rc85jjzaudhymrirkq7w7uz == \7\b\6\l\a\f\5\5\y\q\n\4\e\l\f\0\0\i\j\5\l\u\5\1\7\x\z\j\c\6\s\l\6\e\j\v\q\y\h\3\e\0\g\t\k\k\j\0\w\h\b\q\f\s\q\4\s\t\g\h\2\v\s\e\5\y\q\k\k\w\o\4\o\w\l\c\j\a\0\j\9\s\h\q\p\b\4\0\z\j\v\4\a\3\4\o\u\9\v\v\6\u\o\n\k\2\g\4\5\9\4\r\0\q\6\v\f\c\d\1\s\e\9\o\t\7\x\8\0\l\w\0\h\g\l\2\a\4\c\j\w\j\n\7\6\j\t\u\s\6\2\f\q\g\m\p\f\k\y\7\2\h\a\j\8\8\k\i\a\5\4\l\f\0\u\m\v\c\5\n\e\m\p\t\m\z\h\o\b\g\p\t\7\o\b\8\y\e\q\u\u\5\b\2\x\7\8\z\c\n\x\q\b\n\6\d\c\4\3\i\4\z\c\6\i\f\s\1\f\f\9\u\1\5\f\1\j\7\d\l\9\1\b\m\u\t\z\x\b\9\y\0\b\r\e\p\h\o\1\8\y\a\r\q\j\6\4\x\3\p\1\y\y\k\r\y\k\m\m\4\3\g\2\2\v\d\4\7\h\l\f\b\f\b\k\h\m\p\u\4\t\0\w\5\m\u\2\f\b\9\x\m\1\i\h\p\y\h\b\l\x\z\2\u\p\t\t\7\p\d\4\l\t\d\k\3\b\8\u\q\a\9\k\c\m\9\z\h\x\k\5\1\a\3\4\d\x\8\a\5\9\9\n\f\4\w\k\c\0\f\g\9\u\a\7\n\i\b\x\m\7\j\r\b\2\j\d\9\6\o\u\x\n\j\n\w\0\f\i\w\d\z\z\e\y\a\7\g\t\g\7\x\6\d\z\e\f\o\a\m\0\c\f\n\t\b\q\5\9\1\r\e\2\5\7\3\9\v\f\f\p\t\d\c\z\5\h\q\8\7\a\g\n\x\u\4\4\4\o\i\h\b\g\f\h\e\d\r\r\u\r\f\q\3\r\8\j\1\p\v\t\z\j\6\g\n\r\f\4\r\c\8\5\j\j\z\a\u\d\h\y\m\r\i\r\k\q\7\w\7\u\z ]]
00:25:44.963  
00:25:44.963  real	0m4.998s
00:25:44.963  user	0m2.427s
00:25:44.963  sys	0m1.459s
00:25:44.963   17:09:37	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:25:44.963   17:09:37	-- common/autotest_common.sh@10 -- # set +x
00:25:44.963  ************************************
00:25:44.963  END TEST dd_flags_misc
00:25:44.963  ************************************
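The flag matrix that just ran is two nested loops over open(2)-style flags, with a fresh 512-byte random payload per read flag and a content check after every copy; roughly:

    flags_ro=(direct nonblock)                 # input-side flags
    flags_rw=("${flags_ro[@]}" sync dsync)     # output side also gets sync/dsync

    for flag_ro in "${flags_ro[@]}"; do
        gen_bytes 512                          # regenerate the random input payload
        for flag_rw in "${flags_rw[@]}"; do
            spdk_dd --if=dd.dump0 --iflag="$flag_ro" \
                    --of=dd.dump1 --oflag="$flag_rw"
            [[ $(< dd.dump1) == $(< dd.dump0) ]]   # bit-for-bit round trip
        done
    done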
00:25:45.221   17:09:37	-- dd/posix.sh@131 -- # tests_forced_aio
00:25:45.221   17:09:37	-- dd/posix.sh@110 -- # printf '* Second test run%s\n' ', using AIO'
00:25:45.221  * Second test run, using AIO
00:25:45.221   17:09:37	-- dd/posix.sh@113 -- # DD_APP+=("--aio")
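From here the same posix tests are replayed with spdk_dd's --aio flag. The mechanism is array prefixing: DD_APP holds the spdk_dd invocation every test uses, so a single append retargets all remaining runs (the expansion below is an assumption about how the suite invokes it, not literal suite code):

    DD_APP+=("--aio")                            # as seen in the xtrace above
    "${DD_APP[@]}" --if=dd.dump0 --of=dd.dump1   # now runs: spdk_dd --aio ...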
00:25:45.221   17:09:37	-- dd/posix.sh@114 -- # run_test dd_flag_append_forced_aio append
00:25:45.221   17:09:37	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:25:45.221   17:09:37	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:25:45.221   17:09:37	-- common/autotest_common.sh@10 -- # set +x
00:25:45.221  ************************************
00:25:45.221  START TEST dd_flag_append_forced_aio
00:25:45.221  ************************************
00:25:45.221   17:09:37	-- common/autotest_common.sh@1114 -- # append
00:25:45.221   17:09:37	-- dd/posix.sh@16 -- # local dump0
00:25:45.221   17:09:37	-- dd/posix.sh@17 -- # local dump1
00:25:45.221    17:09:37	-- dd/posix.sh@19 -- # gen_bytes 32
00:25:45.221    17:09:37	-- dd/common.sh@98 -- # xtrace_disable
00:25:45.221    17:09:37	-- common/autotest_common.sh@10 -- # set +x
00:25:45.221   17:09:37	-- dd/posix.sh@19 -- # dump0=52rls99ay0kotuoo9pb7kpm5c7bkxjxf
00:25:45.221    17:09:37	-- dd/posix.sh@20 -- # gen_bytes 32
00:25:45.221    17:09:37	-- dd/common.sh@98 -- # xtrace_disable
00:25:45.221    17:09:37	-- common/autotest_common.sh@10 -- # set +x
00:25:45.221   17:09:37	-- dd/posix.sh@20 -- # dump1=ogkv9vq6ui79e6jr049ix7d1zr30wzjx
00:25:45.221   17:09:37	-- dd/posix.sh@22 -- # printf %s 52rls99ay0kotuoo9pb7kpm5c7bkxjxf
00:25:45.221   17:09:37	-- dd/posix.sh@23 -- # printf %s ogkv9vq6ui79e6jr049ix7d1zr30wzjx
00:25:45.221   17:09:37	-- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append
00:25:45.221  [2024-11-19 17:09:37.947006] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:25:45.221  [2024-11-19 17:09:37.947431] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144511 ]
00:25:45.480  [2024-11-19 17:09:38.102980] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:45.480  [2024-11-19 17:09:38.147558] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:25:45.480  
[2024-11-19T17:09:38.601Z] Copying: 32/32 [B] (average 31 kBps)
00:25:45.737  
00:25:45.737   17:09:38	-- dd/posix.sh@27 -- # [[ ogkv9vq6ui79e6jr049ix7d1zr30wzjx52rls99ay0kotuoo9pb7kpm5c7bkxjxf == \o\g\k\v\9\v\q\6\u\i\7\9\e\6\j\r\0\4\9\i\x\7\d\1\z\r\3\0\w\z\j\x\5\2\r\l\s\9\9\a\y\0\k\o\t\u\o\o\9\p\b\7\k\p\m\5\c\7\b\k\x\j\x\f ]]
00:25:45.737  
00:25:45.737  real	0m0.622s
00:25:45.737  user	0m0.306s
00:25:45.737  sys	0m0.179s
00:25:45.737   17:09:38	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:25:45.737   17:09:38	-- common/autotest_common.sh@10 -- # set +x
00:25:45.737  ************************************
00:25:45.737  END TEST dd_flag_append_forced_aio
00:25:45.737  ************************************
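The append case above boils down to: pre-fill both files with 32 random bytes, copy with append semantics, and expect the output to be its old content with the input glued on the end. A sketch with paths shortened:

    dump0=$(gen_bytes 32)                # payload written to dd.dump0
    dump1=$(gen_bytes 32)                # pre-existing payload of dd.dump1
    printf %s "$dump0" > dd.dump0
    printf %s "$dump1" > dd.dump1

    # --oflag=append opens the output O_APPEND, so the write lands at EOF
    spdk_dd --aio --if=dd.dump0 --of=dd.dump1 --oflag=append
    [[ $(< dd.dump1) == "${dump1}${dump0}" ]]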
00:25:45.737   17:09:38	-- dd/posix.sh@115 -- # run_test dd_flag_directory_forced_aio directory
00:25:45.737   17:09:38	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:25:45.737   17:09:38	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:25:45.737   17:09:38	-- common/autotest_common.sh@10 -- # set +x
00:25:45.737  ************************************
00:25:45.737  START TEST dd_flag_directory_forced_aio
00:25:45.737  ************************************
00:25:45.737   17:09:38	-- common/autotest_common.sh@1114 -- # directory
00:25:45.737   17:09:38	-- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
00:25:45.737   17:09:38	-- common/autotest_common.sh@650 -- # local es=0
00:25:45.737   17:09:38	-- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
00:25:45.737   17:09:38	-- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:25:45.737   17:09:38	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:25:45.737    17:09:38	-- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:25:45.738   17:09:38	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:25:45.738    17:09:38	-- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:25:45.738   17:09:38	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:25:45.738   17:09:38	-- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:25:45.738   17:09:38	-- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]]
00:25:45.738   17:09:38	-- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
00:25:45.996  [2024-11-19 17:09:38.623800] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:25:45.996  [2024-11-19 17:09:38.624223] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144546 ]
00:25:45.996  [2024-11-19 17:09:38.779256] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:45.996  [2024-11-19 17:09:38.826852] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:25:46.254  [2024-11-19 17:09:38.890770] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory
00:25:46.254  [2024-11-19 17:09:38.891121] spdk_dd.c:1067:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory
00:25:46.254  [2024-11-19 17:09:38.891202] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:25:46.254  [2024-11-19 17:09:38.996387] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy
00:25:46.511   17:09:39	-- common/autotest_common.sh@653 -- # es=236
00:25:46.511   17:09:39	-- common/autotest_common.sh@661 -- # (( es > 128 ))
00:25:46.511   17:09:39	-- common/autotest_common.sh@662 -- # es=108
00:25:46.511   17:09:39	-- common/autotest_common.sh@663 -- # case "$es" in
00:25:46.511   17:09:39	-- common/autotest_common.sh@670 -- # es=1
00:25:46.511   17:09:39	-- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:25:46.511   17:09:39	-- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory
00:25:46.511   17:09:39	-- common/autotest_common.sh@650 -- # local es=0
00:25:46.511   17:09:39	-- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory
00:25:46.511   17:09:39	-- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:25:46.511   17:09:39	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:25:46.511    17:09:39	-- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:25:46.511   17:09:39	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:25:46.511    17:09:39	-- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:25:46.511   17:09:39	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:25:46.511   17:09:39	-- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:25:46.511   17:09:39	-- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]]
00:25:46.511   17:09:39	-- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory
00:25:46.511  [2024-11-19 17:09:39.183990] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:25:46.511  [2024-11-19 17:09:39.184330] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144560 ]
00:25:46.512  [2024-11-19 17:09:39.322824] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:46.770  [2024-11-19 17:09:39.369550] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:25:46.770  [2024-11-19 17:09:39.432220] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory
00:25:46.770  [2024-11-19 17:09:39.432487] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory
00:25:46.770  [2024-11-19 17:09:39.432554] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:25:46.770  [2024-11-19 17:09:39.538421] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy
00:25:47.028  ************************************
00:25:47.028  END TEST dd_flag_directory_forced_aio
00:25:47.028  ************************************
00:25:47.028   17:09:39	-- common/autotest_common.sh@653 -- # es=236
00:25:47.028   17:09:39	-- common/autotest_common.sh@661 -- # (( es > 128 ))
00:25:47.028   17:09:39	-- common/autotest_common.sh@662 -- # es=108
00:25:47.028   17:09:39	-- common/autotest_common.sh@663 -- # case "$es" in
00:25:47.028   17:09:39	-- common/autotest_common.sh@670 -- # es=1
00:25:47.028   17:09:39	-- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:25:47.028  
00:25:47.028  real	0m1.126s
00:25:47.028  user	0m0.523s
00:25:47.028  sys	0m0.397s
00:25:47.028   17:09:39	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:25:47.028   17:09:39	-- common/autotest_common.sh@10 -- # set +x
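Both directions of the directory test are negative checks: dd.dump0 is a regular file, so requesting directory semantics on either side of the copy must fail with ENOTDIR ("Not a directory"), which the NOT wrapper asserts:

    NOT spdk_dd --aio --if=dd.dump0 --iflag=directory --of=dd.dump0
    NOT spdk_dd --aio --if=dd.dump0 --of=dd.dump0 --oflag=directory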
00:25:47.028   17:09:39	-- dd/posix.sh@116 -- # run_test dd_flag_nofollow_forced_aio nofollow
00:25:47.028   17:09:39	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:25:47.028   17:09:39	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:25:47.028   17:09:39	-- common/autotest_common.sh@10 -- # set +x
00:25:47.028  ************************************
00:25:47.028  START TEST dd_flag_nofollow_forced_aio
00:25:47.028  ************************************
00:25:47.028   17:09:39	-- common/autotest_common.sh@1114 -- # nofollow
00:25:47.028   17:09:39	-- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link
00:25:47.028   17:09:39	-- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link
00:25:47.028   17:09:39	-- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link
00:25:47.028   17:09:39	-- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link
00:25:47.028   17:09:39	-- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
00:25:47.028   17:09:39	-- common/autotest_common.sh@650 -- # local es=0
00:25:47.028   17:09:39	-- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
00:25:47.028   17:09:39	-- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:25:47.028   17:09:39	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:25:47.028    17:09:39	-- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:25:47.028   17:09:39	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:25:47.028    17:09:39	-- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:25:47.028   17:09:39	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:25:47.028   17:09:39	-- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:25:47.028   17:09:39	-- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]]
00:25:47.028   17:09:39	-- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
00:25:47.028  [2024-11-19 17:09:39.828714] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:25:47.028  [2024-11-19 17:09:39.829647] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144595 ]
00:25:47.288  [2024-11-19 17:09:39.978501] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:47.288  [2024-11-19 17:09:40.043410] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:25:47.288  [2024-11-19 17:09:40.114545] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links
00:25:47.288  [2024-11-19 17:09:40.114803] spdk_dd.c:1067:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links
00:25:47.288  [2024-11-19 17:09:40.114957] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:25:47.546  [2024-11-19 17:09:40.221228] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy
00:25:47.546   17:09:40	-- common/autotest_common.sh@653 -- # es=216
00:25:47.546   17:09:40	-- common/autotest_common.sh@661 -- # (( es > 128 ))
00:25:47.546   17:09:40	-- common/autotest_common.sh@662 -- # es=88
00:25:47.546   17:09:40	-- common/autotest_common.sh@663 -- # case "$es" in
00:25:47.546   17:09:40	-- common/autotest_common.sh@670 -- # es=1
00:25:47.546   17:09:40	-- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:25:47.546   17:09:40	-- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow
00:25:47.546   17:09:40	-- common/autotest_common.sh@650 -- # local es=0
00:25:47.546   17:09:40	-- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow
00:25:47.546   17:09:40	-- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:25:47.546   17:09:40	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:25:47.546    17:09:40	-- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:25:47.546   17:09:40	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:25:47.546    17:09:40	-- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:25:47.546   17:09:40	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:25:47.546   17:09:40	-- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:25:47.546   17:09:40	-- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]]
00:25:47.546   17:09:40	-- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow
00:25:47.804  [2024-11-19 17:09:40.433137] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:25:47.804  [2024-11-19 17:09:40.433608] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144607 ]
00:25:47.804  [2024-11-19 17:09:40.586827] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:47.804  [2024-11-19 17:09:40.632186] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:25:48.062  [2024-11-19 17:09:40.694259] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links
00:25:48.062  [2024-11-19 17:09:40.694543] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links
00:25:48.062  [2024-11-19 17:09:40.694610] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:25:48.062  [2024-11-19 17:09:40.798494] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy
00:25:48.320   17:09:40	-- common/autotest_common.sh@653 -- # es=216
00:25:48.320   17:09:40	-- common/autotest_common.sh@661 -- # (( es > 128 ))
00:25:48.320   17:09:40	-- common/autotest_common.sh@662 -- # es=88
00:25:48.320   17:09:40	-- common/autotest_common.sh@663 -- # case "$es" in
00:25:48.320   17:09:40	-- common/autotest_common.sh@670 -- # es=1
00:25:48.320   17:09:40	-- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:25:48.320   17:09:40	-- dd/posix.sh@46 -- # gen_bytes 512
00:25:48.320   17:09:40	-- dd/common.sh@98 -- # xtrace_disable
00:25:48.320   17:09:40	-- common/autotest_common.sh@10 -- # set +x
00:25:48.320   17:09:40	-- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
00:25:48.321  [2024-11-19 17:09:41.004861] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:25:48.321  [2024-11-19 17:09:41.005328] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144619 ]
00:25:48.321  [2024-11-19 17:09:41.158217] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:48.579  [2024-11-19 17:09:41.201569] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:25:48.579  
[2024-11-19T17:09:41.701Z] Copying: 512/512 [B] (average 500 kBps)
00:25:48.837  
00:25:48.837   17:09:41	-- dd/posix.sh@49 -- # [[ vceigoqzu3fbg4pycgf8xu0ki3m1la729t43wezcdvvcjeznu7x23tmue8easci2c5wrkwmlqush51v3fg7ibzq0y5wly7hdljw98jqvbbbn1gzk5xj02dwtlilo3na3crl6vr5mdy7rk0sjsorbuu59k4nb4v8aqgnhqvidyqf6rg08z2zarrcsf7h2ec92oeqahy7kxdxmwb2fu663fowq2sv8ip9bgshp5eb7jhhojn4qjwkv8gyp8t1zisj0bmgy315vh5ffdpts15kz1ypcs5we22tew7m7hmkd7c35naxbf1v64jey140oxlvjmyu5o1ckgebzwwe1xpe209n4fwnnj0k9ozw17ehmq9rjh9kwd1yke00n4cd1z7vxs7psgppvl53ktrua26co76feus04vdlwx662hslpbns5aeutdieh5d309rueuex1pfi5cxlgfo9advwh1n0o9qjb9mqmfmgeelzrreclu8hzo7tbrrehvingwa1nncqw == \v\c\e\i\g\o\q\z\u\3\f\b\g\4\p\y\c\g\f\8\x\u\0\k\i\3\m\1\l\a\7\2\9\t\4\3\w\e\z\c\d\v\v\c\j\e\z\n\u\7\x\2\3\t\m\u\e\8\e\a\s\c\i\2\c\5\w\r\k\w\m\l\q\u\s\h\5\1\v\3\f\g\7\i\b\z\q\0\y\5\w\l\y\7\h\d\l\j\w\9\8\j\q\v\b\b\b\n\1\g\z\k\5\x\j\0\2\d\w\t\l\i\l\o\3\n\a\3\c\r\l\6\v\r\5\m\d\y\7\r\k\0\s\j\s\o\r\b\u\u\5\9\k\4\n\b\4\v\8\a\q\g\n\h\q\v\i\d\y\q\f\6\r\g\0\8\z\2\z\a\r\r\c\s\f\7\h\2\e\c\9\2\o\e\q\a\h\y\7\k\x\d\x\m\w\b\2\f\u\6\6\3\f\o\w\q\2\s\v\8\i\p\9\b\g\s\h\p\5\e\b\7\j\h\h\o\j\n\4\q\j\w\k\v\8\g\y\p\8\t\1\z\i\s\j\0\b\m\g\y\3\1\5\v\h\5\f\f\d\p\t\s\1\5\k\z\1\y\p\c\s\5\w\e\2\2\t\e\w\7\m\7\h\m\k\d\7\c\3\5\n\a\x\b\f\1\v\6\4\j\e\y\1\4\0\o\x\l\v\j\m\y\u\5\o\1\c\k\g\e\b\z\w\w\e\1\x\p\e\2\0\9\n\4\f\w\n\n\j\0\k\9\o\z\w\1\7\e\h\m\q\9\r\j\h\9\k\w\d\1\y\k\e\0\0\n\4\c\d\1\z\7\v\x\s\7\p\s\g\p\p\v\l\5\3\k\t\r\u\a\2\6\c\o\7\6\f\e\u\s\0\4\v\d\l\w\x\6\6\2\h\s\l\p\b\n\s\5\a\e\u\t\d\i\e\h\5\d\3\0\9\r\u\e\u\e\x\1\p\f\i\5\c\x\l\g\f\o\9\a\d\v\w\h\1\n\0\o\9\q\j\b\9\m\q\m\f\m\g\e\e\l\z\r\r\e\c\l\u\8\h\z\o\7\t\b\r\r\e\h\v\i\n\g\w\a\1\n\n\c\q\w ]]
00:25:48.837  
00:25:48.837  real	0m1.797s
00:25:48.837  user	0m0.866s
00:25:48.837  sys	0m0.594s
00:25:48.837   17:09:41	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:25:48.837   17:09:41	-- common/autotest_common.sh@10 -- # set +x
00:25:48.837  ************************************
00:25:48.837  END TEST dd_flag_nofollow_forced_aio
00:25:48.837  ************************************
00:25:48.837   17:09:41	-- dd/posix.sh@117 -- # run_test dd_flag_noatime_forced_aio noatime
00:25:48.837   17:09:41	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:25:48.837   17:09:41	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:25:48.837   17:09:41	-- common/autotest_common.sh@10 -- # set +x
00:25:48.837  ************************************
00:25:48.837  START TEST dd_flag_noatime_forced_aio
00:25:48.837  ************************************
00:25:48.837   17:09:41	-- common/autotest_common.sh@1114 -- # noatime
00:25:48.837   17:09:41	-- dd/posix.sh@53 -- # local atime_if
00:25:48.837   17:09:41	-- dd/posix.sh@54 -- # local atime_of
00:25:48.837   17:09:41	-- dd/posix.sh@58 -- # gen_bytes 512
00:25:48.837   17:09:41	-- dd/common.sh@98 -- # xtrace_disable
00:25:48.837   17:09:41	-- common/autotest_common.sh@10 -- # set +x
00:25:48.837    17:09:41	-- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
00:25:48.837   17:09:41	-- dd/posix.sh@60 -- # atime_if=1732036181
00:25:48.837    17:09:41	-- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
00:25:48.837   17:09:41	-- dd/posix.sh@61 -- # atime_of=1732036181
00:25:48.837   17:09:41	-- dd/posix.sh@66 -- # sleep 1
00:25:50.212   17:09:42	-- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
00:25:50.212  [2024-11-19 17:09:42.689302] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:25:50.212  [2024-11-19 17:09:42.689526] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144676 ]
00:25:50.212  [2024-11-19 17:09:42.851831] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:50.212  [2024-11-19 17:09:42.900350] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:25:50.212  
[2024-11-19T17:09:43.333Z] Copying: 512/512 [B] (average 500 kBps)
00:25:50.470  
00:25:50.470    17:09:43	-- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
00:25:50.470   17:09:43	-- dd/posix.sh@69 -- # (( atime_if == 1732036181 ))
00:25:50.470    17:09:43	-- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
00:25:50.470   17:09:43	-- dd/posix.sh@70 -- # (( atime_of == 1732036181 ))
00:25:50.470   17:09:43	-- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
00:25:50.470  [2024-11-19 17:09:43.318809] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:25:50.470  [2024-11-19 17:09:43.319003] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144683 ]
00:25:50.728  [2024-11-19 17:09:43.457016] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:50.728  [2024-11-19 17:09:43.502876] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:25:50.728  
[2024-11-19T17:09:43.851Z] Copying: 512/512 [B] (average 500 kBps)
00:25:50.987  
00:25:51.245    17:09:43	-- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
00:25:51.245   17:09:43	-- dd/posix.sh@73 -- # (( atime_if < 1732036183 ))
00:25:51.245  
00:25:51.245  real	0m2.256s
00:25:51.245  user	0m0.577s
00:25:51.245  sys	0m0.409s
00:25:51.245   17:09:43	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:25:51.245   17:09:43	-- common/autotest_common.sh@10 -- # set +x
00:25:51.245  ************************************
00:25:51.245  END TEST dd_flag_noatime_forced_aio
00:25:51.245  ************************************
00:25:51.245   17:09:43	-- dd/posix.sh@118 -- # run_test dd_flags_misc_forced_aio io
00:25:51.245   17:09:43	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:25:51.245   17:09:43	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:25:51.245   17:09:43	-- common/autotest_common.sh@10 -- # set +x
00:25:51.245  ************************************
00:25:51.245  START TEST dd_flags_misc_forced_aio
00:25:51.245  ************************************
00:25:51.245   17:09:43	-- common/autotest_common.sh@1114 -- # io
00:25:51.245   17:09:43	-- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw
00:25:51.245   17:09:43	-- dd/posix.sh@81 -- # flags_ro=(direct nonblock)
00:25:51.245   17:09:43	-- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync)
00:25:51.245   17:09:43	-- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}"
00:25:51.245   17:09:43	-- dd/posix.sh@86 -- # gen_bytes 512
00:25:51.245   17:09:43	-- dd/common.sh@98 -- # xtrace_disable
00:25:51.245   17:09:43	-- common/autotest_common.sh@10 -- # set +x
00:25:51.245   17:09:43	-- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}"
00:25:51.245   17:09:43	-- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct
00:25:51.245  [2024-11-19 17:09:43.984199] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:25:51.245  [2024-11-19 17:09:43.984366] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144719 ]
00:25:51.503  [2024-11-19 17:09:44.123167] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:51.503  [2024-11-19 17:09:44.166143] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:25:51.503  
[2024-11-19T17:09:44.626Z] Copying: 512/512 [B] (average 500 kBps)
00:25:51.762  
00:25:51.762   17:09:44	-- dd/posix.sh@93 -- # [[ rnsu3nf2ucn2a61x2tjgukvkg01k5mici643gfhjclj96rll2celcwkt0xypzr33x5krodce4cb2diix1o9r8dsut4dh51kocme2ye8mkktwwjvngv60wk435wqciuvezh0amd4xczk6qkg33yyfp5ckxkj56r9mf3spvu0g48uj7fltpzwwuhhec3r30egrxb6a19yo7ww33u7qa0b3bcdq8em57okvk46vcc3n5udqolelekhuq73hls16ibw48k34r4ei22f7xwt3o0rl3muibmsjb5t1hlpmsyczator63xfcxgkbkv4goabao8wwzlecd8sj6bejuz4e9sl00vwcl2hluu5bfphvp4ajkoahrgtdhf6i5nnztypbftlif0fbvqf8w0kq88s6dg64dnogru994tnob0881w1a7u4is1rd1ed2cml8yeh1f792e9ibnjfjuvwybsqzd8vw1ym637go1w47msp3ihj57vccb4h68vuu12e0811vk3p == \r\n\s\u\3\n\f\2\u\c\n\2\a\6\1\x\2\t\j\g\u\k\v\k\g\0\1\k\5\m\i\c\i\6\4\3\g\f\h\j\c\l\j\9\6\r\l\l\2\c\e\l\c\w\k\t\0\x\y\p\z\r\3\3\x\5\k\r\o\d\c\e\4\c\b\2\d\i\i\x\1\o\9\r\8\d\s\u\t\4\d\h\5\1\k\o\c\m\e\2\y\e\8\m\k\k\t\w\w\j\v\n\g\v\6\0\w\k\4\3\5\w\q\c\i\u\v\e\z\h\0\a\m\d\4\x\c\z\k\6\q\k\g\3\3\y\y\f\p\5\c\k\x\k\j\5\6\r\9\m\f\3\s\p\v\u\0\g\4\8\u\j\7\f\l\t\p\z\w\w\u\h\h\e\c\3\r\3\0\e\g\r\x\b\6\a\1\9\y\o\7\w\w\3\3\u\7\q\a\0\b\3\b\c\d\q\8\e\m\5\7\o\k\v\k\4\6\v\c\c\3\n\5\u\d\q\o\l\e\l\e\k\h\u\q\7\3\h\l\s\1\6\i\b\w\4\8\k\3\4\r\4\e\i\2\2\f\7\x\w\t\3\o\0\r\l\3\m\u\i\b\m\s\j\b\5\t\1\h\l\p\m\s\y\c\z\a\t\o\r\6\3\x\f\c\x\g\k\b\k\v\4\g\o\a\b\a\o\8\w\w\z\l\e\c\d\8\s\j\6\b\e\j\u\z\4\e\9\s\l\0\0\v\w\c\l\2\h\l\u\u\5\b\f\p\h\v\p\4\a\j\k\o\a\h\r\g\t\d\h\f\6\i\5\n\n\z\t\y\p\b\f\t\l\i\f\0\f\b\v\q\f\8\w\0\k\q\8\8\s\6\d\g\6\4\d\n\o\g\r\u\9\9\4\t\n\o\b\0\8\8\1\w\1\a\7\u\4\i\s\1\r\d\1\e\d\2\c\m\l\8\y\e\h\1\f\7\9\2\e\9\i\b\n\j\f\j\u\v\w\y\b\s\q\z\d\8\v\w\1\y\m\6\3\7\g\o\1\w\4\7\m\s\p\3\i\h\j\5\7\v\c\c\b\4\h\6\8\v\u\u\1\2\e\0\8\1\1\v\k\3\p ]]
00:25:51.762   17:09:44	-- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}"
00:25:51.762   17:09:44	-- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock
00:25:51.762  [2024-11-19 17:09:44.577397] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:25:51.762  [2024-11-19 17:09:44.577640] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144724 ]
00:25:52.021  [2024-11-19 17:09:44.731582] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:52.021  [2024-11-19 17:09:44.783727] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:25:52.021  
[2024-11-19T17:09:45.144Z] Copying: 512/512 [B] (average 500 kBps)
00:25:52.280  
00:25:52.539   17:09:45	-- dd/posix.sh@93 -- # [[ rnsu3nf2ucn2a61x2tjgukvkg01k5mici643gfhjclj96rll2celcwkt0xypzr33x5krodce4cb2diix1o9r8dsut4dh51kocme2ye8mkktwwjvngv60wk435wqciuvezh0amd4xczk6qkg33yyfp5ckxkj56r9mf3spvu0g48uj7fltpzwwuhhec3r30egrxb6a19yo7ww33u7qa0b3bcdq8em57okvk46vcc3n5udqolelekhuq73hls16ibw48k34r4ei22f7xwt3o0rl3muibmsjb5t1hlpmsyczator63xfcxgkbkv4goabao8wwzlecd8sj6bejuz4e9sl00vwcl2hluu5bfphvp4ajkoahrgtdhf6i5nnztypbftlif0fbvqf8w0kq88s6dg64dnogru994tnob0881w1a7u4is1rd1ed2cml8yeh1f792e9ibnjfjuvwybsqzd8vw1ym637go1w47msp3ihj57vccb4h68vuu12e0811vk3p == \r\n\s\u\3\n\f\2\u\c\n\2\a\6\1\x\2\t\j\g\u\k\v\k\g\0\1\k\5\m\i\c\i\6\4\3\g\f\h\j\c\l\j\9\6\r\l\l\2\c\e\l\c\w\k\t\0\x\y\p\z\r\3\3\x\5\k\r\o\d\c\e\4\c\b\2\d\i\i\x\1\o\9\r\8\d\s\u\t\4\d\h\5\1\k\o\c\m\e\2\y\e\8\m\k\k\t\w\w\j\v\n\g\v\6\0\w\k\4\3\5\w\q\c\i\u\v\e\z\h\0\a\m\d\4\x\c\z\k\6\q\k\g\3\3\y\y\f\p\5\c\k\x\k\j\5\6\r\9\m\f\3\s\p\v\u\0\g\4\8\u\j\7\f\l\t\p\z\w\w\u\h\h\e\c\3\r\3\0\e\g\r\x\b\6\a\1\9\y\o\7\w\w\3\3\u\7\q\a\0\b\3\b\c\d\q\8\e\m\5\7\o\k\v\k\4\6\v\c\c\3\n\5\u\d\q\o\l\e\l\e\k\h\u\q\7\3\h\l\s\1\6\i\b\w\4\8\k\3\4\r\4\e\i\2\2\f\7\x\w\t\3\o\0\r\l\3\m\u\i\b\m\s\j\b\5\t\1\h\l\p\m\s\y\c\z\a\t\o\r\6\3\x\f\c\x\g\k\b\k\v\4\g\o\a\b\a\o\8\w\w\z\l\e\c\d\8\s\j\6\b\e\j\u\z\4\e\9\s\l\0\0\v\w\c\l\2\h\l\u\u\5\b\f\p\h\v\p\4\a\j\k\o\a\h\r\g\t\d\h\f\6\i\5\n\n\z\t\y\p\b\f\t\l\i\f\0\f\b\v\q\f\8\w\0\k\q\8\8\s\6\d\g\6\4\d\n\o\g\r\u\9\9\4\t\n\o\b\0\8\8\1\w\1\a\7\u\4\i\s\1\r\d\1\e\d\2\c\m\l\8\y\e\h\1\f\7\9\2\e\9\i\b\n\j\f\j\u\v\w\y\b\s\q\z\d\8\v\w\1\y\m\6\3\7\g\o\1\w\4\7\m\s\p\3\i\h\j\5\7\v\c\c\b\4\h\6\8\v\u\u\1\2\e\0\8\1\1\v\k\3\p ]]
00:25:52.539   17:09:45	-- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}"
00:25:52.539   17:09:45	-- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync
00:25:52.539  [2024-11-19 17:09:45.193755] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:25:52.539  [2024-11-19 17:09:45.194105] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144745 ]
00:25:52.539  [2024-11-19 17:09:45.347964] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:52.798  [2024-11-19 17:09:45.393244] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:25:52.798  
[2024-11-19T17:09:45.920Z] Copying: 512/512 [B] (average 166 kBps)
00:25:53.056  
00:25:53.056   17:09:45	-- dd/posix.sh@93 -- # [[ rnsu3nf2ucn2a61x2tjgukvkg01k5mici643gfhjclj96rll2celcwkt0xypzr33x5krodce4cb2diix1o9r8dsut4dh51kocme2ye8mkktwwjvngv60wk435wqciuvezh0amd4xczk6qkg33yyfp5ckxkj56r9mf3spvu0g48uj7fltpzwwuhhec3r30egrxb6a19yo7ww33u7qa0b3bcdq8em57okvk46vcc3n5udqolelekhuq73hls16ibw48k34r4ei22f7xwt3o0rl3muibmsjb5t1hlpmsyczator63xfcxgkbkv4goabao8wwzlecd8sj6bejuz4e9sl00vwcl2hluu5bfphvp4ajkoahrgtdhf6i5nnztypbftlif0fbvqf8w0kq88s6dg64dnogru994tnob0881w1a7u4is1rd1ed2cml8yeh1f792e9ibnjfjuvwybsqzd8vw1ym637go1w47msp3ihj57vccb4h68vuu12e0811vk3p == \r\n\s\u\3\n\f\2\u\c\n\2\a\6\1\x\2\t\j\g\u\k\v\k\g\0\1\k\5\m\i\c\i\6\4\3\g\f\h\j\c\l\j\9\6\r\l\l\2\c\e\l\c\w\k\t\0\x\y\p\z\r\3\3\x\5\k\r\o\d\c\e\4\c\b\2\d\i\i\x\1\o\9\r\8\d\s\u\t\4\d\h\5\1\k\o\c\m\e\2\y\e\8\m\k\k\t\w\w\j\v\n\g\v\6\0\w\k\4\3\5\w\q\c\i\u\v\e\z\h\0\a\m\d\4\x\c\z\k\6\q\k\g\3\3\y\y\f\p\5\c\k\x\k\j\5\6\r\9\m\f\3\s\p\v\u\0\g\4\8\u\j\7\f\l\t\p\z\w\w\u\h\h\e\c\3\r\3\0\e\g\r\x\b\6\a\1\9\y\o\7\w\w\3\3\u\7\q\a\0\b\3\b\c\d\q\8\e\m\5\7\o\k\v\k\4\6\v\c\c\3\n\5\u\d\q\o\l\e\l\e\k\h\u\q\7\3\h\l\s\1\6\i\b\w\4\8\k\3\4\r\4\e\i\2\2\f\7\x\w\t\3\o\0\r\l\3\m\u\i\b\m\s\j\b\5\t\1\h\l\p\m\s\y\c\z\a\t\o\r\6\3\x\f\c\x\g\k\b\k\v\4\g\o\a\b\a\o\8\w\w\z\l\e\c\d\8\s\j\6\b\e\j\u\z\4\e\9\s\l\0\0\v\w\c\l\2\h\l\u\u\5\b\f\p\h\v\p\4\a\j\k\o\a\h\r\g\t\d\h\f\6\i\5\n\n\z\t\y\p\b\f\t\l\i\f\0\f\b\v\q\f\8\w\0\k\q\8\8\s\6\d\g\6\4\d\n\o\g\r\u\9\9\4\t\n\o\b\0\8\8\1\w\1\a\7\u\4\i\s\1\r\d\1\e\d\2\c\m\l\8\y\e\h\1\f\7\9\2\e\9\i\b\n\j\f\j\u\v\w\y\b\s\q\z\d\8\v\w\1\y\m\6\3\7\g\o\1\w\4\7\m\s\p\3\i\h\j\5\7\v\c\c\b\4\h\6\8\v\u\u\1\2\e\0\8\1\1\v\k\3\p ]]
00:25:53.056   17:09:45	-- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}"
00:25:53.056   17:09:45	-- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync
00:25:53.056  [2024-11-19 17:09:45.805367] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:25:53.056  [2024-11-19 17:09:45.806296] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144757 ]
00:25:53.314  [2024-11-19 17:09:45.957676] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:53.314  [2024-11-19 17:09:46.003046] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:25:53.314  
[2024-11-19T17:09:46.436Z] Copying: 512/512 [B] (average 100 kBps)
00:25:53.573  
00:25:53.573   17:09:46	-- dd/posix.sh@93 -- # [[ rnsu3nf2ucn2a61x2tjgukvkg01k5mici643gfhjclj96rll2celcwkt0xypzr33x5krodce4cb2diix1o9r8dsut4dh51kocme2ye8mkktwwjvngv60wk435wqciuvezh0amd4xczk6qkg33yyfp5ckxkj56r9mf3spvu0g48uj7fltpzwwuhhec3r30egrxb6a19yo7ww33u7qa0b3bcdq8em57okvk46vcc3n5udqolelekhuq73hls16ibw48k34r4ei22f7xwt3o0rl3muibmsjb5t1hlpmsyczator63xfcxgkbkv4goabao8wwzlecd8sj6bejuz4e9sl00vwcl2hluu5bfphvp4ajkoahrgtdhf6i5nnztypbftlif0fbvqf8w0kq88s6dg64dnogru994tnob0881w1a7u4is1rd1ed2cml8yeh1f792e9ibnjfjuvwybsqzd8vw1ym637go1w47msp3ihj57vccb4h68vuu12e0811vk3p == \r\n\s\u\3\n\f\2\u\c\n\2\a\6\1\x\2\t\j\g\u\k\v\k\g\0\1\k\5\m\i\c\i\6\4\3\g\f\h\j\c\l\j\9\6\r\l\l\2\c\e\l\c\w\k\t\0\x\y\p\z\r\3\3\x\5\k\r\o\d\c\e\4\c\b\2\d\i\i\x\1\o\9\r\8\d\s\u\t\4\d\h\5\1\k\o\c\m\e\2\y\e\8\m\k\k\t\w\w\j\v\n\g\v\6\0\w\k\4\3\5\w\q\c\i\u\v\e\z\h\0\a\m\d\4\x\c\z\k\6\q\k\g\3\3\y\y\f\p\5\c\k\x\k\j\5\6\r\9\m\f\3\s\p\v\u\0\g\4\8\u\j\7\f\l\t\p\z\w\w\u\h\h\e\c\3\r\3\0\e\g\r\x\b\6\a\1\9\y\o\7\w\w\3\3\u\7\q\a\0\b\3\b\c\d\q\8\e\m\5\7\o\k\v\k\4\6\v\c\c\3\n\5\u\d\q\o\l\e\l\e\k\h\u\q\7\3\h\l\s\1\6\i\b\w\4\8\k\3\4\r\4\e\i\2\2\f\7\x\w\t\3\o\0\r\l\3\m\u\i\b\m\s\j\b\5\t\1\h\l\p\m\s\y\c\z\a\t\o\r\6\3\x\f\c\x\g\k\b\k\v\4\g\o\a\b\a\o\8\w\w\z\l\e\c\d\8\s\j\6\b\e\j\u\z\4\e\9\s\l\0\0\v\w\c\l\2\h\l\u\u\5\b\f\p\h\v\p\4\a\j\k\o\a\h\r\g\t\d\h\f\6\i\5\n\n\z\t\y\p\b\f\t\l\i\f\0\f\b\v\q\f\8\w\0\k\q\8\8\s\6\d\g\6\4\d\n\o\g\r\u\9\9\4\t\n\o\b\0\8\8\1\w\1\a\7\u\4\i\s\1\r\d\1\e\d\2\c\m\l\8\y\e\h\1\f\7\9\2\e\9\i\b\n\j\f\j\u\v\w\y\b\s\q\z\d\8\v\w\1\y\m\6\3\7\g\o\1\w\4\7\m\s\p\3\i\h\j\5\7\v\c\c\b\4\h\6\8\v\u\u\1\2\e\0\8\1\1\v\k\3\p ]]
00:25:53.573   17:09:46	-- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}"
00:25:53.573   17:09:46	-- dd/posix.sh@86 -- # gen_bytes 512
00:25:53.573   17:09:46	-- dd/common.sh@98 -- # xtrace_disable
00:25:53.573   17:09:46	-- common/autotest_common.sh@10 -- # set +x
00:25:53.573   17:09:46	-- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}"
00:25:53.573   17:09:46	-- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct
00:25:53.573  [2024-11-19 17:09:46.414786] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:25:53.573  [2024-11-19 17:09:46.414974] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144767 ]
00:25:53.831  [2024-11-19 17:09:46.554315] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:53.831  [2024-11-19 17:09:46.597046] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:25:53.831  
[2024-11-19T17:09:46.953Z] Copying: 512/512 [B] (average 500 kBps)
00:25:54.089  
00:25:54.347   17:09:46	-- dd/posix.sh@93 -- # [[ qpazy31zrmjynygg83njmnduf7krjh8sv0cjvjrxn535cl2pa8u26yb287n4wb15zh4hwbrqnqs69pwmcklbnl0ci4h07h2vz1dxur6dm29mj55y47a5w5dd371l5sestcv4tddixaw2oz5yurel22yibynhz6bgt6khtymrnwl5bj15b4d4cekdbcki4a4368rk1d4aqc000e9a0ms1409fkodf70e54scsox7jolsrarvl0my903ilwomjpi1ouly7we4yuk95qdygpzkvi77guepw793pmvpyd46adnedwnyyi70imbbd7aywlkst9a49722okypkgqfk588xm06lhxr8g94mlag86qevnh1slytd2c8lfov1h4796853hg6jb22xu9sk6j8pz4byjfxvgrdvw50wiwm7hdl1vjgov020xf1r5sqt72j4865e06flw1flosm5i92k1tuqkoz8kstjtlposscsj4it85pwmarc5wk86qku2aiymeo4 == \q\p\a\z\y\3\1\z\r\m\j\y\n\y\g\g\8\3\n\j\m\n\d\u\f\7\k\r\j\h\8\s\v\0\c\j\v\j\r\x\n\5\3\5\c\l\2\p\a\8\u\2\6\y\b\2\8\7\n\4\w\b\1\5\z\h\4\h\w\b\r\q\n\q\s\6\9\p\w\m\c\k\l\b\n\l\0\c\i\4\h\0\7\h\2\v\z\1\d\x\u\r\6\d\m\2\9\m\j\5\5\y\4\7\a\5\w\5\d\d\3\7\1\l\5\s\e\s\t\c\v\4\t\d\d\i\x\a\w\2\o\z\5\y\u\r\e\l\2\2\y\i\b\y\n\h\z\6\b\g\t\6\k\h\t\y\m\r\n\w\l\5\b\j\1\5\b\4\d\4\c\e\k\d\b\c\k\i\4\a\4\3\6\8\r\k\1\d\4\a\q\c\0\0\0\e\9\a\0\m\s\1\4\0\9\f\k\o\d\f\7\0\e\5\4\s\c\s\o\x\7\j\o\l\s\r\a\r\v\l\0\m\y\9\0\3\i\l\w\o\m\j\p\i\1\o\u\l\y\7\w\e\4\y\u\k\9\5\q\d\y\g\p\z\k\v\i\7\7\g\u\e\p\w\7\9\3\p\m\v\p\y\d\4\6\a\d\n\e\d\w\n\y\y\i\7\0\i\m\b\b\d\7\a\y\w\l\k\s\t\9\a\4\9\7\2\2\o\k\y\p\k\g\q\f\k\5\8\8\x\m\0\6\l\h\x\r\8\g\9\4\m\l\a\g\8\6\q\e\v\n\h\1\s\l\y\t\d\2\c\8\l\f\o\v\1\h\4\7\9\6\8\5\3\h\g\6\j\b\2\2\x\u\9\s\k\6\j\8\p\z\4\b\y\j\f\x\v\g\r\d\v\w\5\0\w\i\w\m\7\h\d\l\1\v\j\g\o\v\0\2\0\x\f\1\r\5\s\q\t\7\2\j\4\8\6\5\e\0\6\f\l\w\1\f\l\o\s\m\5\i\9\2\k\1\t\u\q\k\o\z\8\k\s\t\j\t\l\p\o\s\s\c\s\j\4\i\t\8\5\p\w\m\a\r\c\5\w\k\8\6\q\k\u\2\a\i\y\m\e\o\4 ]]
00:25:54.348   17:09:46	-- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}"
00:25:54.348   17:09:46	-- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock
00:25:54.348  [2024-11-19 17:09:47.008717] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:25:54.348  [2024-11-19 17:09:47.009532] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144779 ]
00:25:54.348  [2024-11-19 17:09:47.163019] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:54.605  [2024-11-19 17:09:47.206826] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:25:54.605  
[2024-11-19T17:09:47.727Z] Copying: 512/512 [B] (average 500 kBps)
00:25:54.863  
00:25:54.863   17:09:47	-- dd/posix.sh@93 -- # [[ qpazy31zrmjynygg83njmnduf7krjh8sv0cjvjrxn535cl2pa8u26yb287n4wb15zh4hwbrqnqs69pwmcklbnl0ci4h07h2vz1dxur6dm29mj55y47a5w5dd371l5sestcv4tddixaw2oz5yurel22yibynhz6bgt6khtymrnwl5bj15b4d4cekdbcki4a4368rk1d4aqc000e9a0ms1409fkodf70e54scsox7jolsrarvl0my903ilwomjpi1ouly7we4yuk95qdygpzkvi77guepw793pmvpyd46adnedwnyyi70imbbd7aywlkst9a49722okypkgqfk588xm06lhxr8g94mlag86qevnh1slytd2c8lfov1h4796853hg6jb22xu9sk6j8pz4byjfxvgrdvw50wiwm7hdl1vjgov020xf1r5sqt72j4865e06flw1flosm5i92k1tuqkoz8kstjtlposscsj4it85pwmarc5wk86qku2aiymeo4 == \q\p\a\z\y\3\1\z\r\m\j\y\n\y\g\g\8\3\n\j\m\n\d\u\f\7\k\r\j\h\8\s\v\0\c\j\v\j\r\x\n\5\3\5\c\l\2\p\a\8\u\2\6\y\b\2\8\7\n\4\w\b\1\5\z\h\4\h\w\b\r\q\n\q\s\6\9\p\w\m\c\k\l\b\n\l\0\c\i\4\h\0\7\h\2\v\z\1\d\x\u\r\6\d\m\2\9\m\j\5\5\y\4\7\a\5\w\5\d\d\3\7\1\l\5\s\e\s\t\c\v\4\t\d\d\i\x\a\w\2\o\z\5\y\u\r\e\l\2\2\y\i\b\y\n\h\z\6\b\g\t\6\k\h\t\y\m\r\n\w\l\5\b\j\1\5\b\4\d\4\c\e\k\d\b\c\k\i\4\a\4\3\6\8\r\k\1\d\4\a\q\c\0\0\0\e\9\a\0\m\s\1\4\0\9\f\k\o\d\f\7\0\e\5\4\s\c\s\o\x\7\j\o\l\s\r\a\r\v\l\0\m\y\9\0\3\i\l\w\o\m\j\p\i\1\o\u\l\y\7\w\e\4\y\u\k\9\5\q\d\y\g\p\z\k\v\i\7\7\g\u\e\p\w\7\9\3\p\m\v\p\y\d\4\6\a\d\n\e\d\w\n\y\y\i\7\0\i\m\b\b\d\7\a\y\w\l\k\s\t\9\a\4\9\7\2\2\o\k\y\p\k\g\q\f\k\5\8\8\x\m\0\6\l\h\x\r\8\g\9\4\m\l\a\g\8\6\q\e\v\n\h\1\s\l\y\t\d\2\c\8\l\f\o\v\1\h\4\7\9\6\8\5\3\h\g\6\j\b\2\2\x\u\9\s\k\6\j\8\p\z\4\b\y\j\f\x\v\g\r\d\v\w\5\0\w\i\w\m\7\h\d\l\1\v\j\g\o\v\0\2\0\x\f\1\r\5\s\q\t\7\2\j\4\8\6\5\e\0\6\f\l\w\1\f\l\o\s\m\5\i\9\2\k\1\t\u\q\k\o\z\8\k\s\t\j\t\l\p\o\s\s\c\s\j\4\i\t\8\5\p\w\m\a\r\c\5\w\k\8\6\q\k\u\2\a\i\y\m\e\o\4 ]]
00:25:54.863   17:09:47	-- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}"
00:25:54.863   17:09:47	-- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync
00:25:54.863  [2024-11-19 17:09:47.618024] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:25:54.863  [2024-11-19 17:09:47.618645] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144789 ]
00:25:55.122  [2024-11-19 17:09:47.769865] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:55.122  [2024-11-19 17:09:47.817240] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:25:55.122  
[2024-11-19T17:09:48.244Z] Copying: 512/512 [B] (average 166 kBps)
00:25:55.380  
00:25:55.380   17:09:48	-- dd/posix.sh@93 -- # [[ qpazy31zrmjynygg83njmnduf7krjh8sv0cjvjrxn535cl2pa8u26yb287n4wb15zh4hwbrqnqs69pwmcklbnl0ci4h07h2vz1dxur6dm29mj55y47a5w5dd371l5sestcv4tddixaw2oz5yurel22yibynhz6bgt6khtymrnwl5bj15b4d4cekdbcki4a4368rk1d4aqc000e9a0ms1409fkodf70e54scsox7jolsrarvl0my903ilwomjpi1ouly7we4yuk95qdygpzkvi77guepw793pmvpyd46adnedwnyyi70imbbd7aywlkst9a49722okypkgqfk588xm06lhxr8g94mlag86qevnh1slytd2c8lfov1h4796853hg6jb22xu9sk6j8pz4byjfxvgrdvw50wiwm7hdl1vjgov020xf1r5sqt72j4865e06flw1flosm5i92k1tuqkoz8kstjtlposscsj4it85pwmarc5wk86qku2aiymeo4 == \q\p\a\z\y\3\1\z\r\m\j\y\n\y\g\g\8\3\n\j\m\n\d\u\f\7\k\r\j\h\8\s\v\0\c\j\v\j\r\x\n\5\3\5\c\l\2\p\a\8\u\2\6\y\b\2\8\7\n\4\w\b\1\5\z\h\4\h\w\b\r\q\n\q\s\6\9\p\w\m\c\k\l\b\n\l\0\c\i\4\h\0\7\h\2\v\z\1\d\x\u\r\6\d\m\2\9\m\j\5\5\y\4\7\a\5\w\5\d\d\3\7\1\l\5\s\e\s\t\c\v\4\t\d\d\i\x\a\w\2\o\z\5\y\u\r\e\l\2\2\y\i\b\y\n\h\z\6\b\g\t\6\k\h\t\y\m\r\n\w\l\5\b\j\1\5\b\4\d\4\c\e\k\d\b\c\k\i\4\a\4\3\6\8\r\k\1\d\4\a\q\c\0\0\0\e\9\a\0\m\s\1\4\0\9\f\k\o\d\f\7\0\e\5\4\s\c\s\o\x\7\j\o\l\s\r\a\r\v\l\0\m\y\9\0\3\i\l\w\o\m\j\p\i\1\o\u\l\y\7\w\e\4\y\u\k\9\5\q\d\y\g\p\z\k\v\i\7\7\g\u\e\p\w\7\9\3\p\m\v\p\y\d\4\6\a\d\n\e\d\w\n\y\y\i\7\0\i\m\b\b\d\7\a\y\w\l\k\s\t\9\a\4\9\7\2\2\o\k\y\p\k\g\q\f\k\5\8\8\x\m\0\6\l\h\x\r\8\g\9\4\m\l\a\g\8\6\q\e\v\n\h\1\s\l\y\t\d\2\c\8\l\f\o\v\1\h\4\7\9\6\8\5\3\h\g\6\j\b\2\2\x\u\9\s\k\6\j\8\p\z\4\b\y\j\f\x\v\g\r\d\v\w\5\0\w\i\w\m\7\h\d\l\1\v\j\g\o\v\0\2\0\x\f\1\r\5\s\q\t\7\2\j\4\8\6\5\e\0\6\f\l\w\1\f\l\o\s\m\5\i\9\2\k\1\t\u\q\k\o\z\8\k\s\t\j\t\l\p\o\s\s\c\s\j\4\i\t\8\5\p\w\m\a\r\c\5\w\k\8\6\q\k\u\2\a\i\y\m\e\o\4 ]]
00:25:55.380   17:09:48	-- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}"
00:25:55.380   17:09:48	-- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync
00:25:55.380  [2024-11-19 17:09:48.232117] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:25:55.380  [2024-11-19 17:09:48.232391] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144801 ]
00:25:55.638  [2024-11-19 17:09:48.387322] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:55.638  [2024-11-19 17:09:48.430102] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:25:55.902  
[2024-11-19T17:09:49.033Z] Copying: 512/512 [B] (average 250 kBps)
00:25:56.169  
00:25:56.169   17:09:48	-- dd/posix.sh@93 -- # [[ qpazy31zrmjynygg83njmnduf7krjh8sv0cjvjrxn535cl2pa8u26yb287n4wb15zh4hwbrqnqs69pwmcklbnl0ci4h07h2vz1dxur6dm29mj55y47a5w5dd371l5sestcv4tddixaw2oz5yurel22yibynhz6bgt6khtymrnwl5bj15b4d4cekdbcki4a4368rk1d4aqc000e9a0ms1409fkodf70e54scsox7jolsrarvl0my903ilwomjpi1ouly7we4yuk95qdygpzkvi77guepw793pmvpyd46adnedwnyyi70imbbd7aywlkst9a49722okypkgqfk588xm06lhxr8g94mlag86qevnh1slytd2c8lfov1h4796853hg6jb22xu9sk6j8pz4byjfxvgrdvw50wiwm7hdl1vjgov020xf1r5sqt72j4865e06flw1flosm5i92k1tuqkoz8kstjtlposscsj4it85pwmarc5wk86qku2aiymeo4 == \q\p\a\z\y\3\1\z\r\m\j\y\n\y\g\g\8\3\n\j\m\n\d\u\f\7\k\r\j\h\8\s\v\0\c\j\v\j\r\x\n\5\3\5\c\l\2\p\a\8\u\2\6\y\b\2\8\7\n\4\w\b\1\5\z\h\4\h\w\b\r\q\n\q\s\6\9\p\w\m\c\k\l\b\n\l\0\c\i\4\h\0\7\h\2\v\z\1\d\x\u\r\6\d\m\2\9\m\j\5\5\y\4\7\a\5\w\5\d\d\3\7\1\l\5\s\e\s\t\c\v\4\t\d\d\i\x\a\w\2\o\z\5\y\u\r\e\l\2\2\y\i\b\y\n\h\z\6\b\g\t\6\k\h\t\y\m\r\n\w\l\5\b\j\1\5\b\4\d\4\c\e\k\d\b\c\k\i\4\a\4\3\6\8\r\k\1\d\4\a\q\c\0\0\0\e\9\a\0\m\s\1\4\0\9\f\k\o\d\f\7\0\e\5\4\s\c\s\o\x\7\j\o\l\s\r\a\r\v\l\0\m\y\9\0\3\i\l\w\o\m\j\p\i\1\o\u\l\y\7\w\e\4\y\u\k\9\5\q\d\y\g\p\z\k\v\i\7\7\g\u\e\p\w\7\9\3\p\m\v\p\y\d\4\6\a\d\n\e\d\w\n\y\y\i\7\0\i\m\b\b\d\7\a\y\w\l\k\s\t\9\a\4\9\7\2\2\o\k\y\p\k\g\q\f\k\5\8\8\x\m\0\6\l\h\x\r\8\g\9\4\m\l\a\g\8\6\q\e\v\n\h\1\s\l\y\t\d\2\c\8\l\f\o\v\1\h\4\7\9\6\8\5\3\h\g\6\j\b\2\2\x\u\9\s\k\6\j\8\p\z\4\b\y\j\f\x\v\g\r\d\v\w\5\0\w\i\w\m\7\h\d\l\1\v\j\g\o\v\0\2\0\x\f\1\r\5\s\q\t\7\2\j\4\8\6\5\e\0\6\f\l\w\1\f\l\o\s\m\5\i\9\2\k\1\t\u\q\k\o\z\8\k\s\t\j\t\l\p\o\s\s\c\s\j\4\i\t\8\5\p\w\m\a\r\c\5\w\k\8\6\q\k\u\2\a\i\y\m\e\o\4 ]]
00:25:56.169  
00:25:56.169  real	0m4.858s
00:25:56.169  user	0m2.371s
00:25:56.169  sys	0m1.400s
00:25:56.169   17:09:48	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:25:56.169   17:09:48	-- common/autotest_common.sh@10 -- # set +x
00:25:56.169  ************************************
00:25:56.169  END TEST dd_flags_misc_forced_aio
00:25:56.169  ************************************
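The runs above sweep every read-flag/write-flag pairing: iflag in {direct, nonblock} against oflag in {direct, nonblock, sync, dsync}. The very long [[ ... == \r\n\s\u... ]] lines are ordinary bash xtrace output, in which the quoted right-hand side of a string comparison is printed with every character backslash-escaped; each one is a plain equality check on the 512-byte generated payload, not a regex. A minimal sketch of the loop, reconstructed from the trace rather than copied from test/dd/posix.sh (SPDK_DD, SRC, DST and the gen_bytes stand-in are illustrative):

    #!/usr/bin/env bash
    SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
    SRC=dd.dump0 DST=dd.dump1
    # stand-in for the test's gen_bytes helper: N random alphanumeric bytes
    gen_bytes() { tr -dc 'a-z0-9' < /dev/urandom | head -c "$1"; }
    flags_ro=(direct nonblock)             # input flags seen in the log
    flags_rw=(direct nonblock sync dsync)  # output flags seen in the log
    for flag_ro in "${flags_ro[@]}"; do
      gen_bytes 512 > "$SRC"               # fresh payload per read flag
      for flag_rw in "${flags_rw[@]}"; do
        "$SPDK_DD" --aio --if="$SRC" --iflag="$flag_ro" \
                   --of="$DST" --oflag="$flag_rw"
        # the escaped xtrace comparison reduces to this equality test
        [[ $(< "$DST") == "$(< "$SRC")" ]] || exit 1
      done
    done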
00:25:56.169   17:09:48	-- dd/posix.sh@1 -- # cleanup
00:25:56.169   17:09:48	-- dd/posix.sh@11 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link
00:25:56.169   17:09:48	-- dd/posix.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link
00:25:56.169  
00:25:56.169  real	0m22.357s
00:25:56.169  user	0m9.876s
00:25:56.169  sys	0m6.345s
00:25:56.169   17:09:48	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:25:56.169   17:09:48	-- common/autotest_common.sh@10 -- # set +x
00:25:56.169  ************************************
00:25:56.169  END TEST spdk_dd_posix
00:25:56.169  ************************************
00:25:56.169   17:09:48	-- dd/dd.sh@22 -- # run_test spdk_dd_malloc /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh
00:25:56.169   17:09:48	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:25:56.169   17:09:48	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:25:56.169   17:09:48	-- common/autotest_common.sh@10 -- # set +x
00:25:56.169  ************************************
00:25:56.169  START TEST spdk_dd_malloc
00:25:56.169  ************************************
00:25:56.169   17:09:48	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh
00:25:56.169  * Looking for test storage...
00:25:56.169  * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd
00:25:56.169     17:09:49	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:25:56.169      17:09:49	-- common/autotest_common.sh@1690 -- # lcov --version
00:25:56.169      17:09:49	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:25:56.427     17:09:49	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:25:56.427     17:09:49	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:25:56.427     17:09:49	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:25:56.427     17:09:49	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:25:56.427     17:09:49	-- scripts/common.sh@335 -- # IFS=.-:
00:25:56.427     17:09:49	-- scripts/common.sh@335 -- # read -ra ver1
00:25:56.427     17:09:49	-- scripts/common.sh@336 -- # IFS=.-:
00:25:56.427     17:09:49	-- scripts/common.sh@336 -- # read -ra ver2
00:25:56.427     17:09:49	-- scripts/common.sh@337 -- # local 'op=<'
00:25:56.427     17:09:49	-- scripts/common.sh@339 -- # ver1_l=2
00:25:56.427     17:09:49	-- scripts/common.sh@340 -- # ver2_l=1
00:25:56.427     17:09:49	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:25:56.427     17:09:49	-- scripts/common.sh@343 -- # case "$op" in
00:25:56.427     17:09:49	-- scripts/common.sh@344 -- # : 1
00:25:56.427     17:09:49	-- scripts/common.sh@363 -- # (( v = 0 ))
00:25:56.427     17:09:49	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:25:56.427      17:09:49	-- scripts/common.sh@364 -- # decimal 1
00:25:56.427      17:09:49	-- scripts/common.sh@352 -- # local d=1
00:25:56.427      17:09:49	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:25:56.427      17:09:49	-- scripts/common.sh@354 -- # echo 1
00:25:56.427     17:09:49	-- scripts/common.sh@364 -- # ver1[v]=1
00:25:56.427      17:09:49	-- scripts/common.sh@365 -- # decimal 2
00:25:56.427      17:09:49	-- scripts/common.sh@352 -- # local d=2
00:25:56.427      17:09:49	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:25:56.427      17:09:49	-- scripts/common.sh@354 -- # echo 2
00:25:56.427     17:09:49	-- scripts/common.sh@365 -- # ver2[v]=2
00:25:56.427     17:09:49	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:25:56.427     17:09:49	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:25:56.427     17:09:49	-- scripts/common.sh@367 -- # return 0
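This preamble is scripts/common.sh working out whether the installed lcov predates version 2: lt 1.15 2 splits both version strings on '.', '-' and ':' and compares them field by field, and since it succeeds here the lcov 1.x coverage options are exported just below. A simplified reconstruction of the comparison, hedged to the behavior visible in the trace rather than the exact common.sh source:

    # compare dotted version strings field by field; lt A B succeeds when A < B
    cmp_versions() {
      local -a ver1 ver2
      local op=$2 v
      IFS='.-:' read -ra ver1 <<< "$1"
      IFS='.-:' read -ra ver2 <<< "$3"
      for (( v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++ )); do
        if (( ${ver1[v]:-0} > ${ver2[v]:-0} )); then [[ $op == '>' ]]; return; fi
        if (( ${ver1[v]:-0} < ${ver2[v]:-0} )); then [[ $op == '<' ]]; return; fi
      done
      [[ $op == *'='* ]]   # all fields equal: only ==, <= and >= succeed
    }
    lt() { cmp_versions "$1" '<' "$2"; }
    lt 1.15 2 && echo 'lcov is older than 2'   # first fields: 1 < 2, as traced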
00:25:56.427     17:09:49	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:25:56.427     17:09:49	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:25:56.427  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:25:56.427  		--rc genhtml_branch_coverage=1
00:25:56.427  		--rc genhtml_function_coverage=1
00:25:56.427  		--rc genhtml_legend=1
00:25:56.427  		--rc geninfo_all_blocks=1
00:25:56.427  		--rc geninfo_unexecuted_blocks=1
00:25:56.427  		
00:25:56.427  		'
00:25:56.427     17:09:49	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:25:56.427  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:25:56.427  		--rc genhtml_branch_coverage=1
00:25:56.427  		--rc genhtml_function_coverage=1
00:25:56.427  		--rc genhtml_legend=1
00:25:56.427  		--rc geninfo_all_blocks=1
00:25:56.427  		--rc geninfo_unexecuted_blocks=1
00:25:56.427  		
00:25:56.427  		'
00:25:56.427     17:09:49	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:25:56.427  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:25:56.427  		--rc genhtml_branch_coverage=1
00:25:56.427  		--rc genhtml_function_coverage=1
00:25:56.427  		--rc genhtml_legend=1
00:25:56.427  		--rc geninfo_all_blocks=1
00:25:56.427  		--rc geninfo_unexecuted_blocks=1
00:25:56.427  		
00:25:56.427  		'
00:25:56.427     17:09:49	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:25:56.427  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:25:56.427  		--rc genhtml_branch_coverage=1
00:25:56.427  		--rc genhtml_function_coverage=1
00:25:56.427  		--rc genhtml_legend=1
00:25:56.427  		--rc geninfo_all_blocks=1
00:25:56.427  		--rc geninfo_unexecuted_blocks=1
00:25:56.427  		
00:25:56.427  		'
00:25:56.427    17:09:49	-- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:25:56.427     17:09:49	-- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]]
00:25:56.427     17:09:49	-- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:25:56.427     17:09:49	-- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:25:56.427      17:09:49	-- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:25:56.427      17:09:49	-- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:25:56.427      17:09:49	-- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:25:56.427      17:09:49	-- paths/export.sh@5 -- # export PATH
00:25:56.427      17:09:49	-- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:25:56.427   17:09:49	-- dd/malloc.sh@38 -- # run_test dd_malloc_copy malloc_copy
00:25:56.427   17:09:49	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:25:56.427   17:09:49	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:25:56.427   17:09:49	-- common/autotest_common.sh@10 -- # set +x
00:25:56.427  ************************************
00:25:56.427  START TEST dd_malloc_copy
00:25:56.427  ************************************
00:25:56.427   17:09:49	-- common/autotest_common.sh@1114 -- # malloc_copy
00:25:56.427   17:09:49	-- dd/malloc.sh@12 -- # local mbdev0=malloc0 mbdev0_b=1048576 mbdev0_bs=512
00:25:56.427   17:09:49	-- dd/malloc.sh@13 -- # local mbdev1=malloc1 mbdev1_b=1048576 mbdev1_bs=512
00:25:56.427   17:09:49	-- dd/malloc.sh@15 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512')
00:25:56.427   17:09:49	-- dd/malloc.sh@15 -- # local -A method_bdev_malloc_create_0
00:25:56.427   17:09:49	-- dd/malloc.sh@21 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='1048576' ['block_size']='512')
00:25:56.427   17:09:49	-- dd/malloc.sh@21 -- # local -A method_bdev_malloc_create_1
00:25:56.427   17:09:49	-- dd/malloc.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/fd/62
00:25:56.427    17:09:49	-- dd/malloc.sh@28 -- # gen_conf
00:25:56.427    17:09:49	-- dd/common.sh@31 -- # xtrace_disable
00:25:56.427    17:09:49	-- common/autotest_common.sh@10 -- # set +x
00:25:56.427  [2024-11-19 17:09:49.166885] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:25:56.427  [2024-11-19 17:09:49.167262] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144894 ]
00:25:56.427  {
00:25:56.427    "subsystems": [
00:25:56.427      {
00:25:56.427        "subsystem": "bdev",
00:25:56.427        "config": [
00:25:56.427          {
00:25:56.427            "params": {
00:25:56.427              "block_size": 512,
00:25:56.427              "num_blocks": 1048576,
00:25:56.427              "name": "malloc0"
00:25:56.427            },
00:25:56.427            "method": "bdev_malloc_create"
00:25:56.427          },
00:25:56.427          {
00:25:56.427            "params": {
00:25:56.427              "block_size": 512,
00:25:56.427              "num_blocks": 1048576,
00:25:56.427              "name": "malloc1"
00:25:56.427            },
00:25:56.427            "method": "bdev_malloc_create"
00:25:56.427          },
00:25:56.427          {
00:25:56.427            "method": "bdev_wait_for_examine"
00:25:56.427          }
00:25:56.427        ]
00:25:56.427      }
00:25:56.427    ]
00:25:56.427  }
00:25:56.685  [2024-11-19 17:09:49.327135] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:56.685  [2024-11-19 17:09:49.371290] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:25:58.059  
[2024-11-19T17:09:51.858Z] Copying: 223/512 [MB] (223 MBps)
[2024-11-19T17:09:52.117Z] Copying: 448/512 [MB] (224 MBps)
[2024-11-19T17:09:52.683Z] Copying: 512/512 [MB] (average 223 MBps)
00:25:59.819  
00:25:59.819   17:09:52	-- dd/malloc.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc1 --ob=malloc0 --json /dev/fd/62
00:25:59.819    17:09:52	-- dd/malloc.sh@33 -- # gen_conf
00:25:59.819    17:09:52	-- dd/common.sh@31 -- # xtrace_disable
00:25:59.819    17:09:52	-- common/autotest_common.sh@10 -- # set +x
00:25:59.819  {
00:25:59.819    "subsystems": [
00:25:59.819      {
00:25:59.819        "subsystem": "bdev",
00:25:59.820        "config": [
00:25:59.820          {
00:25:59.820            "params": {
00:25:59.820              "block_size": 512,
00:25:59.820              "num_blocks": 1048576,
00:25:59.820              "name": "malloc0"
00:25:59.820            },
00:25:59.820            "method": "bdev_malloc_create"
00:25:59.820          },
00:25:59.820          {
00:25:59.820            "params": {
00:25:59.820              "block_size": 512,
00:25:59.820              "num_blocks": 1048576,
00:25:59.820              "name": "malloc1"
00:25:59.820            },
00:25:59.820            "method": "bdev_malloc_create"
00:25:59.820          },
00:25:59.820          {
00:25:59.820            "method": "bdev_wait_for_examine"
00:25:59.820          }
00:25:59.820        ]
00:25:59.820      }
00:25:59.820    ]
00:25:59.820  }
00:25:59.820  [2024-11-19 17:09:52.608029] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:25:59.820  [2024-11-19 17:09:52.608787] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144944 ]
00:26:00.078  [2024-11-19 17:09:52.762878] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:00.078  [2024-11-19 17:09:52.810079] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:26:01.480  
[2024-11-19T17:09:55.279Z] Copying: 226/512 [MB] (226 MBps)
[2024-11-19T17:09:55.542Z] Copying: 451/512 [MB] (225 MBps)
[2024-11-19T17:09:56.109Z] Copying: 512/512 [MB] (average 226 MBps)
00:26:03.245  
00:26:03.245  
00:26:03.245  real	0m6.860s
00:26:03.245  user	0m5.854s
00:26:03.245  sys	0m0.879s
00:26:03.245   17:09:55	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:26:03.245   17:09:55	-- common/autotest_common.sh@10 -- # set +x
00:26:03.245  ************************************
00:26:03.245  END TEST dd_malloc_copy
00:26:03.245  ************************************
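dd_malloc_copy declares two ramdisk (malloc) bdevs through a JSON config handed over on an anonymous file descriptor (the --json /dev/fd/62 in the trace comes from process substitution) and copies one into the other in both directions. Each bdev is 1048576 blocks of 512 bytes, i.e. 512 MiB, so the ~224 MBps averages are consistent with the ~6.9 s wall time for the two copies plus their EAL start-ups. A hedged sketch of one direction, reconstructed from the log rather than from dd/malloc.sh:

    SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
    conf='{"subsystems":[{"subsystem":"bdev","config":[
      {"method":"bdev_malloc_create","params":{"name":"malloc0","num_blocks":1048576,"block_size":512}},
      {"method":"bdev_malloc_create","params":{"name":"malloc1","num_blocks":1048576,"block_size":512}},
      {"method":"bdev_wait_for_examine"}]}]}'
    # process substitution delivers the config as /dev/fd/NN
    "$SPDK_DD" --ib=malloc0 --ob=malloc1 --json <(printf '%s' "$conf")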
00:26:03.245  
00:26:03.245  real	0m7.094s
00:26:03.245  user	0m6.001s
00:26:03.245  sys	0m0.984s
00:26:03.245   17:09:55	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:26:03.245   17:09:55	-- common/autotest_common.sh@10 -- # set +x
00:26:03.245  ************************************
00:26:03.245  END TEST spdk_dd_malloc
00:26:03.245  ************************************
00:26:03.245   17:09:56	-- dd/dd.sh@23 -- # run_test spdk_dd_bdev_to_bdev /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:06.0
00:26:03.245   17:09:56	-- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']'
00:26:03.245   17:09:56	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:26:03.245   17:09:56	-- common/autotest_common.sh@10 -- # set +x
00:26:03.245  ************************************
00:26:03.245  START TEST spdk_dd_bdev_to_bdev
00:26:03.245  ************************************
00:26:03.245   17:09:56	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:06.0
00:26:03.503  * Looking for test storage...
00:26:03.503  * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd
00:26:03.504   17:09:56	-- dd/bdev_to_bdev.sh@10 -- # nvmes=("$@")
00:26:03.504   17:09:56	-- dd/bdev_to_bdev.sh@47 -- # trap cleanup EXIT
00:26:03.504   17:09:56	-- dd/bdev_to_bdev.sh@49 -- # bs=1048576
00:26:03.504   17:09:56	-- dd/bdev_to_bdev.sh@51 -- # (( 1 > 1 ))
00:26:03.504   17:09:56	-- dd/bdev_to_bdev.sh@67 -- # nvme0=Nvme0
00:26:03.504   17:09:56	-- dd/bdev_to_bdev.sh@67 -- # bdev0=Nvme0n1
00:26:03.504   17:09:56	-- dd/bdev_to_bdev.sh@67 -- # nvme0_pci=0000:00:06.0
00:26:03.504   17:09:56	-- dd/bdev_to_bdev.sh@68 -- # aio1=/home/vagrant/spdk_repo/spdk/test/dd/aio1
00:26:03.504   17:09:56	-- dd/bdev_to_bdev.sh@68 -- # bdev1=aio1
00:26:03.504   17:09:56	-- dd/bdev_to_bdev.sh@70 -- # method_bdev_nvme_attach_controller_1=(['name']='Nvme0' ['traddr']='0000:00:06.0' ['trtype']='pcie')
00:26:03.504   17:09:56	-- dd/bdev_to_bdev.sh@70 -- # declare -A method_bdev_nvme_attach_controller_1
00:26:03.504   17:09:56	-- dd/bdev_to_bdev.sh@75 -- # method_bdev_aio_create_0=(['name']='aio1' ['filename']='/home/vagrant/spdk_repo/spdk/test/dd/aio1' ['block_size']='4096')
00:26:03.504   17:09:56	-- dd/bdev_to_bdev.sh@75 -- # declare -A method_bdev_aio_create_0
00:26:03.504   17:09:56	-- dd/bdev_to_bdev.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/aio1 --bs=1048576 --count=256
00:26:03.504  [2024-11-19 17:09:56.315521] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:26:03.504  [2024-11-19 17:09:56.315778] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145064 ]
00:26:03.762  [2024-11-19 17:09:56.470950] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:03.762  [2024-11-19 17:09:56.514353] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:26:04.020  
[2024-11-19T17:09:57.142Z] Copying: 256/256 [MB] (average 1185 MBps)
00:26:04.278  
00:26:04.278   17:09:57	-- dd/bdev_to_bdev.sh@89 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
00:26:04.278   17:09:57	-- dd/bdev_to_bdev.sh@90 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
00:26:04.278   17:09:57	-- dd/bdev_to_bdev.sh@92 -- # magic='This Is Our Magic, find it'
00:26:04.278   17:09:57	-- dd/bdev_to_bdev.sh@93 -- # echo 'This Is Our Magic, find it'
00:26:04.278   17:09:57	-- dd/bdev_to_bdev.sh@96 -- # run_test dd_inflate_file /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64
00:26:04.278   17:09:57	-- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']'
00:26:04.278   17:09:57	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:26:04.278   17:09:57	-- common/autotest_common.sh@10 -- # set +x
00:26:04.278  ************************************
00:26:04.278  START TEST dd_inflate_file
00:26:04.278  ************************************
00:26:04.278   17:09:57	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64
00:26:04.537  [2024-11-19 17:09:57.149598] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:26:04.537  [2024-11-19 17:09:57.149852] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145080 ]
00:26:04.537  [2024-11-19 17:09:57.303101] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:04.537  [2024-11-19 17:09:57.350602] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:26:04.795  
[2024-11-19T17:09:57.918Z] Copying: 64/64 [MB] (average 1163 MBps)
00:26:05.054  
00:26:05.054  
00:26:05.054  real	0m0.657s
00:26:05.054  user	0m0.286s
00:26:05.054  sys	0m0.243s
00:26:05.054   17:09:57	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:26:05.054   17:09:57	-- common/autotest_common.sh@10 -- # set +x
00:26:05.054  ************************************
00:26:05.054  END TEST dd_inflate_file
00:26:05.054  ************************************
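The inflate step appends 64 MiB of zeros from /dev/zero (--bs=1048576 --count=64, --oflag=append) after the 27-byte magic line already written to dd.dump0, which is why the wc -c just below reports 67108891 bytes: 64 * 1024 * 1024 = 67108864, plus the 26-character magic and its newline. A minimal equivalent, with paths illustrative:

    SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
    echo 'This Is Our Magic, find it' > dd.dump0   # 26 chars + newline = 27 bytes
    "$SPDK_DD" --if=/dev/zero --of=dd.dump0 --oflag=append --bs=1048576 --count=64
    wc -c < dd.dump0                               # 67108864 + 27 = 67108891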
00:26:05.054    17:09:57	-- dd/bdev_to_bdev.sh@104 -- # wc -c
00:26:05.054   17:09:57	-- dd/bdev_to_bdev.sh@104 -- # test_file0_size=67108891
00:26:05.054   17:09:57	-- dd/bdev_to_bdev.sh@107 -- # run_test dd_copy_to_out_bdev /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62
00:26:05.054    17:09:57	-- dd/bdev_to_bdev.sh@107 -- # gen_conf
00:26:05.054   17:09:57	-- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']'
00:26:05.054   17:09:57	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:26:05.054   17:09:57	-- common/autotest_common.sh@10 -- # set +x
00:26:05.054    17:09:57	-- dd/common.sh@31 -- # xtrace_disable
00:26:05.054    17:09:57	-- common/autotest_common.sh@10 -- # set +x
00:26:05.054  ************************************
00:26:05.054  START TEST dd_copy_to_out_bdev
00:26:05.054  ************************************
00:26:05.054   17:09:57	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62
00:26:05.054  {
00:26:05.054    "subsystems": [
00:26:05.054      {
00:26:05.054        "subsystem": "bdev",
00:26:05.054        "config": [
00:26:05.054          {
00:26:05.054            "params": {
00:26:05.054              "block_size": 4096,
00:26:05.054              "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1",
00:26:05.054              "name": "aio1"
00:26:05.054            },
00:26:05.054            "method": "bdev_aio_create"
00:26:05.054          },
00:26:05.054          {
00:26:05.054            "params": {
00:26:05.054              "trtype": "pcie",
00:26:05.054              "traddr": "0000:00:06.0",
00:26:05.054              "name": "Nvme0"
00:26:05.054            },
00:26:05.054            "method": "bdev_nvme_attach_controller"
00:26:05.054          },
00:26:05.054          {
00:26:05.054            "method": "bdev_wait_for_examine"
00:26:05.054          }
00:26:05.054        ]
00:26:05.054      }
00:26:05.054    ]
00:26:05.054  }
00:26:05.054  [2024-11-19 17:09:57.874533] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:26:05.054  [2024-11-19 17:09:57.874942] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145118 ]
00:26:05.312  [2024-11-19 17:09:58.025293] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:05.312  [2024-11-19 17:09:58.071990] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:26:06.686  
[2024-11-19T17:09:59.550Z] Copying: 64/64 [MB] (average 73 MBps)
00:26:06.686  
00:26:06.686  
00:26:06.686  real	0m1.636s
00:26:06.686  user	0m1.293s
00:26:06.686  sys	0m0.228s
00:26:06.686   17:09:59	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:26:06.686  ************************************
00:26:06.686  END TEST dd_copy_to_out_bdev
00:26:06.686   17:09:59	-- common/autotest_common.sh@10 -- # set +x
00:26:06.686  ************************************
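This step streams the inflated dd.dump0 into the NVMe namespace bdev Nvme0n1. The JSON dumped above attaches the controller at PCI address 0000:00:06.0 as Nvme0 and wraps the test/dd/aio1 file in a 4096-byte-block AIO bdev for the offset tests that follow. A hedged equivalent of the invocation, with the config reduced to the methods shown in the log:

    SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
    conf='{"subsystems":[{"subsystem":"bdev","config":[
      {"method":"bdev_aio_create","params":{"name":"aio1","filename":"/home/vagrant/spdk_repo/spdk/test/dd/aio1","block_size":4096}},
      {"method":"bdev_nvme_attach_controller","params":{"name":"Nvme0","trtype":"pcie","traddr":"0000:00:06.0"}},
      {"method":"bdev_wait_for_examine"}]}]}'
    "$SPDK_DD" --if=dd.dump0 --ob=Nvme0n1 --json <(printf '%s' "$conf")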
00:26:06.686   17:09:59	-- dd/bdev_to_bdev.sh@113 -- # count=65
00:26:06.686   17:09:59	-- dd/bdev_to_bdev.sh@115 -- # run_test dd_offset_magic offset_magic
00:26:06.686   17:09:59	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:26:06.686   17:09:59	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:26:06.686   17:09:59	-- common/autotest_common.sh@10 -- # set +x
00:26:06.686  ************************************
00:26:06.686  START TEST dd_offset_magic
00:26:06.686  ************************************
00:26:06.686   17:09:59	-- common/autotest_common.sh@1114 -- # offset_magic
00:26:06.686   17:09:59	-- dd/bdev_to_bdev.sh@13 -- # local magic_check
00:26:06.686   17:09:59	-- dd/bdev_to_bdev.sh@14 -- # local offsets offset
00:26:06.686   17:09:59	-- dd/bdev_to_bdev.sh@16 -- # offsets=(16 64)
00:26:06.686   17:09:59	-- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}"
00:26:06.686   17:09:59	-- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=aio1 --count=65 --seek=16 --bs=1048576 --json /dev/fd/62
00:26:06.686    17:09:59	-- dd/bdev_to_bdev.sh@20 -- # gen_conf
00:26:06.686    17:09:59	-- dd/common.sh@31 -- # xtrace_disable
00:26:06.686    17:09:59	-- common/autotest_common.sh@10 -- # set +x
00:26:06.945  {
00:26:06.945    "subsystems": [
00:26:06.945      {
00:26:06.945        "subsystem": "bdev",
00:26:06.945        "config": [
00:26:06.945          {
00:26:06.945            "params": {
00:26:06.945              "block_size": 4096,
00:26:06.945              "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1",
00:26:06.945              "name": "aio1"
00:26:06.945            },
00:26:06.945            "method": "bdev_aio_create"
00:26:06.945          },
00:26:06.945          {
00:26:06.945            "params": {
00:26:06.945              "trtype": "pcie",
00:26:06.945              "traddr": "0000:00:06.0",
00:26:06.945              "name": "Nvme0"
00:26:06.945            },
00:26:06.945            "method": "bdev_nvme_attach_controller"
00:26:06.945          },
00:26:06.945          {
00:26:06.945            "method": "bdev_wait_for_examine"
00:26:06.945          }
00:26:06.945        ]
00:26:06.945      }
00:26:06.945    ]
00:26:06.945  }
00:26:06.945  [2024-11-19 17:09:59.586969] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:26:06.945  [2024-11-19 17:09:59.587219] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145167 ]
00:26:06.945  [2024-11-19 17:09:59.740300] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:06.945  [2024-11-19 17:09:59.785184] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:26:07.880  
[2024-11-19T17:10:00.744Z] Copying: 65/65 [MB] (average 156 MBps)
00:26:07.880  
00:26:07.880    17:10:00	-- dd/bdev_to_bdev.sh@28 -- # gen_conf
00:26:07.880   17:10:00	-- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=aio1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=16 --bs=1048576 --json /dev/fd/62
00:26:07.880    17:10:00	-- dd/common.sh@31 -- # xtrace_disable
00:26:07.880    17:10:00	-- common/autotest_common.sh@10 -- # set +x
00:26:07.880  {
00:26:07.880    "subsystems": [
00:26:07.880      {
00:26:07.880        "subsystem": "bdev",
00:26:07.880        "config": [
00:26:07.880          {
00:26:07.880            "params": {
00:26:07.880              "block_size": 4096,
00:26:07.880              "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1",
00:26:07.880              "name": "aio1"
00:26:07.880            },
00:26:07.880            "method": "bdev_aio_create"
00:26:07.880          },
00:26:07.880          {
00:26:07.880            "params": {
00:26:07.880              "trtype": "pcie",
00:26:07.880              "traddr": "0000:00:06.0",
00:26:07.880              "name": "Nvme0"
00:26:07.880            },
00:26:07.880            "method": "bdev_nvme_attach_controller"
00:26:07.881          },
00:26:07.881          {
00:26:07.881            "method": "bdev_wait_for_examine"
00:26:07.881          }
00:26:07.881        ]
00:26:07.881      }
00:26:07.881    ]
00:26:07.881  }
00:26:08.139  [2024-11-19 17:10:00.735085] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:26:08.139  [2024-11-19 17:10:00.735322] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145193 ]
00:26:08.139  [2024-11-19 17:10:00.886949] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:08.139  [2024-11-19 17:10:00.932385] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:26:08.397  
[2024-11-19T17:10:01.520Z] Copying: 1024/1024 [kB] (average 500 MBps)
00:26:08.656  
00:26:08.656   17:10:01	-- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check
00:26:08.656   17:10:01	-- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]]
00:26:08.656   17:10:01	-- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}"
00:26:08.656   17:10:01	-- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=aio1 --count=65 --seek=64 --bs=1048576 --json /dev/fd/62
00:26:08.656    17:10:01	-- dd/bdev_to_bdev.sh@20 -- # gen_conf
00:26:08.656    17:10:01	-- dd/common.sh@31 -- # xtrace_disable
00:26:08.656    17:10:01	-- common/autotest_common.sh@10 -- # set +x
00:26:08.914  [2024-11-19 17:10:01.556097] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:26:08.914  [2024-11-19 17:10:01.556347] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145215 ]
00:26:08.914  {
00:26:08.914    "subsystems": [
00:26:08.914      {
00:26:08.914        "subsystem": "bdev",
00:26:08.914        "config": [
00:26:08.914          {
00:26:08.914            "params": {
00:26:08.914              "block_size": 4096,
00:26:08.914              "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1",
00:26:08.914              "name": "aio1"
00:26:08.914            },
00:26:08.914            "method": "bdev_aio_create"
00:26:08.914          },
00:26:08.914          {
00:26:08.914            "params": {
00:26:08.914              "trtype": "pcie",
00:26:08.914              "traddr": "0000:00:06.0",
00:26:08.914              "name": "Nvme0"
00:26:08.914            },
00:26:08.914            "method": "bdev_nvme_attach_controller"
00:26:08.914          },
00:26:08.914          {
00:26:08.914            "method": "bdev_wait_for_examine"
00:26:08.914          }
00:26:08.914        ]
00:26:08.914      }
00:26:08.914    ]
00:26:08.914  }
00:26:08.914  [2024-11-19 17:10:01.716356] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:09.173  [2024-11-19 17:10:01.774566] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:26:09.432  
[2024-11-19T17:10:02.863Z] Copying: 65/65 [MB] (average 201 MBps)
00:26:09.999  
00:26:09.999   17:10:02	-- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=aio1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=64 --bs=1048576 --json /dev/fd/62
00:26:09.999    17:10:02	-- dd/bdev_to_bdev.sh@28 -- # gen_conf
00:26:09.999    17:10:02	-- dd/common.sh@31 -- # xtrace_disable
00:26:09.999    17:10:02	-- common/autotest_common.sh@10 -- # set +x
00:26:09.999  {
00:26:09.999    "subsystems": [
00:26:09.999      {
00:26:09.999        "subsystem": "bdev",
00:26:09.999        "config": [
00:26:09.999          {
00:26:09.999            "params": {
00:26:09.999              "block_size": 4096,
00:26:09.999              "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1",
00:26:09.999              "name": "aio1"
00:26:09.999            },
00:26:09.999            "method": "bdev_aio_create"
00:26:09.999          },
00:26:09.999          {
00:26:09.999            "params": {
00:26:09.999              "trtype": "pcie",
00:26:09.999              "traddr": "0000:00:06.0",
00:26:09.999              "name": "Nvme0"
00:26:09.999            },
00:26:09.999            "method": "bdev_nvme_attach_controller"
00:26:09.999          },
00:26:09.999          {
00:26:09.999            "method": "bdev_wait_for_examine"
00:26:09.999          }
00:26:09.999        ]
00:26:09.999      }
00:26:09.999    ]
00:26:09.999  }
00:26:09.999  [2024-11-19 17:10:02.703434] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:26:09.999  [2024-11-19 17:10:02.703683] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145237 ]
00:26:10.258  [2024-11-19 17:10:02.862392] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:10.258  [2024-11-19 17:10:02.916232] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:26:10.518  
[2024-11-19T17:10:03.641Z] Copying: 1024/1024 [kB] (average 500 MBps)
00:26:10.777  
00:26:10.777   17:10:03	-- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check
00:26:10.777   17:10:03	-- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]]
00:26:10.777  
00:26:10.777  real	0m3.951s
00:26:10.777  user	0m1.806s
00:26:10.777  sys	0m1.052s
00:26:10.777   17:10:03	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:26:10.777   17:10:03	-- common/autotest_common.sh@10 -- # set +x
00:26:10.777  ************************************
00:26:10.777  END TEST dd_offset_magic
00:26:10.777  ************************************
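The [[ ... ]] check above is the point of the whole test: an earlier step wrote the 26-byte string 'This Is Our Magic, find it' at a known offset, spdk_dd dumped that region back out, and read -rn26 pulls exactly those bytes for comparison. The same round-trip idea in isolation (the file name is a stand-in, not this run's dump path):
# Sketch of the magic-string round-trip check (dd.dump1 is a stand-in file).
magic='This Is Our Magic, find it'
printf '%s' "$magic" > dd.dump1             # stands in for the region spdk_dd dumped
read -rn"${#magic}" magic_check < dd.dump1  # read back exactly 26 bytes
[[ $magic_check == "$magic" ]] && echo 'magic survived the offset copy'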
00:26:10.777   17:10:03	-- dd/bdev_to_bdev.sh@1 -- # cleanup
00:26:10.777   17:10:03	-- dd/bdev_to_bdev.sh@42 -- # clear_nvme Nvme0n1 '' 4194330
00:26:10.777   17:10:03	-- dd/common.sh@10 -- # local bdev=Nvme0n1
00:26:10.777   17:10:03	-- dd/common.sh@11 -- # local nvme_ref=
00:26:10.777   17:10:03	-- dd/common.sh@12 -- # local size=4194330
00:26:10.777   17:10:03	-- dd/common.sh@14 -- # local bs=1048576
00:26:10.777   17:10:03	-- dd/common.sh@15 -- # local count=5
00:26:10.777   17:10:03	-- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json /dev/fd/62
00:26:10.777    17:10:03	-- dd/common.sh@18 -- # gen_conf
00:26:10.777    17:10:03	-- dd/common.sh@31 -- # xtrace_disable
00:26:10.777    17:10:03	-- common/autotest_common.sh@10 -- # set +x
00:26:10.777  {
00:26:10.777    "subsystems": [
00:26:10.777      {
00:26:10.777        "subsystem": "bdev",
00:26:10.777        "config": [
00:26:10.777          {
00:26:10.777            "params": {
00:26:10.777              "block_size": 4096,
00:26:10.777              "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1",
00:26:10.777              "name": "aio1"
00:26:10.777            },
00:26:10.777            "method": "bdev_aio_create"
00:26:10.777          },
00:26:10.777          {
00:26:10.777            "params": {
00:26:10.777              "trtype": "pcie",
00:26:10.777              "traddr": "0000:00:06.0",
00:26:10.777              "name": "Nvme0"
00:26:10.777            },
00:26:10.777            "method": "bdev_nvme_attach_controller"
00:26:10.777          },
00:26:10.777          {
00:26:10.777            "method": "bdev_wait_for_examine"
00:26:10.777          }
00:26:10.777        ]
00:26:10.777      }
00:26:10.777    ]
00:26:10.777  }
00:26:10.777  [2024-11-19 17:10:03.582730] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:26:10.777  [2024-11-19 17:10:03.583139] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145273 ]
00:26:11.036  [2024-11-19 17:10:03.737959] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:11.036  [2024-11-19 17:10:03.789221] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:26:11.294  
[2024-11-19T17:10:04.416Z] Copying: 5120/5120 [kB] (average 1000 MBps)
00:26:11.552  
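For reference, the --count=5 used by the clear_nvme cleanup above follows from rounding the requested byte size up to whole 1 MiB I/O units before zero-filling the bdev from /dev/zero; a sketch of that arithmetic (variable names illustrative, not the dd/common.sh source):
# Sketch: ceil-divide the size into 1 MiB units, matching clear_nvme's --count=5.
size=4194330 bs=1048576
count=$(( (size + bs - 1) / bs ))
echo "$count"   # 5 -> 4194330 bytes need five 1 MiB writes (4*1048576 = 4194304 < 4194330)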
00:26:11.552   17:10:04	-- dd/bdev_to_bdev.sh@43 -- # clear_nvme aio1 '' 4194330
00:26:11.552   17:10:04	-- dd/common.sh@10 -- # local bdev=aio1
00:26:11.552   17:10:04	-- dd/common.sh@11 -- # local nvme_ref=
00:26:11.552   17:10:04	-- dd/common.sh@12 -- # local size=4194330
00:26:11.552   17:10:04	-- dd/common.sh@14 -- # local bs=1048576
00:26:11.552   17:10:04	-- dd/common.sh@15 -- # local count=5
00:26:11.552   17:10:04	-- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=aio1 --count=5 --json /dev/fd/62
00:26:11.552    17:10:04	-- dd/common.sh@18 -- # gen_conf
00:26:11.552    17:10:04	-- dd/common.sh@31 -- # xtrace_disable
00:26:11.552    17:10:04	-- common/autotest_common.sh@10 -- # set +x
00:26:11.552  [2024-11-19 17:10:04.339439] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:26:11.552  [2024-11-19 17:10:04.339927] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145291 ]
00:26:11.552  {
00:26:11.552    "subsystems": [
00:26:11.552      {
00:26:11.552        "subsystem": "bdev",
00:26:11.552        "config": [
00:26:11.552          {
00:26:11.552            "params": {
00:26:11.552              "block_size": 4096,
00:26:11.552              "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1",
00:26:11.552              "name": "aio1"
00:26:11.552            },
00:26:11.552            "method": "bdev_aio_create"
00:26:11.552          },
00:26:11.552          {
00:26:11.552            "params": {
00:26:11.552              "trtype": "pcie",
00:26:11.552              "traddr": "0000:00:06.0",
00:26:11.552              "name": "Nvme0"
00:26:11.552            },
00:26:11.552            "method": "bdev_nvme_attach_controller"
00:26:11.552          },
00:26:11.552          {
00:26:11.552            "method": "bdev_wait_for_examine"
00:26:11.552          }
00:26:11.552        ]
00:26:11.552      }
00:26:11.552    ]
00:26:11.552  }
00:26:11.811  [2024-11-19 17:10:04.495389] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:11.811  [2024-11-19 17:10:04.543701] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:26:12.070  
[2024-11-19T17:10:05.193Z] Copying: 5120/5120 [kB] (average 227 MBps)
00:26:12.329  
00:26:12.329   17:10:05	-- dd/bdev_to_bdev.sh@44 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/aio1
00:26:12.329  ************************************
00:26:12.329  END TEST spdk_dd_bdev_to_bdev
00:26:12.329  ************************************
00:26:12.329  
00:26:12.329  real	0m9.100s
00:26:12.329  user	0m4.760s
00:26:12.329  sys	0m2.632s
00:26:12.329   17:10:05	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:26:12.329   17:10:05	-- common/autotest_common.sh@10 -- # set +x
00:26:12.589   17:10:05	-- dd/dd.sh@24 -- # (( SPDK_TEST_URING == 1 ))
00:26:12.589   17:10:05	-- dd/dd.sh@27 -- # run_test spdk_dd_sparse /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh
00:26:12.589   17:10:05	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:26:12.589   17:10:05	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:26:12.589   17:10:05	-- common/autotest_common.sh@10 -- # set +x
00:26:12.589  ************************************
00:26:12.589  START TEST spdk_dd_sparse
00:26:12.589  ************************************
00:26:12.589   17:10:05	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh
00:26:12.589  * Looking for test storage...
00:26:12.589  * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd
00:26:12.589     17:10:05	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:26:12.589      17:10:05	-- common/autotest_common.sh@1690 -- # lcov --version
00:26:12.589      17:10:05	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:26:12.589     17:10:05	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:26:12.589     17:10:05	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:26:12.589     17:10:05	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:26:12.589     17:10:05	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:26:12.589     17:10:05	-- scripts/common.sh@335 -- # IFS=.-:
00:26:12.589     17:10:05	-- scripts/common.sh@335 -- # read -ra ver1
00:26:12.589     17:10:05	-- scripts/common.sh@336 -- # IFS=.-:
00:26:12.589     17:10:05	-- scripts/common.sh@336 -- # read -ra ver2
00:26:12.589     17:10:05	-- scripts/common.sh@337 -- # local 'op=<'
00:26:12.589     17:10:05	-- scripts/common.sh@339 -- # ver1_l=2
00:26:12.589     17:10:05	-- scripts/common.sh@340 -- # ver2_l=1
00:26:12.589     17:10:05	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:26:12.589     17:10:05	-- scripts/common.sh@343 -- # case "$op" in
00:26:12.589     17:10:05	-- scripts/common.sh@344 -- # : 1
00:26:12.589     17:10:05	-- scripts/common.sh@363 -- # (( v = 0 ))
00:26:12.589     17:10:05	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:26:12.589      17:10:05	-- scripts/common.sh@364 -- # decimal 1
00:26:12.589      17:10:05	-- scripts/common.sh@352 -- # local d=1
00:26:12.589      17:10:05	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:26:12.589      17:10:05	-- scripts/common.sh@354 -- # echo 1
00:26:12.589     17:10:05	-- scripts/common.sh@364 -- # ver1[v]=1
00:26:12.589      17:10:05	-- scripts/common.sh@365 -- # decimal 2
00:26:12.589      17:10:05	-- scripts/common.sh@352 -- # local d=2
00:26:12.589      17:10:05	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:26:12.589      17:10:05	-- scripts/common.sh@354 -- # echo 2
00:26:12.589     17:10:05	-- scripts/common.sh@365 -- # ver2[v]=2
00:26:12.589     17:10:05	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:26:12.589     17:10:05	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:26:12.589     17:10:05	-- scripts/common.sh@367 -- # return 0
00:26:12.589     17:10:05	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:26:12.589     17:10:05	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:26:12.589  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:26:12.589  		--rc genhtml_branch_coverage=1
00:26:12.589  		--rc genhtml_function_coverage=1
00:26:12.589  		--rc genhtml_legend=1
00:26:12.589  		--rc geninfo_all_blocks=1
00:26:12.589  		--rc geninfo_unexecuted_blocks=1
00:26:12.589  		
00:26:12.589  		'
00:26:12.589     17:10:05	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:26:12.589  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:26:12.589  		--rc genhtml_branch_coverage=1
00:26:12.589  		--rc genhtml_function_coverage=1
00:26:12.589  		--rc genhtml_legend=1
00:26:12.589  		--rc geninfo_all_blocks=1
00:26:12.589  		--rc geninfo_unexecuted_blocks=1
00:26:12.589  		
00:26:12.589  		'
00:26:12.589     17:10:05	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:26:12.589  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:26:12.589  		--rc genhtml_branch_coverage=1
00:26:12.589  		--rc genhtml_function_coverage=1
00:26:12.589  		--rc genhtml_legend=1
00:26:12.589  		--rc geninfo_all_blocks=1
00:26:12.589  		--rc geninfo_unexecuted_blocks=1
00:26:12.589  		
00:26:12.589  		'
00:26:12.589     17:10:05	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:26:12.589  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:26:12.589  		--rc genhtml_branch_coverage=1
00:26:12.589  		--rc genhtml_function_coverage=1
00:26:12.589  		--rc genhtml_legend=1
00:26:12.589  		--rc geninfo_all_blocks=1
00:26:12.589  		--rc geninfo_unexecuted_blocks=1
00:26:12.589  		
00:26:12.589  		'
00:26:12.589    17:10:05	-- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:26:12.589     17:10:05	-- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]]
00:26:12.589     17:10:05	-- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:26:12.589     17:10:05	-- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:26:12.589      17:10:05	-- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:26:12.589      17:10:05	-- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:26:12.590      17:10:05	-- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:26:12.590      17:10:05	-- paths/export.sh@5 -- # export PATH
00:26:12.590      17:10:05	-- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:26:12.590   17:10:05	-- dd/sparse.sh@108 -- # aio_disk=dd_sparse_aio_disk
00:26:12.590   17:10:05	-- dd/sparse.sh@109 -- # aio_bdev=dd_aio
00:26:12.590   17:10:05	-- dd/sparse.sh@110 -- # file1=file_zero1
00:26:12.590   17:10:05	-- dd/sparse.sh@111 -- # file2=file_zero2
00:26:12.590   17:10:05	-- dd/sparse.sh@112 -- # file3=file_zero3
00:26:12.590   17:10:05	-- dd/sparse.sh@113 -- # lvstore=dd_lvstore
00:26:12.590   17:10:05	-- dd/sparse.sh@114 -- # lvol=dd_lvol
00:26:12.590   17:10:05	-- dd/sparse.sh@116 -- # trap cleanup EXIT
00:26:12.590   17:10:05	-- dd/sparse.sh@118 -- # prepare
00:26:12.590   17:10:05	-- dd/sparse.sh@18 -- # truncate dd_sparse_aio_disk --size 104857600
00:26:12.590   17:10:05	-- dd/sparse.sh@20 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1
00:26:12.854  1+0 records in
00:26:12.854  1+0 records out
00:26:12.854  4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.0107489 s, 390 MB/s
00:26:12.854   17:10:05	-- dd/sparse.sh@21 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4
00:26:12.854  1+0 records in
00:26:12.854  1+0 records out
00:26:12.854  4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.0125448 s, 334 MB/s
00:26:12.854   17:10:05	-- dd/sparse.sh@22 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8
00:26:12.854  1+0 records in
00:26:12.854  1+0 records out
00:26:12.854  4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.0142549 s, 294 MB/s
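The three dd writes above lay out file_zero1 deliberately: with bs=4M, seek=0/4/8 place 4 MiB of data at byte offsets 0, 16 MiB and 32 MiB, giving a 36 MiB logical file of which only 12 MiB is allocated. That layout is exactly what the stat checks later in this test measure; the numbers, for reference:
# Layout produced by the prepare step (bs=4M, seek counted in 4 MiB units):
#   seek=0 -> data in [0M,4M)   seek=4 -> [16M,20M)   seek=8 -> [32M,36M)
# logical size: 9 * 4 MiB = 37748736 bytes        (stat --printf=%s)
# allocated:    3 * 4 MiB = 24576 blocks of 512 B (stat --printf=%b)
stat --printf='%s %b\n' file_zero1   # expected: 37748736 24576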
00:26:12.854   17:10:05	-- dd/sparse.sh@120 -- # run_test dd_sparse_file_to_file file_to_file
00:26:12.854   17:10:05	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:26:12.854   17:10:05	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:26:12.854   17:10:05	-- common/autotest_common.sh@10 -- # set +x
00:26:12.854  ************************************
00:26:12.854  START TEST dd_sparse_file_to_file
00:26:12.854  ************************************
00:26:12.854   17:10:05	-- common/autotest_common.sh@1114 -- # file_to_file
00:26:12.854   17:10:05	-- dd/sparse.sh@26 -- # local stat1_s stat1_b
00:26:12.854   17:10:05	-- dd/sparse.sh@27 -- # local stat2_s stat2_b
00:26:12.854   17:10:05	-- dd/sparse.sh@29 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096')
00:26:12.854   17:10:05	-- dd/sparse.sh@29 -- # local -A method_bdev_aio_create_0
00:26:12.854   17:10:05	-- dd/sparse.sh@35 -- # method_bdev_lvol_create_lvstore_1=(['bdev_name']='dd_aio' ['lvs_name']='dd_lvstore')
00:26:12.854   17:10:05	-- dd/sparse.sh@35 -- # local -A method_bdev_lvol_create_lvstore_1
00:26:12.854   17:10:05	-- dd/sparse.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json /dev/fd/62
00:26:12.854    17:10:05	-- dd/sparse.sh@41 -- # gen_conf
00:26:12.854    17:10:05	-- dd/common.sh@31 -- # xtrace_disable
00:26:12.854    17:10:05	-- common/autotest_common.sh@10 -- # set +x
00:26:12.854  {
00:26:12.854    "subsystems": [
00:26:12.854      {
00:26:12.854        "subsystem": "bdev",
00:26:12.854        "config": [
00:26:12.854          {
00:26:12.854            "params": {
00:26:12.854              "block_size": 4096,
00:26:12.854              "filename": "dd_sparse_aio_disk",
00:26:12.854              "name": "dd_aio"
00:26:12.854            },
00:26:12.854            "method": "bdev_aio_create"
00:26:12.854          },
00:26:12.854          {
00:26:12.854            "params": {
00:26:12.854              "lvs_name": "dd_lvstore",
00:26:12.854              "bdev_name": "dd_aio"
00:26:12.854            },
00:26:12.854            "method": "bdev_lvol_create_lvstore"
00:26:12.854          },
00:26:12.854          {
00:26:12.854            "method": "bdev_wait_for_examine"
00:26:12.854          }
00:26:12.854        ]
00:26:12.854      }
00:26:12.854    ]
00:26:12.854  }
00:26:12.854  [2024-11-19 17:10:05.558503] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:26:12.854  [2024-11-19 17:10:05.559120] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145373 ]
00:26:13.113  [2024-11-19 17:10:05.712052] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:13.113  [2024-11-19 17:10:05.760298] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:26:13.113  
[2024-11-19T17:10:06.544Z] Copying: 12/36 [MB] (average 923 MBps)
00:26:13.680  
00:26:13.680    17:10:06	-- dd/sparse.sh@47 -- # stat --printf=%s file_zero1
00:26:13.680   17:10:06	-- dd/sparse.sh@47 -- # stat1_s=37748736
00:26:13.680    17:10:06	-- dd/sparse.sh@48 -- # stat --printf=%s file_zero2
00:26:13.680   17:10:06	-- dd/sparse.sh@48 -- # stat2_s=37748736
00:26:13.680   17:10:06	-- dd/sparse.sh@50 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]]
00:26:13.680    17:10:06	-- dd/sparse.sh@52 -- # stat --printf=%b file_zero1
00:26:13.680   17:10:06	-- dd/sparse.sh@52 -- # stat1_b=24576
00:26:13.680    17:10:06	-- dd/sparse.sh@53 -- # stat --printf=%b file_zero2
00:26:13.680  ************************************
00:26:13.680  END TEST dd_sparse_file_to_file
00:26:13.680  ************************************
00:26:13.680   17:10:06	-- dd/sparse.sh@53 -- # stat2_b=24576
00:26:13.680   17:10:06	-- dd/sparse.sh@55 -- # [[ 24576 == \2\4\5\7\6 ]]
00:26:13.680  
00:26:13.680  real	0m0.831s
00:26:13.680  user	0m0.436s
00:26:13.680  sys	0m0.238s
00:26:13.680   17:10:06	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:26:13.680   17:10:06	-- common/autotest_common.sh@10 -- # set +x
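Both comparisons above matter: equal %s sizes show the full 36 MiB length survived the trip into and back out of the lvstore, while equal %b counts (24576 blocks, i.e. 12 MiB actually allocated) show --sparse carried the holes through instead of materialising zeros. A sketch of the combined pass condition, using the same file names as the test:
# Sketch: the file_to_file pass condition in one expression.
[[ $(stat --printf=%s file_zero1) == "$(stat --printf=%s file_zero2)" ]] &&
[[ $(stat --printf=%b file_zero1) == "$(stat --printf=%b file_zero2)" ]] &&
echo 'length and sparseness both preserved'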
00:26:13.680   17:10:06	-- dd/sparse.sh@121 -- # run_test dd_sparse_file_to_bdev file_to_bdev
00:26:13.680   17:10:06	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:26:13.681   17:10:06	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:26:13.681   17:10:06	-- common/autotest_common.sh@10 -- # set +x
00:26:13.681  ************************************
00:26:13.681  START TEST dd_sparse_file_to_bdev
00:26:13.681  ************************************
00:26:13.681   17:10:06	-- common/autotest_common.sh@1114 -- # file_to_bdev
00:26:13.681   17:10:06	-- dd/sparse.sh@59 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096')
00:26:13.681   17:10:06	-- dd/sparse.sh@59 -- # local -A method_bdev_aio_create_0
00:26:13.681   17:10:06	-- dd/sparse.sh@65 -- # method_bdev_lvol_create_1=(['lvs_name']='dd_lvstore' ['lvol_name']='dd_lvol' ['size']='37748736' ['thin_provision']='true')
00:26:13.681   17:10:06	-- dd/sparse.sh@65 -- # local -A method_bdev_lvol_create_1
00:26:13.681   17:10:06	-- dd/sparse.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json /dev/fd/62
00:26:13.681    17:10:06	-- dd/sparse.sh@73 -- # gen_conf
00:26:13.681    17:10:06	-- dd/common.sh@31 -- # xtrace_disable
00:26:13.681    17:10:06	-- common/autotest_common.sh@10 -- # set +x
00:26:13.681  {
00:26:13.681    "subsystems": [
00:26:13.681      {
00:26:13.681        "subsystem": "bdev",
00:26:13.681        "config": [
00:26:13.681          {
00:26:13.681            "params": {
00:26:13.681              "block_size": 4096,
00:26:13.681              "filename": "dd_sparse_aio_disk",
00:26:13.681              "name": "dd_aio"
00:26:13.681            },
00:26:13.681            "method": "bdev_aio_create"
00:26:13.681          },
00:26:13.681          {
00:26:13.681            "params": {
00:26:13.681              "lvs_name": "dd_lvstore",
00:26:13.681              "lvol_name": "dd_lvol",
00:26:13.681              "size": 37748736,
00:26:13.681              "thin_provision": true
00:26:13.681            },
00:26:13.681            "method": "bdev_lvol_create"
00:26:13.681          },
00:26:13.681          {
00:26:13.681            "method": "bdev_wait_for_examine"
00:26:13.681          }
00:26:13.681        ]
00:26:13.681      }
00:26:13.681    ]
00:26:13.681  }
00:26:13.681  [2024-11-19 17:10:06.451387] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:26:13.681  [2024-11-19 17:10:06.451990] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145426 ]
00:26:13.939  [2024-11-19 17:10:06.606129] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:13.939  [2024-11-19 17:10:06.652982] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:26:13.939  [2024-11-19 17:10:06.730497] vbdev_lvol_rpc.c: 347:rpc_bdev_lvol_create: *WARNING*: vbdev_lvol_rpc_req_size: deprecated feature rpc_bdev_lvol_create/resize req.size to be removed in v23.09
00:26:13.939  
[2024-11-19T17:10:06.803Z] Copying: 12/36 [MB] (average 545 MBps)
[2024-11-19 17:10:06.771138] app.c: 883:log_deprecation_hits: *WARNING*: vbdev_lvol_rpc_req_size: deprecation 'rpc_bdev_lvol_create/resize req.size' scheduled for removal in v23.09 hit 1 times
00:26:14.506  
00:26:14.506  
00:26:14.506  ************************************
00:26:14.506  END TEST dd_sparse_file_to_bdev
00:26:14.506  ************************************
00:26:14.506  
00:26:14.506  real	0m0.724s
00:26:14.506  user	0m0.400s
00:26:14.506  sys	0m0.214s
00:26:14.506   17:10:07	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:26:14.506   17:10:07	-- common/autotest_common.sh@10 -- # set +x
00:26:14.506   17:10:07	-- dd/sparse.sh@122 -- # run_test dd_sparse_bdev_to_file bdev_to_file
00:26:14.506   17:10:07	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:26:14.506   17:10:07	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:26:14.506   17:10:07	-- common/autotest_common.sh@10 -- # set +x
00:26:14.506  ************************************
00:26:14.506  START TEST dd_sparse_bdev_to_file
00:26:14.506  ************************************
00:26:14.506   17:10:07	-- common/autotest_common.sh@1114 -- # bdev_to_file
00:26:14.506   17:10:07	-- dd/sparse.sh@81 -- # local stat2_s stat2_b
00:26:14.506   17:10:07	-- dd/sparse.sh@82 -- # local stat3_s stat3_b
00:26:14.506   17:10:07	-- dd/sparse.sh@84 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096')
00:26:14.506   17:10:07	-- dd/sparse.sh@84 -- # local -A method_bdev_aio_create_0
00:26:14.506   17:10:07	-- dd/sparse.sh@91 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json /dev/fd/62
00:26:14.506    17:10:07	-- dd/sparse.sh@91 -- # gen_conf
00:26:14.506    17:10:07	-- dd/common.sh@31 -- # xtrace_disable
00:26:14.506    17:10:07	-- common/autotest_common.sh@10 -- # set +x
00:26:14.506  {
00:26:14.506    "subsystems": [
00:26:14.506      {
00:26:14.506        "subsystem": "bdev",
00:26:14.506        "config": [
00:26:14.506          {
00:26:14.506            "params": {
00:26:14.506              "block_size": 4096,
00:26:14.506              "filename": "dd_sparse_aio_disk",
00:26:14.506              "name": "dd_aio"
00:26:14.506            },
00:26:14.506            "method": "bdev_aio_create"
00:26:14.506          },
00:26:14.506          {
00:26:14.506            "method": "bdev_wait_for_examine"
00:26:14.506          }
00:26:14.506        ]
00:26:14.506      }
00:26:14.506    ]
00:26:14.506  }
00:26:14.506  [2024-11-19 17:10:07.235158] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:26:14.506  [2024-11-19 17:10:07.235528] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145455 ]
00:26:14.765  [2024-11-19 17:10:07.392558] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:14.765  [2024-11-19 17:10:07.439046] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:26:14.765  
[2024-11-19T17:10:07.886Z] Copying: 12/36 [MB] (average 923 MBps)
00:26:15.022  
00:26:15.022    17:10:07	-- dd/sparse.sh@97 -- # stat --printf=%s file_zero2
00:26:15.023   17:10:07	-- dd/sparse.sh@97 -- # stat2_s=37748736
00:26:15.023    17:10:07	-- dd/sparse.sh@98 -- # stat --printf=%s file_zero3
00:26:15.023   17:10:07	-- dd/sparse.sh@98 -- # stat3_s=37748736
00:26:15.023   17:10:07	-- dd/sparse.sh@100 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]]
00:26:15.023    17:10:07	-- dd/sparse.sh@102 -- # stat --printf=%b file_zero2
00:26:15.281   17:10:07	-- dd/sparse.sh@102 -- # stat2_b=24576
00:26:15.281    17:10:07	-- dd/sparse.sh@103 -- # stat --printf=%b file_zero3
00:26:15.281  ************************************
00:26:15.281  END TEST dd_sparse_bdev_to_file
00:26:15.281  ************************************
00:26:15.281   17:10:07	-- dd/sparse.sh@103 -- # stat3_b=24576
00:26:15.281   17:10:07	-- dd/sparse.sh@105 -- # [[ 24576 == \2\4\5\7\6 ]]
00:26:15.281  
00:26:15.281  real	0m0.709s
00:26:15.281  user	0m0.359s
00:26:15.281  sys	0m0.244s
00:26:15.281   17:10:07	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:26:15.281   17:10:07	-- common/autotest_common.sh@10 -- # set +x
00:26:15.281   17:10:07	-- dd/sparse.sh@1 -- # cleanup
00:26:15.281   17:10:07	-- dd/sparse.sh@11 -- # rm dd_sparse_aio_disk
00:26:15.281   17:10:07	-- dd/sparse.sh@12 -- # rm file_zero1
00:26:15.281   17:10:07	-- dd/sparse.sh@13 -- # rm file_zero2
00:26:15.281   17:10:07	-- dd/sparse.sh@14 -- # rm file_zero3
00:26:15.281  ************************************
00:26:15.281  END TEST spdk_dd_sparse
00:26:15.281  ************************************
00:26:15.281  
00:26:15.281  real	0m2.754s
00:26:15.281  user	0m1.456s
00:26:15.281  sys	0m0.942s
00:26:15.281   17:10:07	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:26:15.281   17:10:07	-- common/autotest_common.sh@10 -- # set +x
00:26:15.281   17:10:08	-- dd/dd.sh@28 -- # run_test spdk_dd_negative /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh
00:26:15.281   17:10:08	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:26:15.281   17:10:08	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:26:15.281   17:10:08	-- common/autotest_common.sh@10 -- # set +x
00:26:15.281  ************************************
00:26:15.281  START TEST spdk_dd_negative
00:26:15.281  ************************************
00:26:15.281   17:10:08	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh
00:26:15.281  * Looking for test storage...
00:26:15.281  * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd
00:26:15.281     17:10:08	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:26:15.281      17:10:08	-- common/autotest_common.sh@1690 -- # lcov --version
00:26:15.281      17:10:08	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:26:15.540     17:10:08	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:26:15.540     17:10:08	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:26:15.540     17:10:08	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:26:15.540     17:10:08	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:26:15.540     17:10:08	-- scripts/common.sh@335 -- # IFS=.-:
00:26:15.540     17:10:08	-- scripts/common.sh@335 -- # read -ra ver1
00:26:15.540     17:10:08	-- scripts/common.sh@336 -- # IFS=.-:
00:26:15.540     17:10:08	-- scripts/common.sh@336 -- # read -ra ver2
00:26:15.540     17:10:08	-- scripts/common.sh@337 -- # local 'op=<'
00:26:15.540     17:10:08	-- scripts/common.sh@339 -- # ver1_l=2
00:26:15.540     17:10:08	-- scripts/common.sh@340 -- # ver2_l=1
00:26:15.540     17:10:08	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:26:15.540     17:10:08	-- scripts/common.sh@343 -- # case "$op" in
00:26:15.540     17:10:08	-- scripts/common.sh@344 -- # : 1
00:26:15.540     17:10:08	-- scripts/common.sh@363 -- # (( v = 0 ))
00:26:15.540     17:10:08	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:26:15.540      17:10:08	-- scripts/common.sh@364 -- # decimal 1
00:26:15.540      17:10:08	-- scripts/common.sh@352 -- # local d=1
00:26:15.540      17:10:08	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:26:15.540      17:10:08	-- scripts/common.sh@354 -- # echo 1
00:26:15.540     17:10:08	-- scripts/common.sh@364 -- # ver1[v]=1
00:26:15.540      17:10:08	-- scripts/common.sh@365 -- # decimal 2
00:26:15.540      17:10:08	-- scripts/common.sh@352 -- # local d=2
00:26:15.540      17:10:08	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:26:15.540      17:10:08	-- scripts/common.sh@354 -- # echo 2
00:26:15.540     17:10:08	-- scripts/common.sh@365 -- # ver2[v]=2
00:26:15.540     17:10:08	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:26:15.540     17:10:08	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:26:15.540     17:10:08	-- scripts/common.sh@367 -- # return 0
00:26:15.540     17:10:08	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:26:15.540     17:10:08	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:26:15.540  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:26:15.540  		--rc genhtml_branch_coverage=1
00:26:15.540  		--rc genhtml_function_coverage=1
00:26:15.540  		--rc genhtml_legend=1
00:26:15.540  		--rc geninfo_all_blocks=1
00:26:15.540  		--rc geninfo_unexecuted_blocks=1
00:26:15.540  		
00:26:15.540  		'
00:26:15.540     17:10:08	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:26:15.540  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:26:15.540  		--rc genhtml_branch_coverage=1
00:26:15.540  		--rc genhtml_function_coverage=1
00:26:15.540  		--rc genhtml_legend=1
00:26:15.540  		--rc geninfo_all_blocks=1
00:26:15.540  		--rc geninfo_unexecuted_blocks=1
00:26:15.540  		
00:26:15.540  		'
00:26:15.540     17:10:08	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:26:15.540  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:26:15.540  		--rc genhtml_branch_coverage=1
00:26:15.540  		--rc genhtml_function_coverage=1
00:26:15.540  		--rc genhtml_legend=1
00:26:15.540  		--rc geninfo_all_blocks=1
00:26:15.540  		--rc geninfo_unexecuted_blocks=1
00:26:15.540  		
00:26:15.540  		'
00:26:15.540     17:10:08	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:26:15.540  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:26:15.540  		--rc genhtml_branch_coverage=1
00:26:15.540  		--rc genhtml_function_coverage=1
00:26:15.540  		--rc genhtml_legend=1
00:26:15.540  		--rc geninfo_all_blocks=1
00:26:15.540  		--rc geninfo_unexecuted_blocks=1
00:26:15.540  		
00:26:15.540  		'
00:26:15.540    17:10:08	-- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:26:15.540     17:10:08	-- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]]
00:26:15.540     17:10:08	-- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:26:15.540     17:10:08	-- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:26:15.540      17:10:08	-- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:26:15.540      17:10:08	-- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:26:15.540      17:10:08	-- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:26:15.540      17:10:08	-- paths/export.sh@5 -- # export PATH
00:26:15.540      17:10:08	-- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:26:15.540   17:10:08	-- dd/negative_dd.sh@101 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
00:26:15.540   17:10:08	-- dd/negative_dd.sh@102 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
00:26:15.540   17:10:08	-- dd/negative_dd.sh@104 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
00:26:15.540   17:10:08	-- dd/negative_dd.sh@105 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
00:26:15.540   17:10:08	-- dd/negative_dd.sh@107 -- # run_test dd_invalid_arguments invalid_arguments
00:26:15.540   17:10:08	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:26:15.540   17:10:08	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:26:15.540   17:10:08	-- common/autotest_common.sh@10 -- # set +x
00:26:15.540  ************************************
00:26:15.540  START TEST dd_invalid_arguments
00:26:15.540  ************************************
00:26:15.540   17:10:08	-- common/autotest_common.sh@1114 -- # invalid_arguments
00:26:15.540   17:10:08	-- dd/negative_dd.sh@12 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob=
00:26:15.540   17:10:08	-- common/autotest_common.sh@650 -- # local es=0
00:26:15.540   17:10:08	-- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob=
00:26:15.540   17:10:08	-- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:26:15.540   17:10:08	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:26:15.540    17:10:08	-- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:26:15.540   17:10:08	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:26:15.540    17:10:08	-- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:26:15.540   17:10:08	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:26:15.540   17:10:08	-- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:26:15.540   17:10:08	-- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]]
00:26:15.540   17:10:08	-- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob=
00:26:15.540  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd: unrecognized option '--ii='
00:26:15.540  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd [options]
00:26:15.540  options:
00:26:15.540   -c, --config <config>     JSON config file (default none)
00:26:15.540       --json <config>       JSON config file (default none)
00:26:15.540       --json-ignore-init-errors
00:26:15.540                             don't exit on invalid config entry
00:26:15.540   -d, --limit-coredump      do not set max coredump size to RLIM_INFINITY
00:26:15.540   -g, --single-file-segments
00:26:15.540                             force creating just one hugetlbfs file
00:26:15.540   -h, --help                show this usage
00:26:15.541   -i, --shm-id <id>         shared memory ID (optional)
00:26:15.541   -m, --cpumask <mask or list>    core mask (like 0xF) or '[]'-embraced core list (like [0,1,10]) for DPDK
00:26:15.541       --lcores <list>       lcore to CPU mapping list. The list is in the format:
00:26:15.541                             <lcores[@CPUs]>[<,lcores[@CPUs]>...]
00:26:15.541                             lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"'
00:26:15.541                             Within the group, '-' is used for range separator,
00:26:15.541                             ',' is used for single number separator.
00:26:15.541                             '( )' can be omitted for single element group,
00:26:15.541                             '@' can be omitted if cpus and lcores have the same value
00:26:15.541   -n, --mem-channels <num>  channel number of memory channels used for DPDK
00:26:15.541   -p, --main-core <id>      main (primary) core for DPDK
00:26:15.541   -r, --rpc-socket <path>   RPC listen address (default /var/tmp/spdk.sock)
00:26:15.541   -s, --mem-size <size>     memory size in MB for DPDK (default: 0MB)
00:26:15.541       --disable-cpumask-locks    Disable CPU core lock files.
00:26:15.541       --silence-noticelog   disable notice level logging to stderr
00:26:15.541       --msg-mempool-size <size>  global message memory pool size in count (default: 262143)
00:26:15.541   -u, --no-pci              disable PCI access
00:26:15.541       --wait-for-rpc        wait for RPCs to initialize subsystems
00:26:15.541       --max-delay <num>     maximum reactor delay (in microseconds)
00:26:15.541   -B, --pci-blocked <bdf>   pci addr to block (can be used more than once)
00:26:15.541   -A, --pci-allowed <bdf>   pci addr to allow (-B and -A cannot be used at the same time)
00:26:15.541   -R, --huge-unlink         unlink huge files after initialization
00:26:15.541   -v, --version             print SPDK version
00:26:15.541       --huge-dir <path>     use a specific hugetlbfs mount to reserve memory from
00:26:15.541       --iova-mode <pa/va>   set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA)
00:26:15.541       --base-virtaddr <addr>      the base virtual address for DPDK (default: 0x200000000000)
00:26:15.541       --num-trace-entries <num>   number of trace entries for each core, must be power of 2, setting 0 to disable trace (default 32768)
00:26:15.541                                   Tracepoints vary in size and can use more than one trace entry.
00:26:15.541       --rpcs-allowed	   comma-separated list of permitted RPCs
00:26:15.541       --env-context         Opaque context for use of the env implementation
00:26:15.541       --vfio-vf-token       VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver
00:26:15.541       --no-huge             run without using hugepages
00:26:15.541   -L, --logflag <flag>    enable log flag (all, accel, accel_dsa, accel_iaa, accel_ioat, aio, app_config, app_rpc, bdev, bdev_concat, bdev_ftl, bdev_malloc, bdev_null, bdev_nvme, bdev_raid, bdev_raid0, bdev_raid1, bdev_raid5f, bdev_raid_sb, blob, blob_esnap, blob_rw, blobfs, blobfs_bdev, blobfs_bdev_rpc, blobfs_rw, ftl_core, ftl_init, gpt_parse, idxd, ioat, iscsi_init, json_util, log, log_rpc, lvol, lvol_rpc, notify_rpc, nvme, nvme_cuse, nvme_vfio, opal, reactor, rpc, rpc_client, sock, sock_posix, thread, trace, vbdev_delay, vbdev_gpt, vbdev_lvol, vbdev_opal, vbdev_passthru, vbdev_split, vbdev_zone_block, vfio_pci, vfio_user, virtio, virtio_blk, virtio_dev, virtio_pci, virtio_user, virtio_vfio_user, vmd)
00:26:15.541   -e, --tpoint-group <group-name>[:<tpoint_mask>]
00:26:15.541                             group_name - tracepoint group name for spdk trace buffers (bdev, ftl, blobfs, dsa, thread, nvme_pcie, iaa, nvme_tcp, bdev_nvme, all)
00:26:15.541                             tpoint_mask - tracepoint mask for enabling individual tpoints inside a tracepoint group. First tpoint inside a group can be enabled by setting tpoint_mask to 1 (e.g. bdev:0x1).
00:26:15.541                              Groups and masks can be combined (e.g. thread,bdev:0x1).
00:26:15.541  [2024-11-19 17:10:08.309332] spdk_dd.c:1460:main: *ERROR*: Invalid arguments
00:26:15.541                              All available tpoints can be found in /include/spdk_internal/trace_defs.h
00:26:15.541       --interrupt-mode      set app to interrupt mode (Warning: CPU usage will be reduced only if all pollers in the app support interrupt mode)
00:26:15.541  [--------- DD Options ---------]
00:26:15.541   --if Input file. Must specify either --if or --ib.
00:26:15.541   --ib Input bdev. Must specify either --if or --ib.
00:26:15.541   --of Output file. Must specify either --of or --ob.
00:26:15.541   --ob Output bdev. Must specify either --of or --ob.
00:26:15.541   --iflag Input file flags.
00:26:15.541   --oflag Output file flags.
00:26:15.541   --bs I/O unit size (default: 4096)
00:26:15.541   --qd Queue depth (default: 2)
00:26:15.541   --count I/O unit count. The number of I/O units to copy. (default: all)
00:26:15.541   --skip Skip this many I/O units at start of input. (default: 0)
00:26:15.541   --seek Skip this many I/O units at start of output. (default: 0)
00:26:15.541   --aio Force usage of AIO. (by default io_uring is used if available)
00:26:15.541   --sparse Enable hole skipping in input target
00:26:15.541   Available iflag and oflag values:
00:26:15.541    append - append mode
00:26:15.541    direct - use direct I/O for data
00:26:15.541    directory - fail unless a directory
00:26:15.541    dsync - use synchronized I/O for data
00:26:15.541    noatime - do not update access time
00:26:15.541    noctty - do not assign controlling terminal from file
00:26:15.541    nofollow - do not follow symlinks
00:26:15.541    nonblock - use non-blocking I/O
00:26:15.541    sync - use synchronized I/O for data and metadata
00:26:15.541  ************************************
00:26:15.541  END TEST dd_invalid_arguments
00:26:15.541  ************************************
00:26:15.541   17:10:08	-- common/autotest_common.sh@653 -- # es=2
00:26:15.541   17:10:08	-- common/autotest_common.sh@661 -- # (( es > 128 ))
00:26:15.541   17:10:08	-- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:26:15.541   17:10:08	-- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:26:15.541  
00:26:15.541  real	0m0.124s
00:26:15.541  user	0m0.080s
00:26:15.541  sys	0m0.042s
00:26:15.541   17:10:08	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:26:15.541   17:10:08	-- common/autotest_common.sh@10 -- # set +x
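Every negative case in this suite follows the pattern above: wrap spdk_dd in the harness's NOT helper so the test passes only when the tool rejects the input, then record the exit status as es (2 for the unrecognized --ii= flag here; the option-conflict cases that follow record 22). A sketch of the inversion idea only, not autotest_common.sh's actual NOT implementation:
# Sketch: invert a command's exit status, as the NOT wrapper's use implies.
not() { ! "$@"; }
not ./build/bin/spdk_dd --ii= --ob= && echo 'invalid flag rejected, test passes'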
00:26:15.799   17:10:08	-- dd/negative_dd.sh@108 -- # run_test dd_double_input double_input
00:26:15.799   17:10:08	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:26:15.799   17:10:08	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:26:15.799   17:10:08	-- common/autotest_common.sh@10 -- # set +x
00:26:15.799  ************************************
00:26:15.799  START TEST dd_double_input
00:26:15.799  ************************************
00:26:15.799   17:10:08	-- common/autotest_common.sh@1114 -- # double_input
00:26:15.799   17:10:08	-- dd/negative_dd.sh@19 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob=
00:26:15.799   17:10:08	-- common/autotest_common.sh@650 -- # local es=0
00:26:15.799   17:10:08	-- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob=
00:26:15.799   17:10:08	-- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:26:15.799   17:10:08	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:26:15.799    17:10:08	-- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:26:15.799   17:10:08	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:26:15.799    17:10:08	-- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:26:15.799   17:10:08	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:26:15.799   17:10:08	-- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:26:15.799   17:10:08	-- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]]
00:26:15.799   17:10:08	-- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob=
00:26:15.799  [2024-11-19 17:10:08.480081] spdk_dd.c:1467:main: *ERROR*: You may specify either --if or --ib, but not both.
00:26:15.799   17:10:08	-- common/autotest_common.sh@653 -- # es=22
00:26:15.799   17:10:08	-- common/autotest_common.sh@661 -- # (( es > 128 ))
00:26:15.799   17:10:08	-- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:26:15.799   17:10:08	-- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:26:15.799  
00:26:15.799  real	0m0.104s
00:26:15.799  user	0m0.046s
00:26:15.799  sys	0m0.056s
00:26:15.799   17:10:08	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:26:15.799   17:10:08	-- common/autotest_common.sh@10 -- # set +x
00:26:15.799  ************************************
00:26:15.799  END TEST dd_double_input
00:26:15.799  ************************************
00:26:15.799   17:10:08	-- dd/negative_dd.sh@109 -- # run_test dd_double_output double_output
00:26:15.799   17:10:08	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:26:15.799   17:10:08	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:26:15.799   17:10:08	-- common/autotest_common.sh@10 -- # set +x
00:26:15.799  ************************************
00:26:15.799  START TEST dd_double_output
00:26:15.799  ************************************
00:26:15.799   17:10:08	-- common/autotest_common.sh@1114 -- # double_output
00:26:15.799   17:10:08	-- dd/negative_dd.sh@27 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob=
00:26:15.799   17:10:08	-- common/autotest_common.sh@650 -- # local es=0
00:26:15.799   17:10:08	-- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob=
00:26:15.799   17:10:08	-- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:26:15.799   17:10:08	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:26:15.799    17:10:08	-- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:26:15.799   17:10:08	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:26:15.799    17:10:08	-- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:26:15.799   17:10:08	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:26:15.799   17:10:08	-- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:26:15.800   17:10:08	-- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]]
00:26:15.800   17:10:08	-- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob=
00:26:15.800  [2024-11-19 17:10:08.643587] spdk_dd.c:1473:main: *ERROR*: You may specify either --of or --ob, but not both.
00:26:16.082   17:10:08	-- common/autotest_common.sh@653 -- # es=22
00:26:16.082   17:10:08	-- common/autotest_common.sh@661 -- # (( es > 128 ))
00:26:16.082   17:10:08	-- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:26:16.082   17:10:08	-- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:26:16.082  
00:26:16.082  real	0m0.103s
00:26:16.082  user	0m0.036s
00:26:16.082  sys	0m0.066s
00:26:16.082   17:10:08	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:26:16.082   17:10:08	-- common/autotest_common.sh@10 -- # set +x
00:26:16.082  ************************************
00:26:16.082  END TEST dd_double_output
00:26:16.082  ************************************
00:26:16.082   17:10:08	-- dd/negative_dd.sh@110 -- # run_test dd_no_input no_input
00:26:16.083   17:10:08	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:26:16.083   17:10:08	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:26:16.083   17:10:08	-- common/autotest_common.sh@10 -- # set +x
00:26:16.083  ************************************
00:26:16.083  START TEST dd_no_input
00:26:16.083  ************************************
00:26:16.083   17:10:08	-- common/autotest_common.sh@1114 -- # no_input
00:26:16.083   17:10:08	-- dd/negative_dd.sh@35 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob=
00:26:16.083   17:10:08	-- common/autotest_common.sh@650 -- # local es=0
00:26:16.083   17:10:08	-- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob=
00:26:16.083   17:10:08	-- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:26:16.083   17:10:08	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:26:16.083    17:10:08	-- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:26:16.083   17:10:08	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:26:16.083    17:10:08	-- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:26:16.083   17:10:08	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:26:16.083   17:10:08	-- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:26:16.083   17:10:08	-- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]]
00:26:16.083   17:10:08	-- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob=
00:26:16.083  [2024-11-19 17:10:08.820205] spdk_dd.c:1479:main: *ERROR*: You must specify either --if or --ib
00:26:16.083   17:10:08	-- common/autotest_common.sh@653 -- # es=22
00:26:16.083   17:10:08	-- common/autotest_common.sh@661 -- # (( es > 128 ))
00:26:16.083   17:10:08	-- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:26:16.083   17:10:08	-- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:26:16.083  
00:26:16.083  real	0m0.122s
00:26:16.083  user	0m0.061s
00:26:16.083  sys	0m0.059s
00:26:16.083   17:10:08	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:26:16.083   17:10:08	-- common/autotest_common.sh@10 -- # set +x
00:26:16.083  ************************************
00:26:16.083  END TEST dd_no_input
00:26:16.083  ************************************
00:26:16.345   17:10:08	-- dd/negative_dd.sh@111 -- # run_test dd_no_output no_output
00:26:16.345   17:10:08	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:26:16.345   17:10:08	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:26:16.345   17:10:08	-- common/autotest_common.sh@10 -- # set +x
00:26:16.345  ************************************
00:26:16.345  START TEST dd_no_output
00:26:16.345  ************************************
00:26:16.345   17:10:08	-- common/autotest_common.sh@1114 -- # no_output
00:26:16.345   17:10:08	-- dd/negative_dd.sh@41 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
00:26:16.345   17:10:08	-- common/autotest_common.sh@650 -- # local es=0
00:26:16.345   17:10:08	-- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
00:26:16.345   17:10:08	-- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:26:16.345   17:10:08	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:26:16.345    17:10:08	-- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:26:16.345   17:10:08	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:26:16.345    17:10:08	-- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:26:16.345   17:10:08	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:26:16.345   17:10:08	-- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:26:16.345   17:10:08	-- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]]
00:26:16.345   17:10:08	-- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
00:26:16.345  [2024-11-19 17:10:08.989196] spdk_dd.c:1485:main: *ERROR*: You must specify either --of or --ob
00:26:16.345   17:10:09	-- common/autotest_common.sh@653 -- # es=22
00:26:16.345   17:10:09	-- common/autotest_common.sh@661 -- # (( es > 128 ))
00:26:16.345   17:10:09	-- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:26:16.345   17:10:09	-- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:26:16.345  
00:26:16.345  real	0m0.098s
00:26:16.345  user	0m0.040s
00:26:16.345  sys	0m0.056s
00:26:16.345   17:10:09	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:26:16.345   17:10:09	-- common/autotest_common.sh@10 -- # set +x
00:26:16.345  ************************************
00:26:16.345  END TEST dd_no_output
00:26:16.345  ************************************
00:26:16.345   17:10:09	-- dd/negative_dd.sh@112 -- # run_test dd_wrong_blocksize wrong_blocksize
00:26:16.345   17:10:09	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:26:16.345   17:10:09	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:26:16.345   17:10:09	-- common/autotest_common.sh@10 -- # set +x
00:26:16.345  ************************************
00:26:16.345  START TEST dd_wrong_blocksize
00:26:16.345  ************************************
00:26:16.345   17:10:09	-- common/autotest_common.sh@1114 -- # wrong_blocksize
00:26:16.345   17:10:09	-- dd/negative_dd.sh@47 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0
00:26:16.345   17:10:09	-- common/autotest_common.sh@650 -- # local es=0
00:26:16.345   17:10:09	-- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0
00:26:16.345   17:10:09	-- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:26:16.345   17:10:09	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:26:16.345    17:10:09	-- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:26:16.345   17:10:09	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:26:16.345    17:10:09	-- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:26:16.345   17:10:09	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:26:16.345   17:10:09	-- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:26:16.345   17:10:09	-- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]]
00:26:16.345   17:10:09	-- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0
00:26:16.345  [2024-11-19 17:10:09.161648] spdk_dd.c:1491:main: *ERROR*: Invalid --bs value
00:26:16.603   17:10:09	-- common/autotest_common.sh@653 -- # es=22
00:26:16.603   17:10:09	-- common/autotest_common.sh@661 -- # (( es > 128 ))
00:26:16.603   17:10:09	-- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:26:16.603   17:10:09	-- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:26:16.603  
00:26:16.603  real	0m0.134s
00:26:16.603  user	0m0.069s
00:26:16.603  sys	0m0.062s
00:26:16.603   17:10:09	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:26:16.603   17:10:09	-- common/autotest_common.sh@10 -- # set +x
00:26:16.603  ************************************
00:26:16.603  END TEST dd_wrong_blocksize
00:26:16.603  ************************************
00:26:16.603   17:10:09	-- dd/negative_dd.sh@113 -- # run_test dd_smaller_blocksize smaller_blocksize
00:26:16.603   17:10:09	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:26:16.603   17:10:09	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:26:16.603   17:10:09	-- common/autotest_common.sh@10 -- # set +x
00:26:16.603  ************************************
00:26:16.603  START TEST dd_smaller_blocksize
00:26:16.603  ************************************
00:26:16.603   17:10:09	-- common/autotest_common.sh@1114 -- # smaller_blocksize
00:26:16.603   17:10:09	-- dd/negative_dd.sh@55 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999
00:26:16.603   17:10:09	-- common/autotest_common.sh@650 -- # local es=0
00:26:16.603   17:10:09	-- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999
00:26:16.603   17:10:09	-- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:26:16.603   17:10:09	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:26:16.603    17:10:09	-- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:26:16.603   17:10:09	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:26:16.603    17:10:09	-- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:26:16.603   17:10:09	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:26:16.603   17:10:09	-- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:26:16.603   17:10:09	-- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]]
00:26:16.603   17:10:09	-- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999
00:26:16.603  [2024-11-19 17:10:09.357994] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:26:16.603  [2024-11-19 17:10:09.358544] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145732 ]
00:26:16.861  [2024-11-19 17:10:09.515837] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:16.861  [2024-11-19 17:10:09.565817] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:26:16.861  EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list
00:26:17.119  [2024-11-19 17:10:09.742332] spdk_dd.c:1168:dd_run: *ERROR*: Cannot allocate memory - try smaller block size value
00:26:17.119  [2024-11-19 17:10:09.742697] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:26:17.119  [2024-11-19 17:10:09.859814] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy
00:26:17.377   17:10:09	-- common/autotest_common.sh@653 -- # es=244
00:26:17.377   17:10:09	-- common/autotest_common.sh@661 -- # (( es > 128 ))
00:26:17.377   17:10:09	-- common/autotest_common.sh@662 -- # es=116
00:26:17.377   17:10:09	-- common/autotest_common.sh@663 -- # case "$es" in
00:26:17.377   17:10:09	-- common/autotest_common.sh@670 -- # es=1
00:26:17.377   17:10:10	-- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:26:17.377  
00:26:17.377  real	0m0.709s
00:26:17.377  user	0m0.358s
00:26:17.377  sys	0m0.248s
00:26:17.377   17:10:10	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:26:17.377   17:10:10	-- common/autotest_common.sh@10 -- # set +x
00:26:17.377  ************************************
00:26:17.377  END TEST dd_smaller_blocksize
00:26:17.377  ************************************
00:26:17.377   17:10:10	-- dd/negative_dd.sh@114 -- # run_test dd_invalid_count invalid_count
00:26:17.377   17:10:10	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:26:17.377   17:10:10	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:26:17.377   17:10:10	-- common/autotest_common.sh@10 -- # set +x
00:26:17.377  ************************************
00:26:17.377  START TEST dd_invalid_count
00:26:17.377  ************************************
00:26:17.377   17:10:10	-- common/autotest_common.sh@1114 -- # invalid_count
00:26:17.377   17:10:10	-- dd/negative_dd.sh@63 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9
00:26:17.377   17:10:10	-- common/autotest_common.sh@650 -- # local es=0
00:26:17.377   17:10:10	-- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9
00:26:17.377   17:10:10	-- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:26:17.377   17:10:10	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:26:17.377    17:10:10	-- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:26:17.377   17:10:10	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:26:17.377    17:10:10	-- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:26:17.377   17:10:10	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:26:17.377   17:10:10	-- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:26:17.377   17:10:10	-- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]]
00:26:17.377   17:10:10	-- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9
00:26:17.377  [2024-11-19 17:10:10.131881] spdk_dd.c:1497:main: *ERROR*: Invalid --count value
00:26:17.377   17:10:10	-- common/autotest_common.sh@653 -- # es=22
00:26:17.377   17:10:10	-- common/autotest_common.sh@661 -- # (( es > 128 ))
00:26:17.377   17:10:10	-- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:26:17.377   17:10:10	-- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:26:17.377  
00:26:17.377  real	0m0.115s
00:26:17.377  user	0m0.048s
00:26:17.377  sys	0m0.064s
00:26:17.377   17:10:10	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:26:17.377   17:10:10	-- common/autotest_common.sh@10 -- # set +x
00:26:17.377  ************************************
00:26:17.377  END TEST dd_invalid_count
00:26:17.377  ************************************
00:26:17.636   17:10:10	-- dd/negative_dd.sh@115 -- # run_test dd_invalid_oflag invalid_oflag
00:26:17.636   17:10:10	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:26:17.636   17:10:10	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:26:17.636   17:10:10	-- common/autotest_common.sh@10 -- # set +x
00:26:17.636  ************************************
00:26:17.636  START TEST dd_invalid_oflag
00:26:17.636  ************************************
00:26:17.636   17:10:10	-- common/autotest_common.sh@1114 -- # invalid_oflag
00:26:17.636   17:10:10	-- dd/negative_dd.sh@71 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0
00:26:17.636   17:10:10	-- common/autotest_common.sh@650 -- # local es=0
00:26:17.636   17:10:10	-- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0
00:26:17.636   17:10:10	-- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:26:17.636   17:10:10	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:26:17.636    17:10:10	-- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:26:17.636   17:10:10	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:26:17.636    17:10:10	-- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:26:17.636   17:10:10	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:26:17.636   17:10:10	-- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:26:17.636   17:10:10	-- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]]
00:26:17.636   17:10:10	-- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0
00:26:17.636  [2024-11-19 17:10:10.314115] spdk_dd.c:1503:main: *ERROR*: --oflags may be used only with --of
00:26:17.636   17:10:10	-- common/autotest_common.sh@653 -- # es=22
00:26:17.636   17:10:10	-- common/autotest_common.sh@661 -- # (( es > 128 ))
00:26:17.636   17:10:10	-- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:26:17.636   17:10:10	-- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:26:17.636  
00:26:17.636  real	0m0.121s
00:26:17.636  user	0m0.046s
00:26:17.636  sys	0m0.073s
00:26:17.636   17:10:10	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:26:17.636   17:10:10	-- common/autotest_common.sh@10 -- # set +x
00:26:17.636  ************************************
00:26:17.636  END TEST dd_invalid_oflag
00:26:17.636  ************************************
00:26:17.636   17:10:10	-- dd/negative_dd.sh@116 -- # run_test dd_invalid_iflag invalid_iflag
00:26:17.636   17:10:10	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:26:17.636   17:10:10	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:26:17.636   17:10:10	-- common/autotest_common.sh@10 -- # set +x
00:26:17.636  ************************************
00:26:17.636  START TEST dd_invalid_iflag
00:26:17.636  ************************************
00:26:17.636   17:10:10	-- common/autotest_common.sh@1114 -- # invalid_iflag
00:26:17.636   17:10:10	-- dd/negative_dd.sh@79 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0
00:26:17.636   17:10:10	-- common/autotest_common.sh@650 -- # local es=0
00:26:17.636   17:10:10	-- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0
00:26:17.636   17:10:10	-- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:26:17.636   17:10:10	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:26:17.636    17:10:10	-- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:26:17.636   17:10:10	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:26:17.636    17:10:10	-- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:26:17.636   17:10:10	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:26:17.636   17:10:10	-- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:26:17.636   17:10:10	-- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]]
00:26:17.636   17:10:10	-- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0
00:26:17.895  [2024-11-19 17:10:10.502596] spdk_dd.c:1509:main: *ERROR*: --iflags may be used only with --if
00:26:17.895   17:10:10	-- common/autotest_common.sh@653 -- # es=22
00:26:17.895   17:10:10	-- common/autotest_common.sh@661 -- # (( es > 128 ))
00:26:17.895   17:10:10	-- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:26:17.895   17:10:10	-- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:26:17.895  
00:26:17.895  real	0m0.123s
00:26:17.895  user	0m0.058s
00:26:17.895  sys	0m0.062s
00:26:17.895   17:10:10	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:26:17.895   17:10:10	-- common/autotest_common.sh@10 -- # set +x
00:26:17.895  ************************************
00:26:17.895  END TEST dd_invalid_iflag
00:26:17.895  ************************************
00:26:17.895   17:10:10	-- dd/negative_dd.sh@117 -- # run_test dd_unknown_flag unknown_flag
00:26:17.895   17:10:10	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:26:17.895   17:10:10	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:26:17.895   17:10:10	-- common/autotest_common.sh@10 -- # set +x
00:26:17.895  ************************************
00:26:17.895  START TEST dd_unknown_flag
00:26:17.895  ************************************
00:26:17.895   17:10:10	-- common/autotest_common.sh@1114 -- # unknown_flag
00:26:17.895   17:10:10	-- dd/negative_dd.sh@87 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1
00:26:17.895   17:10:10	-- common/autotest_common.sh@650 -- # local es=0
00:26:17.895   17:10:10	-- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1
00:26:17.895   17:10:10	-- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:26:17.895   17:10:10	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:26:17.895    17:10:10	-- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:26:17.895   17:10:10	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:26:17.895    17:10:10	-- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:26:17.895   17:10:10	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:26:17.895   17:10:10	-- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:26:17.895   17:10:10	-- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]]
00:26:17.895   17:10:10	-- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1
00:26:17.895  [2024-11-19 17:10:10.695678] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:26:17.895  [2024-11-19 17:10:10.696411] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145843 ]
00:26:18.153  [2024-11-19 17:10:10.856238] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:18.153  [2024-11-19 17:10:10.909358] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:26:18.153  [2024-11-19 17:10:10.981138] spdk_dd.c: 985:parse_flags: *ERROR*: Unknown file flag: -1
00:26:18.153  [2024-11-19 17:10:10.981494] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1: Not a directory
00:26:18.153  [2024-11-19 17:10:10.981637] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1: Not a directory
00:26:18.154  [2024-11-19 17:10:10.981753] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:26:18.413  [2024-11-19 17:10:11.103754] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy
00:26:18.413   17:10:11	-- common/autotest_common.sh@653 -- # es=236
00:26:18.413   17:10:11	-- common/autotest_common.sh@661 -- # (( es > 128 ))
00:26:18.413   17:10:11	-- common/autotest_common.sh@662 -- # es=108
00:26:18.413   17:10:11	-- common/autotest_common.sh@663 -- # case "$es" in
00:26:18.413   17:10:11	-- common/autotest_common.sh@670 -- # es=1
00:26:18.413   17:10:11	-- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:26:18.413  
00:26:18.413  real	0m0.628s
00:26:18.413  user	0m0.290s
00:26:18.413  sys	0m0.234s
00:26:18.413   17:10:11	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:26:18.413   17:10:11	-- common/autotest_common.sh@10 -- # set +x
00:26:18.413  ************************************
00:26:18.413  END TEST dd_unknown_flag
00:26:18.413  ************************************
00:26:18.670   17:10:11	-- dd/negative_dd.sh@118 -- # run_test dd_invalid_json invalid_json
00:26:18.670   17:10:11	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:26:18.670   17:10:11	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:26:18.670   17:10:11	-- common/autotest_common.sh@10 -- # set +x
00:26:18.670  ************************************
00:26:18.670  START TEST dd_invalid_json
00:26:18.670  ************************************
00:26:18.670   17:10:11	-- common/autotest_common.sh@1114 -- # invalid_json
00:26:18.670   17:10:11	-- dd/negative_dd.sh@95 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62
00:26:18.670    17:10:11	-- dd/negative_dd.sh@95 -- # :
00:26:18.670   17:10:11	-- common/autotest_common.sh@650 -- # local es=0
00:26:18.670   17:10:11	-- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62
00:26:18.670   17:10:11	-- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:26:18.670   17:10:11	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:26:18.670    17:10:11	-- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:26:18.670   17:10:11	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:26:18.670    17:10:11	-- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:26:18.670   17:10:11	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:26:18.670   17:10:11	-- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:26:18.670   17:10:11	-- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]]
00:26:18.670   17:10:11	-- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62
00:26:18.670  [2024-11-19 17:10:11.371515] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:26:18.670  [2024-11-19 17:10:11.371946] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145881 ]
00:26:18.929  [2024-11-19 17:10:11.527162] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:18.929  [2024-11-19 17:10:11.574220] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:26:18.929  [2024-11-19 17:10:11.574612] json_config.c: 529:app_json_config_read: *ERROR*: Parsing JSON configuration failed (-2)
00:26:18.929  [2024-11-19 17:10:11.574790] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:26:18.929  [2024-11-19 17:10:11.575028] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy
00:26:18.929  ************************************
00:26:18.929  END TEST dd_invalid_json
00:26:18.929  ************************************
00:26:18.929   17:10:11	-- common/autotest_common.sh@653 -- # es=234
00:26:18.929   17:10:11	-- common/autotest_common.sh@661 -- # (( es > 128 ))
00:26:18.929   17:10:11	-- common/autotest_common.sh@662 -- # es=106
00:26:18.929   17:10:11	-- common/autotest_common.sh@663 -- # case "$es" in
00:26:18.929   17:10:11	-- common/autotest_common.sh@670 -- # es=1
00:26:18.929   17:10:11	-- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:26:18.929  
00:26:18.929  real	0m0.412s
00:26:18.929  user	0m0.185s
00:26:18.929  sys	0m0.127s
00:26:18.929   17:10:11	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:26:18.929   17:10:11	-- common/autotest_common.sh@10 -- # set +x
00:26:18.929  ************************************
00:26:18.929  END TEST spdk_dd_negative
00:26:18.929  ************************************
00:26:18.929  
00:26:18.929  real	0m3.742s
00:26:18.929  user	0m1.768s
00:26:18.929  sys	0m1.652s
00:26:18.929   17:10:11	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:26:18.929   17:10:11	-- common/autotest_common.sh@10 -- # set +x
00:26:19.187  ************************************
00:26:19.187  END TEST spdk_dd
00:26:19.187  ************************************
00:26:19.187  
00:26:19.187  real	1m5.873s
00:26:19.187  user	0m36.522s
00:26:19.187  sys	0m19.094s
00:26:19.187   17:10:11	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:26:19.187   17:10:11	-- common/autotest_common.sh@10 -- # set +x
00:26:19.187   17:10:11	-- spdk/autotest.sh@204 -- # '[' 1 -eq 1 ']'
00:26:19.187   17:10:11	-- spdk/autotest.sh@205 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme
00:26:19.187   17:10:11	-- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']'
00:26:19.187   17:10:11	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:26:19.187   17:10:11	-- common/autotest_common.sh@10 -- # set +x
00:26:19.187  ************************************
00:26:19.187  START TEST blockdev_nvme
00:26:19.187  ************************************
00:26:19.187   17:10:11	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme
00:26:19.187  * Looking for test storage...
00:26:19.187  * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev
00:26:19.187    17:10:11	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:26:19.187     17:10:11	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:26:19.187     17:10:11	-- common/autotest_common.sh@1690 -- # lcov --version
00:26:19.446    17:10:12	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:26:19.446    17:10:12	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:26:19.446    17:10:12	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:26:19.447    17:10:12	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:26:19.447    17:10:12	-- scripts/common.sh@335 -- # IFS=.-:
00:26:19.447    17:10:12	-- scripts/common.sh@335 -- # read -ra ver1
00:26:19.447    17:10:12	-- scripts/common.sh@336 -- # IFS=.-:
00:26:19.447    17:10:12	-- scripts/common.sh@336 -- # read -ra ver2
00:26:19.447    17:10:12	-- scripts/common.sh@337 -- # local 'op=<'
00:26:19.447    17:10:12	-- scripts/common.sh@339 -- # ver1_l=2
00:26:19.447    17:10:12	-- scripts/common.sh@340 -- # ver2_l=1
00:26:19.447    17:10:12	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:26:19.447    17:10:12	-- scripts/common.sh@343 -- # case "$op" in
00:26:19.447    17:10:12	-- scripts/common.sh@344 -- # : 1
00:26:19.447    17:10:12	-- scripts/common.sh@363 -- # (( v = 0 ))
00:26:19.447    17:10:12	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:26:19.447     17:10:12	-- scripts/common.sh@364 -- # decimal 1
00:26:19.447     17:10:12	-- scripts/common.sh@352 -- # local d=1
00:26:19.447     17:10:12	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:26:19.447     17:10:12	-- scripts/common.sh@354 -- # echo 1
00:26:19.447    17:10:12	-- scripts/common.sh@364 -- # ver1[v]=1
00:26:19.447     17:10:12	-- scripts/common.sh@365 -- # decimal 2
00:26:19.447     17:10:12	-- scripts/common.sh@352 -- # local d=2
00:26:19.447     17:10:12	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:26:19.447     17:10:12	-- scripts/common.sh@354 -- # echo 2
00:26:19.447    17:10:12	-- scripts/common.sh@365 -- # ver2[v]=2
00:26:19.447    17:10:12	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:26:19.447    17:10:12	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:26:19.447    17:10:12	-- scripts/common.sh@367 -- # return 0
00:26:19.447    17:10:12	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:26:19.447    17:10:12	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:26:19.447  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:26:19.447  		--rc genhtml_branch_coverage=1
00:26:19.447  		--rc genhtml_function_coverage=1
00:26:19.447  		--rc genhtml_legend=1
00:26:19.447  		--rc geninfo_all_blocks=1
00:26:19.447  		--rc geninfo_unexecuted_blocks=1
00:26:19.447  		
00:26:19.447  		'
00:26:19.447    17:10:12	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:26:19.447  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:26:19.447  		--rc genhtml_branch_coverage=1
00:26:19.447  		--rc genhtml_function_coverage=1
00:26:19.447  		--rc genhtml_legend=1
00:26:19.447  		--rc geninfo_all_blocks=1
00:26:19.447  		--rc geninfo_unexecuted_blocks=1
00:26:19.447  		
00:26:19.447  		'
00:26:19.447    17:10:12	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:26:19.447  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:26:19.447  		--rc genhtml_branch_coverage=1
00:26:19.447  		--rc genhtml_function_coverage=1
00:26:19.447  		--rc genhtml_legend=1
00:26:19.447  		--rc geninfo_all_blocks=1
00:26:19.447  		--rc geninfo_unexecuted_blocks=1
00:26:19.447  		
00:26:19.447  		'
00:26:19.447    17:10:12	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:26:19.447  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:26:19.447  		--rc genhtml_branch_coverage=1
00:26:19.447  		--rc genhtml_function_coverage=1
00:26:19.447  		--rc genhtml_legend=1
00:26:19.447  		--rc geninfo_all_blocks=1
00:26:19.447  		--rc geninfo_unexecuted_blocks=1
00:26:19.447  		
00:26:19.447  		'
00:26:19.447   17:10:12	-- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh
00:26:19.447    17:10:12	-- bdev/nbd_common.sh@6 -- # set -e
00:26:19.447   17:10:12	-- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd
00:26:19.447   17:10:12	-- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
00:26:19.447   17:10:12	-- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json
00:26:19.447   17:10:12	-- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json
00:26:19.447   17:10:12	-- bdev/blockdev.sh@18 -- # :
00:26:19.447   17:10:12	-- bdev/blockdev.sh@668 -- # QOS_DEV_1=Malloc_0
00:26:19.447   17:10:12	-- bdev/blockdev.sh@669 -- # QOS_DEV_2=Null_1
00:26:19.447   17:10:12	-- bdev/blockdev.sh@670 -- # QOS_RUN_TIME=5
00:26:19.447    17:10:12	-- bdev/blockdev.sh@672 -- # uname -s
00:26:19.447   17:10:12	-- bdev/blockdev.sh@672 -- # '[' Linux = Linux ']'
00:26:19.447   17:10:12	-- bdev/blockdev.sh@674 -- # PRE_RESERVED_MEM=0
00:26:19.447   17:10:12	-- bdev/blockdev.sh@680 -- # test_type=nvme
00:26:19.447   17:10:12	-- bdev/blockdev.sh@681 -- # crypto_device=
00:26:19.447   17:10:12	-- bdev/blockdev.sh@682 -- # dek=
00:26:19.447   17:10:12	-- bdev/blockdev.sh@683 -- # env_ctx=
00:26:19.447   17:10:12	-- bdev/blockdev.sh@684 -- # wait_for_rpc=
00:26:19.447   17:10:12	-- bdev/blockdev.sh@685 -- # '[' -n '' ']'
00:26:19.447   17:10:12	-- bdev/blockdev.sh@688 -- # [[ nvme == bdev ]]
00:26:19.447   17:10:12	-- bdev/blockdev.sh@688 -- # [[ nvme == crypto_* ]]
00:26:19.447   17:10:12	-- bdev/blockdev.sh@691 -- # start_spdk_tgt
00:26:19.447   17:10:12	-- bdev/blockdev.sh@45 -- # spdk_tgt_pid=145972
00:26:19.447   17:10:12	-- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' ''
00:26:19.447   17:10:12	-- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT
00:26:19.447   17:10:12	-- bdev/blockdev.sh@47 -- # waitforlisten 145972
00:26:19.447   17:10:12	-- common/autotest_common.sh@829 -- # '[' -z 145972 ']'
00:26:19.447   17:10:12	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:26:19.447   17:10:12	-- common/autotest_common.sh@834 -- # local max_retries=100
00:26:19.447   17:10:12	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:26:19.447  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:26:19.447   17:10:12	-- common/autotest_common.sh@838 -- # xtrace_disable
00:26:19.447   17:10:12	-- common/autotest_common.sh@10 -- # set +x
00:26:19.447  [2024-11-19 17:10:12.163466] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:26:19.447  [2024-11-19 17:10:12.163994] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145972 ]
00:26:19.706  [2024-11-19 17:10:12.324967] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:19.706  [2024-11-19 17:10:12.381571] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:26:19.706  [2024-11-19 17:10:12.381940] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:26:20.272   17:10:13	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:26:20.272   17:10:13	-- common/autotest_common.sh@862 -- # return 0
00:26:20.272   17:10:13	-- bdev/blockdev.sh@692 -- # case "$test_type" in
00:26:20.272   17:10:13	-- bdev/blockdev.sh@697 -- # setup_nvme_conf
00:26:20.272   17:10:13	-- bdev/blockdev.sh@79 -- # local json
00:26:20.272   17:10:13	-- bdev/blockdev.sh@80 -- # mapfile -t json
00:26:20.272    17:10:13	-- bdev/blockdev.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
00:26:20.533   17:10:13	-- bdev/blockdev.sh@81 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:06.0" } } ] }'\'''
00:26:20.533   17:10:13	-- common/autotest_common.sh@561 -- # xtrace_disable
00:26:20.533   17:10:13	-- common/autotest_common.sh@10 -- # set +x
00:26:20.533   17:10:13	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:20.533   17:10:13	-- bdev/blockdev.sh@735 -- # rpc_cmd bdev_wait_for_examine
00:26:20.533   17:10:13	-- common/autotest_common.sh@561 -- # xtrace_disable
00:26:20.533   17:10:13	-- common/autotest_common.sh@10 -- # set +x
00:26:20.533   17:10:13	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:20.533   17:10:13	-- bdev/blockdev.sh@738 -- # cat
00:26:20.533    17:10:13	-- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n accel
00:26:20.533    17:10:13	-- common/autotest_common.sh@561 -- # xtrace_disable
00:26:20.533    17:10:13	-- common/autotest_common.sh@10 -- # set +x
00:26:20.533    17:10:13	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:20.533    17:10:13	-- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n bdev
00:26:20.533    17:10:13	-- common/autotest_common.sh@561 -- # xtrace_disable
00:26:20.533    17:10:13	-- common/autotest_common.sh@10 -- # set +x
00:26:20.533    17:10:13	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:20.533    17:10:13	-- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n iobuf
00:26:20.533    17:10:13	-- common/autotest_common.sh@561 -- # xtrace_disable
00:26:20.533    17:10:13	-- common/autotest_common.sh@10 -- # set +x
00:26:20.533    17:10:13	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:20.533   17:10:13	-- bdev/blockdev.sh@746 -- # mapfile -t bdevs
00:26:20.533    17:10:13	-- bdev/blockdev.sh@746 -- # rpc_cmd bdev_get_bdevs
00:26:20.533    17:10:13	-- bdev/blockdev.sh@746 -- # jq -r '.[] | select(.claimed == false)'
00:26:20.533    17:10:13	-- common/autotest_common.sh@561 -- # xtrace_disable
00:26:20.533    17:10:13	-- common/autotest_common.sh@10 -- # set +x
00:26:20.533    17:10:13	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:20.533   17:10:13	-- bdev/blockdev.sh@747 -- # mapfile -t bdevs_name
00:26:20.533    17:10:13	-- bdev/blockdev.sh@747 -- # jq -r .name
00:26:20.533    17:10:13	-- bdev/blockdev.sh@747 -- # printf '%s\n' '{' '  "name": "Nvme0n1",' '  "aliases": [' '    "b37001ce-0af6-415d-bf2a-a88b6a6cf796"' '  ],' '  "product_name": "NVMe disk",' '  "block_size": 4096,' '  "num_blocks": 1310720,' '  "uuid": "b37001ce-0af6-415d-bf2a-a88b6a6cf796",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "write_zeroes": true,' '    "flush": true,' '    "reset": true,' '    "compare": true,' '    "compare_and_write": false,' '    "abort": true,' '    "nvme_admin": true,' '    "nvme_io": true' '  },' '  "driver_specific": {' '    "nvme": [' '      {' '        "pci_address": "0000:00:06.0",' '        "trid": {' '          "trtype": "PCIe",' '          "traddr": "0000:00:06.0"' '        },' '        "ctrlr_data": {' '          "cntlid": 0,' '          "vendor_id": "0x1b36",' '          "model_number": "QEMU NVMe Ctrl",' '          "serial_number": "12340",' '          "firmware_revision": "8.0.0",' '          "subnqn": "nqn.2019-08.org.qemu:12340",' '          "oacs": {' '            "security": 0,' '            "format": 1,' '            "firmware": 0,' '            "ns_manage": 1' '          },' '          "multi_ctrlr": false,' '          "ana_reporting": false' '        },' '        "vs": {' '          "nvme_version": "1.4"' '        },' '        "ns_data": {' '          "id": 1,' '          "can_share": false' '        }' '      }' '    ],' '    "mp_policy": "active_passive"' '  }' '}'
00:26:20.792   17:10:13	-- bdev/blockdev.sh@748 -- # bdev_list=("${bdevs_name[@]}")
00:26:20.792   17:10:13	-- bdev/blockdev.sh@750 -- # hello_world_bdev=Nvme0n1
00:26:20.792   17:10:13	-- bdev/blockdev.sh@751 -- # trap - SIGINT SIGTERM EXIT
00:26:20.792   17:10:13	-- bdev/blockdev.sh@752 -- # killprocess 145972
00:26:20.792   17:10:13	-- common/autotest_common.sh@936 -- # '[' -z 145972 ']'
00:26:20.792   17:10:13	-- common/autotest_common.sh@940 -- # kill -0 145972
00:26:20.792    17:10:13	-- common/autotest_common.sh@941 -- # uname
00:26:20.792   17:10:13	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:26:20.792    17:10:13	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 145972
00:26:20.792   17:10:13	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:26:20.792   17:10:13	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:26:20.792   17:10:13	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 145972'
00:26:20.792  killing process with pid 145972
00:26:20.792   17:10:13	-- common/autotest_common.sh@955 -- # kill 145972
00:26:20.792   17:10:13	-- common/autotest_common.sh@960 -- # wait 145972
00:26:21.051   17:10:13	-- bdev/blockdev.sh@756 -- # trap cleanup SIGINT SIGTERM EXIT
00:26:21.051   17:10:13	-- bdev/blockdev.sh@758 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 ''
00:26:21.051   17:10:13	-- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']'
00:26:21.051   17:10:13	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:26:21.051   17:10:13	-- common/autotest_common.sh@10 -- # set +x
00:26:21.051  ************************************
00:26:21.051  START TEST bdev_hello_world
00:26:21.051  ************************************
00:26:21.051   17:10:13	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 ''
00:26:21.310  [2024-11-19 17:10:13.906700] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:26:21.310  [2024-11-19 17:10:13.907731] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146044 ]
00:26:21.310  [2024-11-19 17:10:14.055214] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:21.310  [2024-11-19 17:10:14.106738] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:26:21.570  [2024-11-19 17:10:14.301356] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application
00:26:21.570  [2024-11-19 17:10:14.301642] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1
00:26:21.570  [2024-11-19 17:10:14.301731] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel
00:26:21.570  [2024-11-19 17:10:14.304244] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev
00:26:21.570  [2024-11-19 17:10:14.304779] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully
00:26:21.570  [2024-11-19 17:10:14.304929] hello_bdev.c:  84:hello_read: *NOTICE*: Reading io
00:26:21.570  [2024-11-19 17:10:14.305214] hello_bdev.c:  65:read_complete: *NOTICE*: Read string from bdev : Hello World!
00:26:21.570  
00:26:21.570  [2024-11-19 17:10:14.305345] hello_bdev.c:  74:read_complete: *NOTICE*: Stopping app
00:26:21.829  ************************************
00:26:21.829  END TEST bdev_hello_world
00:26:21.829  ************************************
00:26:21.829  
00:26:21.829  real	0m0.714s
00:26:21.829  user	0m0.442s
00:26:21.829  sys	0m0.170s
00:26:21.829   17:10:14	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:26:21.829   17:10:14	-- common/autotest_common.sh@10 -- # set +x
00:26:21.829   17:10:14	-- bdev/blockdev.sh@759 -- # run_test bdev_bounds bdev_bounds ''
00:26:21.829   17:10:14	-- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']'
00:26:21.829   17:10:14	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:26:21.829   17:10:14	-- common/autotest_common.sh@10 -- # set +x
00:26:21.829  ************************************
00:26:21.829  START TEST bdev_bounds
00:26:21.829  ************************************
00:26:21.829   17:10:14	-- common/autotest_common.sh@1114 -- # bdev_bounds ''
00:26:21.829   17:10:14	-- bdev/blockdev.sh@288 -- # bdevio_pid=146075
00:26:21.829   17:10:14	-- bdev/blockdev.sh@287 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json ''
00:26:21.829   17:10:14	-- bdev/blockdev.sh@289 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT
00:26:21.829   17:10:14	-- bdev/blockdev.sh@290 -- # echo 'Process bdevio pid: 146075'
00:26:21.829  Process bdevio pid: 146075
00:26:21.829   17:10:14	-- bdev/blockdev.sh@291 -- # waitforlisten 146075
00:26:21.829   17:10:14	-- common/autotest_common.sh@829 -- # '[' -z 146075 ']'
00:26:21.829   17:10:14	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:26:21.829   17:10:14	-- common/autotest_common.sh@834 -- # local max_retries=100
00:26:21.829   17:10:14	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:26:21.829  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:26:21.829   17:10:14	-- common/autotest_common.sh@838 -- # xtrace_disable
00:26:21.829   17:10:14	-- common/autotest_common.sh@10 -- # set +x
00:26:22.088  [2024-11-19 17:10:14.687557] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:26:22.088  [2024-11-19 17:10:14.688077] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146075 ]
00:26:22.088  [2024-11-19 17:10:14.853745] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3
00:26:22.088  [2024-11-19 17:10:14.908246] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:26:22.088  [2024-11-19 17:10:14.908321] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:26:22.088  [2024-11-19 17:10:14.908490] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:26:23.075   17:10:15	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:26:23.075   17:10:15	-- common/autotest_common.sh@862 -- # return 0
00:26:23.075   17:10:15	-- bdev/blockdev.sh@292 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests
00:26:23.075  I/O targets:
00:26:23.075    Nvme0n1: 1310720 blocks of 4096 bytes (5120 MiB)
00:26:23.075  
00:26:23.075  
00:26:23.075       CUnit - A unit testing framework for C - Version 2.1-3
00:26:23.075       http://cunit.sourceforge.net/
00:26:23.075  
00:26:23.075  
00:26:23.075  Suite: bdevio tests on: Nvme0n1
00:26:23.075    Test: blockdev write read block ...passed
00:26:23.075    Test: blockdev write zeroes read block ...passed
00:26:23.075    Test: blockdev write zeroes read no split ...passed
00:26:23.075    Test: blockdev write zeroes read split ...passed
00:26:23.075    Test: blockdev write zeroes read split partial ...passed
00:26:23.075    Test: blockdev reset ...[2024-11-19 17:10:15.853400] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller
00:26:23.075  [2024-11-19 17:10:15.855553] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:26:23.075  passed
00:26:23.075    Test: blockdev write read 8 blocks ...passed
00:26:23.075    Test: blockdev write read size > 128k ...passed
00:26:23.075    Test: blockdev write read invalid size ...passed
00:26:23.075    Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:26:23.075    Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:26:23.075    Test: blockdev write read max offset ...passed
00:26:23.075    Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:26:23.075    Test: blockdev writev readv 8 blocks ...passed
00:26:23.075    Test: blockdev writev readv 30 x 1block ...passed
00:26:23.075    Test: blockdev writev readv block ...passed
00:26:23.075    Test: blockdev writev readv size > 128k ...passed
00:26:23.075    Test: blockdev writev readv size > 128k in two iovs ...passed
00:26:23.075    Test: blockdev comparev and writev ...[2024-11-19 17:10:15.861971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x9e0d000 len:0x1000
00:26:23.075  [2024-11-19 17:10:15.862189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1
00:26:23.075  passed
00:26:23.075    Test: blockdev nvme passthru rw ...passed
00:26:23.075    Test: blockdev nvme passthru vendor specific ...[2024-11-19 17:10:15.863296] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0
00:26:23.075  [2024-11-19 17:10:15.863480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1
00:26:23.075  passed
00:26:23.075    Test: blockdev nvme admin passthru ...passed
00:26:23.075    Test: blockdev copy ...passed
00:26:23.075  
00:26:23.075  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:26:23.075                suites      1      1    n/a      0        0
00:26:23.075                 tests     23     23     23      0        0
00:26:23.075               asserts    152    152    152      0      n/a
00:26:23.075  
00:26:23.075  Elapsed time =    0.056 seconds
00:26:23.075  0
00:26:23.075   17:10:15	-- bdev/blockdev.sh@293 -- # killprocess 146075
00:26:23.075   17:10:15	-- common/autotest_common.sh@936 -- # '[' -z 146075 ']'
00:26:23.075   17:10:15	-- common/autotest_common.sh@940 -- # kill -0 146075
00:26:23.075    17:10:15	-- common/autotest_common.sh@941 -- # uname
00:26:23.075   17:10:15	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:26:23.075    17:10:15	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 146075
00:26:23.075  killing process with pid 146075
00:26:23.075   17:10:15	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:26:23.075   17:10:15	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:26:23.075   17:10:15	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 146075'
00:26:23.075   17:10:15	-- common/autotest_common.sh@955 -- # kill 146075
00:26:23.075   17:10:15	-- common/autotest_common.sh@960 -- # wait 146075
00:26:23.334  ************************************
00:26:23.334  END TEST bdev_bounds
00:26:23.334  ************************************
00:26:23.334   17:10:16	-- bdev/blockdev.sh@294 -- # trap - SIGINT SIGTERM EXIT
00:26:23.334  
00:26:23.334  real	0m1.521s
00:26:23.334  user	0m4.029s
00:26:23.334  sys	0m0.290s
00:26:23.334   17:10:16	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:26:23.334   17:10:16	-- common/autotest_common.sh@10 -- # set +x
00:26:23.334   17:10:16	-- bdev/blockdev.sh@760 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json Nvme0n1 ''
00:26:23.334   17:10:16	-- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']'
00:26:23.334   17:10:16	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:26:23.334   17:10:16	-- common/autotest_common.sh@10 -- # set +x
00:26:23.593  ************************************
00:26:23.593  START TEST bdev_nbd
00:26:23.593  ************************************
00:26:23.593   17:10:16	-- common/autotest_common.sh@1114 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json Nvme0n1 ''
00:26:23.593    17:10:16	-- bdev/blockdev.sh@298 -- # uname -s
00:26:23.593   17:10:16	-- bdev/blockdev.sh@298 -- # [[ Linux == Linux ]]
00:26:23.593   17:10:16	-- bdev/blockdev.sh@300 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:26:23.593   17:10:16	-- bdev/blockdev.sh@301 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
00:26:23.593   17:10:16	-- bdev/blockdev.sh@302 -- # bdev_all=('Nvme0n1')
00:26:23.593   17:10:16	-- bdev/blockdev.sh@302 -- # local bdev_all
00:26:23.593   17:10:16	-- bdev/blockdev.sh@303 -- # local bdev_num=1
00:26:23.593   17:10:16	-- bdev/blockdev.sh@307 -- # [[ -e /sys/module/nbd ]]
00:26:23.593   17:10:16	-- bdev/blockdev.sh@309 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9')
00:26:23.593   17:10:16	-- bdev/blockdev.sh@309 -- # local nbd_all
00:26:23.593   17:10:16	-- bdev/blockdev.sh@310 -- # bdev_num=1
00:26:23.593   17:10:16	-- bdev/blockdev.sh@312 -- # nbd_list=('/dev/nbd0')
00:26:23.593   17:10:16	-- bdev/blockdev.sh@312 -- # local nbd_list
00:26:23.593   17:10:16	-- bdev/blockdev.sh@313 -- # bdev_list=('Nvme0n1')
00:26:23.593   17:10:16	-- bdev/blockdev.sh@313 -- # local bdev_list
00:26:23.593   17:10:16	-- bdev/blockdev.sh@316 -- # nbd_pid=146132
00:26:23.593   17:10:16	-- bdev/blockdev.sh@317 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT
00:26:23.593   17:10:16	-- bdev/blockdev.sh@318 -- # waitforlisten 146132 /var/tmp/spdk-nbd.sock
00:26:23.593   17:10:16	-- bdev/blockdev.sh@315 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json ''
00:26:23.593   17:10:16	-- common/autotest_common.sh@829 -- # '[' -z 146132 ']'
00:26:23.593   17:10:16	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:26:23.593   17:10:16	-- common/autotest_common.sh@834 -- # local max_retries=100
00:26:23.593   17:10:16	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:26:23.593  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:26:23.593   17:10:16	-- common/autotest_common.sh@838 -- # xtrace_disable
00:26:23.593   17:10:16	-- common/autotest_common.sh@10 -- # set +x
00:26:23.593  [2024-11-19 17:10:16.253262] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:26:23.593  [2024-11-19 17:10:16.253742] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:26:23.593  [2024-11-19 17:10:16.405588] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:23.850  [2024-11-19 17:10:16.459817] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:26:24.416   17:10:17	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:26:24.416   17:10:17	-- common/autotest_common.sh@862 -- # return 0
00:26:24.416   17:10:17	-- bdev/blockdev.sh@320 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock Nvme0n1
00:26:24.416   17:10:17	-- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:26:24.416   17:10:17	-- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1')
00:26:24.416   17:10:17	-- bdev/nbd_common.sh@114 -- # local bdev_list
00:26:24.416   17:10:17	-- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock Nvme0n1
00:26:24.416   17:10:17	-- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:26:24.416   17:10:17	-- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1')
00:26:24.416   17:10:17	-- bdev/nbd_common.sh@23 -- # local bdev_list
00:26:24.416   17:10:17	-- bdev/nbd_common.sh@24 -- # local i
00:26:24.416   17:10:17	-- bdev/nbd_common.sh@25 -- # local nbd_device
00:26:24.416   17:10:17	-- bdev/nbd_common.sh@27 -- # (( i = 0 ))
00:26:24.416   17:10:17	-- bdev/nbd_common.sh@27 -- # (( i < 1 ))
00:26:24.416    17:10:17	-- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1
00:26:24.674   17:10:17	-- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0
00:26:24.674    17:10:17	-- bdev/nbd_common.sh@30 -- # basename /dev/nbd0
00:26:24.674   17:10:17	-- bdev/nbd_common.sh@30 -- # waitfornbd nbd0
00:26:24.674   17:10:17	-- common/autotest_common.sh@866 -- # local nbd_name=nbd0
00:26:24.674   17:10:17	-- common/autotest_common.sh@867 -- # local i
00:26:24.674   17:10:17	-- common/autotest_common.sh@869 -- # (( i = 1 ))
00:26:24.674   17:10:17	-- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:26:24.674   17:10:17	-- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions
00:26:24.674   17:10:17	-- common/autotest_common.sh@871 -- # break
00:26:24.674   17:10:17	-- common/autotest_common.sh@882 -- # (( i = 1 ))
00:26:24.674   17:10:17	-- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:26:24.674   17:10:17	-- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:26:24.674  1+0 records in
00:26:24.674  1+0 records out
00:26:24.674  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000507652 s, 8.1 MB/s
00:26:24.674    17:10:17	-- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:26:24.674   17:10:17	-- common/autotest_common.sh@884 -- # size=4096
00:26:24.674   17:10:17	-- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:26:24.674   17:10:17	-- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:26:24.674   17:10:17	-- common/autotest_common.sh@887 -- # return 0
00:26:24.674   17:10:17	-- bdev/nbd_common.sh@27 -- # (( i++ ))
00:26:24.674   17:10:17	-- bdev/nbd_common.sh@27 -- # (( i < 1 ))
00:26:24.674    17:10:17	-- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:26:24.930   17:10:17	-- bdev/nbd_common.sh@118 -- # nbd_disks_json='[
00:26:24.930    {
00:26:24.930      "nbd_device": "/dev/nbd0",
00:26:24.930      "bdev_name": "Nvme0n1"
00:26:24.930    }
00:26:24.930  ]'
00:26:24.930   17:10:17	-- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device'))
00:26:24.930    17:10:17	-- bdev/nbd_common.sh@119 -- # echo '[
00:26:24.930    {
00:26:24.930      "nbd_device": "/dev/nbd0",
00:26:24.930      "bdev_name": "Nvme0n1"
00:26:24.930    }
00:26:24.930  ]'
00:26:24.930    17:10:17	-- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device'
00:26:24.931   17:10:17	-- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0
00:26:24.931   17:10:17	-- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:26:24.931   17:10:17	-- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:26:24.931   17:10:17	-- bdev/nbd_common.sh@50 -- # local nbd_list
00:26:24.931   17:10:17	-- bdev/nbd_common.sh@51 -- # local i
00:26:24.931   17:10:17	-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:26:24.931   17:10:17	-- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:26:25.190    17:10:17	-- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:26:25.190   17:10:17	-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:26:25.190   17:10:17	-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:26:25.190   17:10:17	-- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:26:25.190   17:10:17	-- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:26:25.190   17:10:17	-- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:26:25.190   17:10:17	-- bdev/nbd_common.sh@41 -- # break
00:26:25.190   17:10:17	-- bdev/nbd_common.sh@45 -- # return 0
00:26:25.190    17:10:17	-- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:26:25.190    17:10:17	-- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:26:25.190     17:10:17	-- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:26:25.449    17:10:18	-- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:26:25.449     17:10:18	-- bdev/nbd_common.sh@64 -- # echo '[]'
00:26:25.449     17:10:18	-- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:26:25.449    17:10:18	-- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:26:25.449     17:10:18	-- bdev/nbd_common.sh@65 -- # echo ''
00:26:25.449     17:10:18	-- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:26:25.449     17:10:18	-- bdev/nbd_common.sh@65 -- # true
00:26:25.449    17:10:18	-- bdev/nbd_common.sh@65 -- # count=0
00:26:25.449    17:10:18	-- bdev/nbd_common.sh@66 -- # echo 0
00:26:25.449   17:10:18	-- bdev/nbd_common.sh@122 -- # count=0
00:26:25.449   17:10:18	-- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']'
00:26:25.449   17:10:18	-- bdev/nbd_common.sh@127 -- # return 0
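nbd_get_count shows a small shell detail worth noting: grep -c exits non-zero when it counts zero matches, so the pipeline is guarded with true (the bare true at nbd_common.sh@65 above) to survive set -e. Roughly:

    json=$("$rpc" -s /var/tmp/spdk-nbd.sock nbd_get_disks)
    count=$(echo "$json" | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)
    if [ "$count" -ne 0 ]; then
        echo "leaked nbd devices: $count" >&2   # cleanup failed
        exit 1
    fi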
00:26:25.449   17:10:18	-- bdev/blockdev.sh@321 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock Nvme0n1 /dev/nbd0
00:26:25.449   17:10:18	-- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:26:25.449   17:10:18	-- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1')
00:26:25.449   17:10:18	-- bdev/nbd_common.sh@91 -- # local bdev_list
00:26:25.449   17:10:18	-- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0')
00:26:25.449   17:10:18	-- bdev/nbd_common.sh@92 -- # local nbd_list
00:26:25.449   17:10:18	-- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock Nvme0n1 /dev/nbd0
00:26:25.449   17:10:18	-- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:26:25.449   17:10:18	-- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1')
00:26:25.449   17:10:18	-- bdev/nbd_common.sh@10 -- # local bdev_list
00:26:25.449   17:10:18	-- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0')
00:26:25.449   17:10:18	-- bdev/nbd_common.sh@11 -- # local nbd_list
00:26:25.449   17:10:18	-- bdev/nbd_common.sh@12 -- # local i
00:26:25.449   17:10:18	-- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:26:25.449   17:10:18	-- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:26:25.449   17:10:18	-- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0
00:26:25.709  /dev/nbd0
00:26:25.709    17:10:18	-- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:26:25.709   17:10:18	-- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:26:25.709   17:10:18	-- common/autotest_common.sh@866 -- # local nbd_name=nbd0
00:26:25.709   17:10:18	-- common/autotest_common.sh@867 -- # local i
00:26:25.709   17:10:18	-- common/autotest_common.sh@869 -- # (( i = 1 ))
00:26:25.709   17:10:18	-- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:26:25.709   17:10:18	-- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions
00:26:25.709   17:10:18	-- common/autotest_common.sh@871 -- # break
00:26:25.709   17:10:18	-- common/autotest_common.sh@882 -- # (( i = 1 ))
00:26:25.709   17:10:18	-- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:26:25.709   17:10:18	-- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:26:25.709  1+0 records in
00:26:25.709  1+0 records out
00:26:25.709  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000766662 s, 5.3 MB/s
00:26:25.709    17:10:18	-- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:26:25.709   17:10:18	-- common/autotest_common.sh@884 -- # size=4096
00:26:25.709   17:10:18	-- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:26:25.709   17:10:18	-- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:26:25.709   17:10:18	-- common/autotest_common.sh@887 -- # return 0
00:26:25.709   17:10:18	-- bdev/nbd_common.sh@14 -- # (( i++ ))
00:26:25.709   17:10:18	-- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:26:25.709    17:10:18	-- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:26:25.709    17:10:18	-- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:26:25.709     17:10:18	-- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:26:25.967    17:10:18	-- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:26:25.967    {
00:26:25.967      "nbd_device": "/dev/nbd0",
00:26:25.967      "bdev_name": "Nvme0n1"
00:26:25.967    }
00:26:25.967  ]'
00:26:25.967     17:10:18	-- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:26:25.967     17:10:18	-- bdev/nbd_common.sh@64 -- # echo '[
00:26:25.967    {
00:26:25.967      "nbd_device": "/dev/nbd0",
00:26:25.967      "bdev_name": "Nvme0n1"
00:26:25.967    }
00:26:25.967  ]'
00:26:25.967    17:10:18	-- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0
00:26:25.967     17:10:18	-- bdev/nbd_common.sh@65 -- # echo /dev/nbd0
00:26:25.967     17:10:18	-- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:26:25.967    17:10:18	-- bdev/nbd_common.sh@65 -- # count=1
00:26:25.967    17:10:18	-- bdev/nbd_common.sh@66 -- # echo 1
00:26:25.967   17:10:18	-- bdev/nbd_common.sh@95 -- # count=1
00:26:25.967   17:10:18	-- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']'
00:26:25.967   17:10:18	-- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write
00:26:25.967   17:10:18	-- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0')
00:26:25.967   17:10:18	-- bdev/nbd_common.sh@70 -- # local nbd_list
00:26:25.967   17:10:18	-- bdev/nbd_common.sh@71 -- # local operation=write
00:26:25.967   17:10:18	-- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
00:26:25.967   17:10:18	-- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:26:25.967   17:10:18	-- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256
00:26:25.967  256+0 records in
00:26:25.967  256+0 records out
00:26:25.967  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00741688 s, 141 MB/s
00:26:25.967   17:10:18	-- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:26:25.967   17:10:18	-- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:26:25.967  256+0 records in
00:26:25.967  256+0 records out
00:26:25.967  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0491158 s, 21.3 MB/s
00:26:25.967   17:10:18	-- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify
00:26:25.967   17:10:18	-- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0')
00:26:25.967   17:10:18	-- bdev/nbd_common.sh@70 -- # local nbd_list
00:26:25.967   17:10:18	-- bdev/nbd_common.sh@71 -- # local operation=verify
00:26:25.967   17:10:18	-- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
00:26:25.967   17:10:18	-- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:26:25.967   17:10:18	-- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:26:25.967   17:10:18	-- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:26:25.967   17:10:18	-- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0
00:26:26.226   17:10:18	-- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
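The write/verify pass is a plain dd round trip: 1 MiB of random data is staged in a file, written through the NBD device with O_DIRECT, and compared byte-for-byte against the source. Condensed from the traced commands:

    tmp=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
    dd if=/dev/urandom of="$tmp" bs=4096 count=256            # stage 1 MiB of random data
    dd if="$tmp" of=/dev/nbd0 bs=4096 count=256 oflag=direct  # write it through the nbd device
    cmp -b -n 1M "$tmp" /dev/nbd0                             # byte-for-byte readback check
    rm "$tmp"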
00:26:26.226   17:10:18	-- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0
00:26:26.226   17:10:18	-- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:26:26.226   17:10:18	-- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:26:26.226   17:10:18	-- bdev/nbd_common.sh@50 -- # local nbd_list
00:26:26.226   17:10:18	-- bdev/nbd_common.sh@51 -- # local i
00:26:26.226   17:10:18	-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:26:26.226   17:10:18	-- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:26:26.484    17:10:19	-- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:26:26.484   17:10:19	-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:26:26.484   17:10:19	-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:26:26.485   17:10:19	-- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:26:26.485   17:10:19	-- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:26:26.485   17:10:19	-- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:26:26.485   17:10:19	-- bdev/nbd_common.sh@41 -- # break
00:26:26.485   17:10:19	-- bdev/nbd_common.sh@45 -- # return 0
00:26:26.485    17:10:19	-- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:26:26.485    17:10:19	-- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:26:26.485     17:10:19	-- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:26:26.743    17:10:19	-- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:26:26.743     17:10:19	-- bdev/nbd_common.sh@64 -- # echo '[]'
00:26:26.743     17:10:19	-- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:26:26.743    17:10:19	-- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:26:26.743     17:10:19	-- bdev/nbd_common.sh@65 -- # echo ''
00:26:26.743     17:10:19	-- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:26:26.743     17:10:19	-- bdev/nbd_common.sh@65 -- # true
00:26:26.743    17:10:19	-- bdev/nbd_common.sh@65 -- # count=0
00:26:26.743    17:10:19	-- bdev/nbd_common.sh@66 -- # echo 0
00:26:26.743   17:10:19	-- bdev/nbd_common.sh@104 -- # count=0
00:26:26.743   17:10:19	-- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:26:26.743   17:10:19	-- bdev/nbd_common.sh@109 -- # return 0
00:26:26.743   17:10:19	-- bdev/blockdev.sh@322 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0
00:26:26.743   17:10:19	-- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:26:26.743   17:10:19	-- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0')
00:26:26.743   17:10:19	-- bdev/nbd_common.sh@132 -- # local nbd_list
00:26:26.743   17:10:19	-- bdev/nbd_common.sh@133 -- # local mkfs_ret
00:26:26.743   17:10:19	-- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512
00:26:27.002  malloc_lvol_verify
00:26:27.002   17:10:19	-- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs
00:26:27.002  4028e6a3-2e9f-4e9c-9084-9aae78582de8
00:26:27.002   17:10:19	-- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs
00:26:27.297  45b3e209-3374-45f6-8ef3-8135f8d86afa
00:26:27.297   17:10:20	-- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0
00:26:27.556  /dev/nbd0
00:26:27.556   17:10:20	-- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0
00:26:27.556  mke2fs 1.46.5 (30-Dec-2021)
00:26:27.556  
00:26:27.556  Filesystem too small for a journal
00:26:27.556  Discarding device blocks: done
00:26:27.556  Creating filesystem with 1024 4k blocks and 1024 inodes
00:26:27.556  
00:26:27.556  Allocating group tables: done
00:26:27.556  Writing inode tables: done
00:26:27.556  Writing superblocks and filesystem accounting information: done
00:26:27.556  
00:26:27.556   17:10:20	-- bdev/nbd_common.sh@141 -- # mkfs_ret=0
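nbd_with_lvol_verify stacks logical volumes under the export: a 16 MiB malloc bdev hosts an lvolstore, a 4 MiB lvol carved from it is exported as /dev/nbd0, and mkfs.ext4 (the 1024 4k-block filesystem above) proves the whole stack handles real filesystem I/O. The RPC sequence as traced:

    "$rpc" -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512  # 16 MiB, 512 B blocks
    "$rpc" -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs
    "$rpc" -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs                   # 4 MiB lvol in lvs
    "$rpc" -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0
    mkfs.ext4 /dev/nbd0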
00:26:27.556   17:10:20	-- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0
00:26:27.556   17:10:20	-- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:26:27.556   17:10:20	-- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:26:27.556   17:10:20	-- bdev/nbd_common.sh@50 -- # local nbd_list
00:26:27.556   17:10:20	-- bdev/nbd_common.sh@51 -- # local i
00:26:27.556   17:10:20	-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:26:27.556   17:10:20	-- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:26:27.814    17:10:20	-- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:26:27.814   17:10:20	-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:26:27.814   17:10:20	-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:26:27.814   17:10:20	-- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:26:27.814   17:10:20	-- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:26:27.814   17:10:20	-- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:26:28.072   17:10:20	-- bdev/nbd_common.sh@41 -- # break
00:26:28.072   17:10:20	-- bdev/nbd_common.sh@45 -- # return 0
00:26:28.072   17:10:20	-- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']'
00:26:28.072   17:10:20	-- bdev/nbd_common.sh@147 -- # return 0
00:26:28.072   17:10:20	-- bdev/blockdev.sh@324 -- # killprocess 146132
00:26:28.072   17:10:20	-- common/autotest_common.sh@936 -- # '[' -z 146132 ']'
00:26:28.072   17:10:20	-- common/autotest_common.sh@940 -- # kill -0 146132
00:26:28.072    17:10:20	-- common/autotest_common.sh@941 -- # uname
00:26:28.072   17:10:20	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:26:28.072    17:10:20	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 146132
00:26:28.072   17:10:20	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:26:28.072   17:10:20	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:26:28.072   17:10:20	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 146132'
00:26:28.072  killing process with pid 146132
00:26:28.072   17:10:20	-- common/autotest_common.sh@955 -- # kill 146132
00:26:28.072   17:10:20	-- common/autotest_common.sh@960 -- # wait 146132
00:26:28.331   17:10:20	-- bdev/blockdev.sh@325 -- # trap - SIGINT SIGTERM EXIT
00:26:28.331  
00:26:28.331  real	0m4.776s
00:26:28.331  user	0m7.187s
00:26:28.331  sys	0m1.223s
00:26:28.331   17:10:20	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:26:28.331  ************************************
00:26:28.331  END TEST bdev_nbd
00:26:28.331  ************************************
00:26:28.331   17:10:20	-- common/autotest_common.sh@10 -- # set +x
00:26:28.331   17:10:21	-- bdev/blockdev.sh@761 -- # [[ y == y ]]
00:26:28.331   17:10:21	-- bdev/blockdev.sh@762 -- # '[' nvme = nvme ']'
00:26:28.331  skipping fio tests on NVMe due to multi-ns failures.
00:26:28.331   17:10:21	-- bdev/blockdev.sh@764 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.'
00:26:28.331   17:10:21	-- bdev/blockdev.sh@773 -- # trap cleanup SIGINT SIGTERM EXIT
00:26:28.331   17:10:21	-- bdev/blockdev.sh@775 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''
00:26:28.331   17:10:21	-- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']'
00:26:28.331   17:10:21	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:26:28.331   17:10:21	-- common/autotest_common.sh@10 -- # set +x
00:26:28.331  ************************************
00:26:28.331  START TEST bdev_verify
00:26:28.331  ************************************
00:26:28.331   17:10:21	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''
00:26:28.331  [2024-11-19 17:10:21.075764] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:26:28.331  [2024-11-19 17:10:21.075942] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146315 ]
00:26:28.589  [2024-11-19 17:10:21.221753] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2
00:26:28.589  [2024-11-19 17:10:21.268286] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:26:28.589  [2024-11-19 17:10:21.268289] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:26:28.848  Running I/O for 5 seconds...
00:26:34.110  
00:26:34.110                                                                                                  Latency(us)
00:26:34.110  
[2024-11-19T17:10:26.974Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:26:34.110  
[2024-11-19T17:10:26.974Z]  Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:26:34.111  	 Verification LBA range: start 0x0 length 0xa0000
00:26:34.111  	 Nvme0n1             :       5.01   16785.00      65.57       0.00     0.00    7593.28    1053.26   15229.32
00:26:34.111  
[2024-11-19T17:10:26.975Z]  Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:26:34.111  	 Verification LBA range: start 0xa0000 length 0xa0000
00:26:34.111  	 Nvme0n1             :       5.01   16756.87      65.46       0.00     0.00    7606.24     368.64   17601.10
00:26:34.111  
[2024-11-19T17:10:26.975Z]  ===================================================================================================================
00:26:34.111  
[2024-11-19T17:10:26.975Z]  Total                       :              33541.87     131.02       0.00     0.00    7599.76     368.64   17601.10
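bdev_verify drives the bdevperf example with a verify workload: queue depth 128, 4 KiB I/O, 5 seconds, core mask 0x3, so each reactor runs its own job (the Core Mask 0x1 and 0x2 rows above) and the two sum to roughly 33.5K IOPS, about 131 MiB/s at 4 KiB. The invocation as traced:

    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3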
00:26:42.221  
00:26:42.221  real	0m13.646s
00:26:42.221  user	0m26.542s
00:26:42.221  sys	0m0.267s
00:26:42.221   17:10:34	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:26:42.221   17:10:34	-- common/autotest_common.sh@10 -- # set +x
00:26:42.221  ************************************
00:26:42.221  END TEST bdev_verify
00:26:42.221  ************************************
00:26:42.221   17:10:34	-- bdev/blockdev.sh@776 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:26:42.221   17:10:34	-- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']'
00:26:42.221   17:10:34	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:26:42.221   17:10:34	-- common/autotest_common.sh@10 -- # set +x
00:26:42.221  ************************************
00:26:42.221  START TEST bdev_verify_big_io
00:26:42.221  ************************************
00:26:42.221   17:10:34	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:26:42.221  [2024-11-19 17:10:34.814780] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:26:42.221  [2024-11-19 17:10:34.815048] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146442 ]
00:26:42.221  [2024-11-19 17:10:34.974044] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2
00:26:42.221  [2024-11-19 17:10:35.026477] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:26:42.221  [2024-11-19 17:10:35.026482] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:26:42.479  Running I/O for 5 seconds...
00:26:47.752  
00:26:47.752                                                                                                  Latency(us)
00:26:47.752  
[2024-11-19T17:10:40.616Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:26:47.752  
[2024-11-19T17:10:40.616Z]  Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:26:47.752  	 Verification LBA range: start 0x0 length 0xa000
00:26:47.752  	 Nvme0n1             :       5.05    1973.08     123.32       0.00     0.00   64005.95     378.39   92374.55
00:26:47.752  
[2024-11-19T17:10:40.616Z]  Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:26:47.752  	 Verification LBA range: start 0xa000 length 0xa000
00:26:47.752  	 Nvme0n1             :       5.06    1994.98     124.69       0.00     0.00   63299.08     756.78  105856.24
00:26:47.752  
[2024-11-19T17:10:40.616Z]  ===================================================================================================================
00:26:47.752  
[2024-11-19T17:10:40.616Z]  Total                       :               3968.06     248.00       0.00     0.00   63650.31     378.39  105856.24
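bdev_verify_big_io is the same harness with 64 KiB I/O instead of 4 KiB; IOPS drops roughly eightfold while aggregate throughput roughly doubles to about 248 MiB/s, the expected trade for larger blocks on this QEMU NVMe device:

    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 65536 -w verify -t 5 -C -m 0x3   # only -o differs from bdev_verify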
00:26:48.319  
00:26:48.319  real	0m6.128s
00:26:48.319  user	0m11.507s
00:26:48.319  sys	0m0.231s
00:26:48.319   17:10:40	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:26:48.319  ************************************
00:26:48.319  END TEST bdev_verify_big_io
00:26:48.319  ************************************
00:26:48.319   17:10:40	-- common/autotest_common.sh@10 -- # set +x
00:26:48.319   17:10:40	-- bdev/blockdev.sh@777 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:26:48.319   17:10:40	-- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']'
00:26:48.319   17:10:40	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:26:48.319   17:10:40	-- common/autotest_common.sh@10 -- # set +x
00:26:48.319  ************************************
00:26:48.319  START TEST bdev_write_zeroes
00:26:48.319  ************************************
00:26:48.319   17:10:40	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:26:48.319  [2024-11-19 17:10:41.004268] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:26:48.319  [2024-11-19 17:10:41.004501] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146538 ]
00:26:48.319  [2024-11-19 17:10:41.158095] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:48.577  [2024-11-19 17:10:41.199709] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:26:48.577  Running I/O for 1 seconds...
00:26:49.951  
00:26:49.951                                                                                                  Latency(us)
00:26:49.951  
[2024-11-19T17:10:42.815Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:26:49.951  
[2024-11-19T17:10:42.815Z]  Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:26:49.951  	 Nvme0n1             :       1.00   54026.36     211.04       0.00     0.00    2363.57     819.20   12857.54
00:26:49.951  
[2024-11-19T17:10:42.815Z]  ===================================================================================================================
00:26:49.951  
[2024-11-19T17:10:42.815Z]  Total                       :              54026.36     211.04       0.00     0.00    2363.57     819.20   12857.54
00:26:49.951  
00:26:49.951  real	0m1.714s
00:26:49.951  user	0m1.430s
00:26:49.951  sys	0m0.184s
00:26:49.951   17:10:42	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:26:49.951   17:10:42	-- common/autotest_common.sh@10 -- # set +x
00:26:49.951  ************************************
00:26:49.951  END TEST bdev_write_zeroes
00:26:49.951  ************************************
00:26:49.951   17:10:42	-- bdev/blockdev.sh@780 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:26:49.951   17:10:42	-- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']'
00:26:49.951   17:10:42	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:26:49.951   17:10:42	-- common/autotest_common.sh@10 -- # set +x
00:26:49.951  ************************************
00:26:49.951  START TEST bdev_json_nonenclosed
00:26:49.951  ************************************
00:26:49.951   17:10:42	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:26:49.951  [2024-11-19 17:10:42.762394] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:26:49.952  [2024-11-19 17:10:42.762545] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146588 ]
00:26:50.210  [2024-11-19 17:10:42.903098] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:50.210  [2024-11-19 17:10:42.946510] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:26:50.210  [2024-11-19 17:10:42.946706] json_config.c: 595:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: not enclosed in {}.
00:26:50.210  [2024-11-19 17:10:42.946749] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:26:50.469  
00:26:50.469  real	0m0.365s
00:26:50.469  user	0m0.160s
00:26:50.469  sys	0m0.105s
00:26:50.469   17:10:43	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:26:50.469  ************************************
00:26:50.469  END TEST bdev_json_nonenclosed
00:26:50.469  ************************************
00:26:50.469   17:10:43	-- common/autotest_common.sh@10 -- # set +x
00:26:50.470   17:10:43	-- bdev/blockdev.sh@783 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:26:50.470   17:10:43	-- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']'
00:26:50.470   17:10:43	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:26:50.470   17:10:43	-- common/autotest_common.sh@10 -- # set +x
00:26:50.470  ************************************
00:26:50.470  START TEST bdev_json_nonarray
00:26:50.470  ************************************
00:26:50.470   17:10:43	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:26:50.470  [2024-11-19 17:10:43.188269] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:26:50.470  [2024-11-19 17:10:43.188434] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146610 ]
00:26:50.729  [2024-11-19 17:10:43.328925] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:50.729  [2024-11-19 17:10:43.371857] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:26:50.729  [2024-11-19 17:10:43.372066] json_config.c: 601:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array.
00:26:50.729  [2024-11-19 17:10:43.372105] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:26:50.729  
00:26:50.729  real	0m0.369s
00:26:50.729  user	0m0.153s
00:26:50.729  sys	0m0.116s
00:26:50.729   17:10:43	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:26:50.729   17:10:43	-- common/autotest_common.sh@10 -- # set +x
00:26:50.729  ************************************
00:26:50.729  END TEST bdev_json_nonarray
00:26:50.729  ************************************
00:26:50.729   17:10:43	-- bdev/blockdev.sh@785 -- # [[ nvme == bdev ]]
00:26:50.729   17:10:43	-- bdev/blockdev.sh@792 -- # [[ nvme == gpt ]]
00:26:50.729   17:10:43	-- bdev/blockdev.sh@796 -- # [[ nvme == crypto_sw ]]
00:26:50.729   17:10:43	-- bdev/blockdev.sh@808 -- # trap - SIGINT SIGTERM EXIT
00:26:50.729   17:10:43	-- bdev/blockdev.sh@809 -- # cleanup
00:26:50.729   17:10:43	-- bdev/blockdev.sh@21 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile
00:26:50.729   17:10:43	-- bdev/blockdev.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
00:26:50.729   17:10:43	-- bdev/blockdev.sh@24 -- # [[ nvme == rbd ]]
00:26:50.729   17:10:43	-- bdev/blockdev.sh@28 -- # [[ nvme == daos ]]
00:26:50.729   17:10:43	-- bdev/blockdev.sh@32 -- # [[ nvme = \g\p\t ]]
00:26:50.729   17:10:43	-- bdev/blockdev.sh@38 -- # [[ nvme == xnvme ]]
00:26:50.729  
00:26:50.729  real	0m31.686s
00:26:50.729  user	0m53.753s
00:26:50.729  sys	0m3.382s
00:26:50.729   17:10:43	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:26:50.729   17:10:43	-- common/autotest_common.sh@10 -- # set +x
00:26:50.729  ************************************
00:26:50.729  END TEST blockdev_nvme
00:26:50.729  ************************************
00:26:50.988    17:10:43	-- spdk/autotest.sh@206 -- # uname -s
00:26:50.988   17:10:43	-- spdk/autotest.sh@206 -- # [[ Linux == Linux ]]
00:26:50.988   17:10:43	-- spdk/autotest.sh@207 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt
00:26:50.988   17:10:43	-- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']'
00:26:50.988   17:10:43	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:26:50.988   17:10:43	-- common/autotest_common.sh@10 -- # set +x
00:26:50.988  ************************************
00:26:50.988  START TEST blockdev_nvme_gpt
00:26:50.988  ************************************
00:26:50.988   17:10:43	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt
00:26:50.988  * Looking for test storage...
00:26:50.988  * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev
00:26:50.988    17:10:43	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:26:50.988     17:10:43	-- common/autotest_common.sh@1690 -- # lcov --version
00:26:50.988     17:10:43	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:26:50.988    17:10:43	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:26:50.988    17:10:43	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:26:50.988    17:10:43	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:26:50.988    17:10:43	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:26:50.988    17:10:43	-- scripts/common.sh@335 -- # IFS=.-:
00:26:50.988    17:10:43	-- scripts/common.sh@335 -- # read -ra ver1
00:26:50.988    17:10:43	-- scripts/common.sh@336 -- # IFS=.-:
00:26:50.988    17:10:43	-- scripts/common.sh@336 -- # read -ra ver2
00:26:50.988    17:10:43	-- scripts/common.sh@337 -- # local 'op=<'
00:26:50.988    17:10:43	-- scripts/common.sh@339 -- # ver1_l=2
00:26:50.988    17:10:43	-- scripts/common.sh@340 -- # ver2_l=1
00:26:50.988    17:10:43	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:26:50.988    17:10:43	-- scripts/common.sh@343 -- # case "$op" in
00:26:50.988    17:10:43	-- scripts/common.sh@344 -- # : 1
00:26:50.988    17:10:43	-- scripts/common.sh@363 -- # (( v = 0 ))
00:26:50.988    17:10:43	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:26:50.988     17:10:43	-- scripts/common.sh@364 -- # decimal 1
00:26:50.988     17:10:43	-- scripts/common.sh@352 -- # local d=1
00:26:50.988     17:10:43	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:26:50.988     17:10:43	-- scripts/common.sh@354 -- # echo 1
00:26:50.988    17:10:43	-- scripts/common.sh@364 -- # ver1[v]=1
00:26:50.988     17:10:43	-- scripts/common.sh@365 -- # decimal 2
00:26:50.988     17:10:43	-- scripts/common.sh@352 -- # local d=2
00:26:50.988     17:10:43	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:26:50.989     17:10:43	-- scripts/common.sh@354 -- # echo 2
00:26:50.989    17:10:43	-- scripts/common.sh@365 -- # ver2[v]=2
00:26:50.989    17:10:43	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:26:50.989    17:10:43	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:26:50.989    17:10:43	-- scripts/common.sh@367 -- # return 0
00:26:50.989    17:10:43	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:26:50.989    17:10:43	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:26:50.989  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:26:50.989  		--rc genhtml_branch_coverage=1
00:26:50.989  		--rc genhtml_function_coverage=1
00:26:50.989  		--rc genhtml_legend=1
00:26:50.989  		--rc geninfo_all_blocks=1
00:26:50.989  		--rc geninfo_unexecuted_blocks=1
00:26:50.989  		
00:26:50.989  		'
00:26:50.989    17:10:43	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:26:50.989  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:26:50.989  		--rc genhtml_branch_coverage=1
00:26:50.989  		--rc genhtml_function_coverage=1
00:26:50.989  		--rc genhtml_legend=1
00:26:50.989  		--rc geninfo_all_blocks=1
00:26:50.989  		--rc geninfo_unexecuted_blocks=1
00:26:50.989  		
00:26:50.989  		'
00:26:50.989    17:10:43	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:26:50.989  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:26:50.989  		--rc genhtml_branch_coverage=1
00:26:50.989  		--rc genhtml_function_coverage=1
00:26:50.989  		--rc genhtml_legend=1
00:26:50.989  		--rc geninfo_all_blocks=1
00:26:50.989  		--rc geninfo_unexecuted_blocks=1
00:26:50.989  		
00:26:50.989  		'
00:26:50.989    17:10:43	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:26:50.989  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:26:50.989  		--rc genhtml_branch_coverage=1
00:26:50.989  		--rc genhtml_function_coverage=1
00:26:50.989  		--rc genhtml_legend=1
00:26:50.989  		--rc geninfo_all_blocks=1
00:26:50.989  		--rc geninfo_unexecuted_blocks=1
00:26:50.989  		
00:26:50.989  		'
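The burst of scripts/common.sh lines above is lcov version gating: cmp_versions splits both versions into arrays on IFS=.-: and compares them field by field, so lt 1.15 2 succeeds on the first field and the branch-coverage LCOV_OPTS get exported. A condensed reconstruction of the comparison (function body rebuilt from the traced lines, not copied from the script; it assumes numeric fields, where the real script sanitizes each via decimal()):

    # Is dotted version $1 strictly older than $2?
    lt() {
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for ((v = 0; v < max; v++)); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1   # equal versions are not 'less than'
    }
    lt 1.15 2 && echo older   # 1 < 2 at the first field, as in the trace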
00:26:50.989   17:10:43	-- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh
00:26:50.989    17:10:43	-- bdev/nbd_common.sh@6 -- # set -e
00:26:50.989   17:10:43	-- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd
00:26:50.989   17:10:43	-- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
00:26:50.989   17:10:43	-- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json
00:26:50.989   17:10:43	-- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json
00:26:50.989   17:10:43	-- bdev/blockdev.sh@18 -- # :
00:26:50.989   17:10:43	-- bdev/blockdev.sh@668 -- # QOS_DEV_1=Malloc_0
00:26:50.989   17:10:43	-- bdev/blockdev.sh@669 -- # QOS_DEV_2=Null_1
00:26:50.989   17:10:43	-- bdev/blockdev.sh@670 -- # QOS_RUN_TIME=5
00:26:50.989    17:10:43	-- bdev/blockdev.sh@672 -- # uname -s
00:26:50.989   17:10:43	-- bdev/blockdev.sh@672 -- # '[' Linux = Linux ']'
00:26:50.989   17:10:43	-- bdev/blockdev.sh@674 -- # PRE_RESERVED_MEM=0
00:26:50.989   17:10:43	-- bdev/blockdev.sh@680 -- # test_type=gpt
00:26:50.989   17:10:43	-- bdev/blockdev.sh@681 -- # crypto_device=
00:26:50.989   17:10:43	-- bdev/blockdev.sh@682 -- # dek=
00:26:50.989   17:10:43	-- bdev/blockdev.sh@683 -- # env_ctx=
00:26:50.989   17:10:43	-- bdev/blockdev.sh@684 -- # wait_for_rpc=
00:26:50.989   17:10:43	-- bdev/blockdev.sh@685 -- # '[' -n '' ']'
00:26:50.989   17:10:43	-- bdev/blockdev.sh@688 -- # [[ gpt == bdev ]]
00:26:50.989   17:10:43	-- bdev/blockdev.sh@688 -- # [[ gpt == crypto_* ]]
00:26:50.989   17:10:43	-- bdev/blockdev.sh@691 -- # start_spdk_tgt
00:26:50.989   17:10:43	-- bdev/blockdev.sh@45 -- # spdk_tgt_pid=146702
00:26:50.989   17:10:43	-- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT
00:26:50.989   17:10:43	-- bdev/blockdev.sh@47 -- # waitforlisten 146702
00:26:50.989   17:10:43	-- common/autotest_common.sh@829 -- # '[' -z 146702 ']'
00:26:50.989   17:10:43	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:26:50.989   17:10:43	-- common/autotest_common.sh@834 -- # local max_retries=100
00:26:50.989  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:26:50.989   17:10:43	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:26:50.989   17:10:43	-- common/autotest_common.sh@838 -- # xtrace_disable
00:26:50.989   17:10:43	-- common/autotest_common.sh@10 -- # set +x
00:26:50.989   17:10:43	-- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' ''
00:26:51.248  [2024-11-19 17:10:43.906170] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:26:51.248  [2024-11-19 17:10:43.906400] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146702 ]
00:26:51.248  [2024-11-19 17:10:44.060113] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:51.508  [2024-11-19 17:10:44.111254] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:26:51.508  [2024-11-19 17:10:44.111504] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:26:52.075   17:10:44	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:26:52.075   17:10:44	-- common/autotest_common.sh@862 -- # return 0
00:26:52.075   17:10:44	-- bdev/blockdev.sh@692 -- # case "$test_type" in
00:26:52.075   17:10:44	-- bdev/blockdev.sh@700 -- # setup_gpt_conf
00:26:52.075   17:10:44	-- bdev/blockdev.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:26:52.643  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev
00:26:52.643  Waiting for block devices as requested
00:26:52.643  0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme
00:26:52.643   17:10:45	-- bdev/blockdev.sh@103 -- # get_zoned_devs
00:26:52.643   17:10:45	-- common/autotest_common.sh@1664 -- # zoned_devs=()
00:26:52.643   17:10:45	-- common/autotest_common.sh@1664 -- # local -gA zoned_devs
00:26:52.643   17:10:45	-- common/autotest_common.sh@1665 -- # local nvme bdf
00:26:52.643   17:10:45	-- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme*
00:26:52.643   17:10:45	-- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1
00:26:52.643   17:10:45	-- common/autotest_common.sh@1657 -- # local device=nvme0n1
00:26:52.643   17:10:45	-- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:26:52.643   17:10:45	-- common/autotest_common.sh@1660 -- # [[ none != none ]]
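get_zoned_devs walks /sys/block/nvme* and collects any namespace whose queue/zoned attribute is not none; here the check reads none, so nothing is excluded and nvme0n1 goes forward to GPT setup. A sketch of the traced checks (how matches are recorded in the real helper is an assumption):

    declare -A zoned_devs=()
    for nvme in /sys/block/nvme*; do
        dev=$(basename "$nvme")
        if [[ -e $nvme/queue/zoned && $(cat "$nvme/queue/zoned") != none ]]; then
            zoned_devs[$dev]=1   # zoned namespaces cannot take a plain GPT label
        fi
    done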
00:26:52.643   17:10:45	-- bdev/blockdev.sh@105 -- # nvme_devs=('/sys/bus/pci/drivers/nvme/0000:00:06.0/nvme/nvme0/nvme0n1')
00:26:52.643   17:10:45	-- bdev/blockdev.sh@105 -- # local nvme_devs nvme_dev
00:26:52.643   17:10:45	-- bdev/blockdev.sh@106 -- # gpt_nvme=
00:26:52.643   17:10:45	-- bdev/blockdev.sh@108 -- # for nvme_dev in "${nvme_devs[@]}"
00:26:52.643   17:10:45	-- bdev/blockdev.sh@109 -- # [[ -z '' ]]
00:26:52.643   17:10:45	-- bdev/blockdev.sh@110 -- # dev=/dev/nvme0n1
00:26:52.643    17:10:45	-- bdev/blockdev.sh@111 -- # parted /dev/nvme0n1 -ms print
00:26:52.643   17:10:45	-- bdev/blockdev.sh@111 -- # pt='Error: /dev/nvme0n1: unrecognised disk label
00:26:52.643  BYT;
00:26:52.643  /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;'
00:26:52.643   17:10:45	-- bdev/blockdev.sh@112 -- # [[ Error: /dev/nvme0n1: unrecognised disk label
00:26:52.643  BYT;
00:26:52.643  /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]]
00:26:52.643   17:10:45	-- bdev/blockdev.sh@113 -- # gpt_nvme=/dev/nvme0n1
00:26:52.643   17:10:45	-- bdev/blockdev.sh@114 -- # break
00:26:52.643   17:10:45	-- bdev/blockdev.sh@117 -- # [[ -n /dev/nvme0n1 ]]
00:26:52.643   17:10:45	-- bdev/blockdev.sh@122 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030
00:26:52.643   17:10:45	-- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df
00:26:52.643   17:10:45	-- bdev/blockdev.sh@126 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100%
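GPT setup probes the namespace with parted's machine-readable print; the 'unrecognised disk label' error marks a blank disk that is safe to claim, which is then labeled and split into two equal partitions. Condensed from the trace:

    pt=$(parted /dev/nvme0n1 -ms print 2>&1)
    if [[ $pt == *"/dev/nvme0n1: unrecognised disk label"* ]]; then
        gpt_nvme=/dev/nvme0n1
        parted -s "$gpt_nvme" mklabel gpt \
            mkpart SPDK_TEST_first 0% 50% \
            mkpart SPDK_TEST_second 50% 100%
    fi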
00:26:53.210    17:10:45	-- bdev/blockdev.sh@128 -- # get_spdk_gpt_old
00:26:53.210    17:10:45	-- scripts/common.sh@410 -- # local spdk_guid
00:26:53.210    17:10:45	-- scripts/common.sh@412 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]]
00:26:53.210    17:10:45	-- scripts/common.sh@414 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h
00:26:53.210    17:10:45	-- scripts/common.sh@415 -- # IFS='()'
00:26:53.210    17:10:45	-- scripts/common.sh@415 -- # read -r _ spdk_guid _
00:26:53.210     17:10:45	-- scripts/common.sh@415 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h
00:26:53.210    17:10:45	-- scripts/common.sh@416 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c
00:26:53.210    17:10:45	-- scripts/common.sh@416 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c
00:26:53.210    17:10:45	-- scripts/common.sh@418 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c
00:26:53.210   17:10:45	-- bdev/blockdev.sh@128 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c
00:26:53.210    17:10:45	-- bdev/blockdev.sh@129 -- # get_spdk_gpt
00:26:53.210    17:10:45	-- scripts/common.sh@422 -- # local spdk_guid
00:26:53.210    17:10:45	-- scripts/common.sh@424 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]]
00:26:53.210    17:10:45	-- scripts/common.sh@426 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h
00:26:53.210    17:10:45	-- scripts/common.sh@427 -- # IFS='()'
00:26:53.210    17:10:45	-- scripts/common.sh@427 -- # read -r _ spdk_guid _
00:26:53.210     17:10:45	-- scripts/common.sh@427 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h
00:26:53.210    17:10:45	-- scripts/common.sh@428 -- # spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b
00:26:53.210    17:10:45	-- scripts/common.sh@428 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b
00:26:53.210    17:10:45	-- scripts/common.sh@430 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b
00:26:53.210   17:10:45	-- bdev/blockdev.sh@129 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b
00:26:53.210   17:10:45	-- bdev/blockdev.sh@130 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1
00:26:54.145  The operation has completed successfully.
00:26:54.145   17:10:46	-- bdev/blockdev.sh@131 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1
00:26:55.077  The operation has completed successfully.
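The two partitions are then retyped with SPDK's own GPT type GUIDs so the gpt vbdev module will claim them: partition 1 gets SPDK_GPT_PART_TYPE_GUID, partition 2 the old-style GUID, each with a fixed unique GUID the later assertions expect. The GUIDs are scraped out of gpt.h; the exact substitutions below are reconstructed from the intermediate values in the trace:

    gpt_h=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h
    # Grab the macro arguments, e.g. (0x6527994e, 0x2c5a, 0x4eec, 0x9613, 0x8f5944074e8b)
    IFS='()' read -r _ spdk_guid _ < <(grep -w SPDK_GPT_PART_TYPE_GUID "$gpt_h")
    spdk_guid=${spdk_guid//, /-}   # 0x6527994e-0x2c5a-... (matches the traced intermediate)
    spdk_guid=${spdk_guid//0x/}    # 6527994e-2c5a-4eec-9613-8f5944074e8b

    sgdisk -t 1:"$spdk_guid" -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1
    sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c \
           -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1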
00:26:55.077   17:10:47	-- bdev/blockdev.sh@132 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:26:55.640  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev
00:26:55.898  0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic
00:26:56.463   17:10:49	-- bdev/blockdev.sh@133 -- # rpc_cmd bdev_get_bdevs
00:26:56.463   17:10:49	-- common/autotest_common.sh@561 -- # xtrace_disable
00:26:56.463   17:10:49	-- common/autotest_common.sh@10 -- # set +x
00:26:56.464  []
00:26:56.464   17:10:49	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:56.464   17:10:49	-- bdev/blockdev.sh@134 -- # setup_nvme_conf
00:26:56.464   17:10:49	-- bdev/blockdev.sh@79 -- # local json
00:26:56.464   17:10:49	-- bdev/blockdev.sh@80 -- # mapfile -t json
00:26:56.464    17:10:49	-- bdev/blockdev.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
00:26:56.722   17:10:49	-- bdev/blockdev.sh@81 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:06.0" } } ] }'\'''
00:26:56.722   17:10:49	-- common/autotest_common.sh@561 -- # xtrace_disable
00:26:56.722   17:10:49	-- common/autotest_common.sh@10 -- # set +x
00:26:56.722   17:10:49	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:56.722   17:10:49	-- bdev/blockdev.sh@735 -- # rpc_cmd bdev_wait_for_examine
00:26:56.722   17:10:49	-- common/autotest_common.sh@561 -- # xtrace_disable
00:26:56.722   17:10:49	-- common/autotest_common.sh@10 -- # set +x
00:26:56.722   17:10:49	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:56.722   17:10:49	-- bdev/blockdev.sh@738 -- # cat
00:26:56.722    17:10:49	-- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n accel
00:26:56.722    17:10:49	-- common/autotest_common.sh@561 -- # xtrace_disable
00:26:56.722    17:10:49	-- common/autotest_common.sh@10 -- # set +x
00:26:56.722    17:10:49	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:56.722    17:10:49	-- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n bdev
00:26:56.722    17:10:49	-- common/autotest_common.sh@561 -- # xtrace_disable
00:26:56.722    17:10:49	-- common/autotest_common.sh@10 -- # set +x
00:26:56.722    17:10:49	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:56.722    17:10:49	-- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n iobuf
00:26:56.722    17:10:49	-- common/autotest_common.sh@561 -- # xtrace_disable
00:26:56.722    17:10:49	-- common/autotest_common.sh@10 -- # set +x
00:26:56.722    17:10:49	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:56.722   17:10:49	-- bdev/blockdev.sh@746 -- # mapfile -t bdevs
00:26:56.722    17:10:49	-- bdev/blockdev.sh@746 -- # rpc_cmd bdev_get_bdevs
00:26:56.722    17:10:49	-- common/autotest_common.sh@561 -- # xtrace_disable
00:26:56.722    17:10:49	-- bdev/blockdev.sh@746 -- # jq -r '.[] | select(.claimed == false)'
00:26:56.722    17:10:49	-- common/autotest_common.sh@10 -- # set +x
00:26:56.722    17:10:49	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:56.722   17:10:49	-- bdev/blockdev.sh@747 -- # mapfile -t bdevs_name
00:26:56.722    17:10:49	-- bdev/blockdev.sh@747 -- # jq -r .name
00:26:56.722    17:10:49	-- bdev/blockdev.sh@747 -- # printf '%s\n' '{' '  "name": "Nvme0n1p1",' '  "aliases": [' '    "6f89f330-603b-4116-ac73-2ca8eae53030"' '  ],' '  "product_name": "GPT Disk",' '  "block_size": 4096,' '  "num_blocks": 655104,' '  "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "write_zeroes": true,' '    "flush": true,' '    "reset": true,' '    "compare": true,' '    "compare_and_write": false,' '    "abort": true,' '    "nvme_admin": false,' '    "nvme_io": false' '  },' '  "driver_specific": {' '    "gpt": {' '      "base_bdev": "Nvme0n1",' '      "offset_blocks": 256,' '      "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' '      "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' '      "partition_name": "SPDK_TEST_first"' '    }' '  }' '}' '{' '  "name": "Nvme0n1p2",' '  "aliases": [' '    "abf1734f-66e5-4c0f-aa29-4021d4d307df"' '  ],' '  "product_name": "GPT Disk",' '  "block_size": 4096,' '  "num_blocks": 655103,' '  "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "write_zeroes": true,' '    "flush": true,' '    "reset": true,' '    "compare": true,' '    "compare_and_write": false,' '    "abort": true,' '    "nvme_admin": false,' '    "nvme_io": false' '  },' '  "driver_specific": {' '    "gpt": {' '      "base_bdev": "Nvme0n1",' '      "offset_blocks": 655360,' '      "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' '      "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' '      "partition_name": "SPDK_TEST_second"' '    }' '  }' '}'
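After the target restart, Nvme0 is re-attached from gen_nvme.sh's JSON and the two GPT halves surface as the Nvme0n1p1/Nvme0n1p2 bdevs dumped above (655104 and 655103 blocks; the tail of the disk stays reserved for the backup GPT). The harness then collects every unclaimed bdev and picks the first as the hello-world target; a sketch, where rpc_cmd is the test framework's RPC wrapper:

    mapfile -t bdevs < <(rpc_cmd bdev_get_bdevs | jq -r '.[] | select(.claimed == false)')
    mapfile -t bdevs_name < <(printf '%s\n' "${bdevs[@]}" | jq -r .name)
    hello_world_bdev=${bdevs_name[0]}   # Nvme0n1p1 in this run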
00:26:56.981   17:10:49	-- bdev/blockdev.sh@748 -- # bdev_list=("${bdevs_name[@]}")
00:26:56.981   17:10:49	-- bdev/blockdev.sh@750 -- # hello_world_bdev=Nvme0n1p1
00:26:56.981   17:10:49	-- bdev/blockdev.sh@751 -- # trap - SIGINT SIGTERM EXIT
00:26:56.981   17:10:49	-- bdev/blockdev.sh@752 -- # killprocess 146702
00:26:56.981   17:10:49	-- common/autotest_common.sh@936 -- # '[' -z 146702 ']'
00:26:56.981   17:10:49	-- common/autotest_common.sh@940 -- # kill -0 146702
00:26:56.981    17:10:49	-- common/autotest_common.sh@941 -- # uname
00:26:56.981   17:10:49	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:26:56.981    17:10:49	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 146702
00:26:56.981   17:10:49	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:26:56.981   17:10:49	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:26:56.982   17:10:49	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 146702'
00:26:56.982  killing process with pid 146702
00:26:56.982   17:10:49	-- common/autotest_common.sh@955 -- # kill 146702
00:26:56.982   17:10:49	-- common/autotest_common.sh@960 -- # wait 146702
00:26:57.243   17:10:50	-- bdev/blockdev.sh@756 -- # trap cleanup SIGINT SIGTERM EXIT
00:26:57.243   17:10:50	-- bdev/blockdev.sh@758 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1p1 ''
00:26:57.243   17:10:50	-- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']'
00:26:57.243   17:10:50	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:26:57.243   17:10:50	-- common/autotest_common.sh@10 -- # set +x
00:26:57.243  ************************************
00:26:57.243  START TEST bdev_hello_world
00:26:57.243  ************************************
00:26:57.243   17:10:50	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1p1 ''
00:26:57.243  [2024-11-19 17:10:50.091596] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:26:57.243  [2024-11-19 17:10:50.091892] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid147115 ]
00:26:57.501  [2024-11-19 17:10:50.246653] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:57.501  [2024-11-19 17:10:50.289173] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:26:57.760  [2024-11-19 17:10:50.480076] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application
00:26:57.760  [2024-11-19 17:10:50.480147] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1p1
00:26:57.760  [2024-11-19 17:10:50.480219] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel
00:26:57.760  [2024-11-19 17:10:50.482555] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev
00:26:57.760  [2024-11-19 17:10:50.483194] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully
00:26:57.760  [2024-11-19 17:10:50.483249] hello_bdev.c:  84:hello_read: *NOTICE*: Reading io
00:26:57.760  [2024-11-19 17:10:50.483492] hello_bdev.c:  65:read_complete: *NOTICE*: Read string from bdev : Hello World!
00:26:57.760  
00:26:57.760  [2024-11-19 17:10:50.483543] hello_bdev.c:  74:read_complete: *NOTICE*: Stopping app
00:26:58.019  ************************************
00:26:58.019  END TEST bdev_hello_world
00:26:58.019  ************************************
00:26:58.019  
00:26:58.019  real	0m0.711s
00:26:58.019  user	0m0.408s
00:26:58.019  sys	0m0.204s
00:26:58.019   17:10:50	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:26:58.019   17:10:50	-- common/autotest_common.sh@10 -- # set +x
00:26:58.019   17:10:50	-- bdev/blockdev.sh@759 -- # run_test bdev_bounds bdev_bounds ''
00:26:58.019   17:10:50	-- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']'
00:26:58.019   17:10:50	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:26:58.019   17:10:50	-- common/autotest_common.sh@10 -- # set +x
00:26:58.019  ************************************
00:26:58.019  START TEST bdev_bounds
00:26:58.019  ************************************
00:26:58.019   17:10:50	-- common/autotest_common.sh@1114 -- # bdev_bounds ''
00:26:58.019   17:10:50	-- bdev/blockdev.sh@288 -- # bdevio_pid=147140
00:26:58.019   17:10:50	-- bdev/blockdev.sh@289 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT
00:26:58.019  Process bdevio pid: 147140
00:26:58.019   17:10:50	-- bdev/blockdev.sh@290 -- # echo 'Process bdevio pid: 147140'
00:26:58.019   17:10:50	-- bdev/blockdev.sh@291 -- # waitforlisten 147140
00:26:58.019   17:10:50	-- bdev/blockdev.sh@287 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json ''
00:26:58.019   17:10:50	-- common/autotest_common.sh@829 -- # '[' -z 147140 ']'
00:26:58.019   17:10:50	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:26:58.019   17:10:50	-- common/autotest_common.sh@834 -- # local max_retries=100
00:26:58.019  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:26:58.019   17:10:50	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:26:58.019   17:10:50	-- common/autotest_common.sh@838 -- # xtrace_disable
00:26:58.019   17:10:50	-- common/autotest_common.sh@10 -- # set +x
00:26:58.019  [2024-11-19 17:10:50.862473] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:26:58.019  [2024-11-19 17:10:50.862719] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid147140 ]
00:26:58.279  [2024-11-19 17:10:51.025051] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3
00:26:58.279  [2024-11-19 17:10:51.076877] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:26:58.279  [2024-11-19 17:10:51.076943] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:26:58.279  [2024-11-19 17:10:51.076942] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:26:59.218   17:10:51	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:26:59.218   17:10:51	-- common/autotest_common.sh@862 -- # return 0
00:26:59.218   17:10:51	-- bdev/blockdev.sh@292 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests
00:26:59.218  I/O targets:
00:26:59.218    Nvme0n1p1: 655104 blocks of 4096 bytes (2559 MiB)
00:26:59.218    Nvme0n1p2: 655103 blocks of 4096 bytes (2559 MiB)
00:26:59.218  
00:26:59.218  
00:26:59.218       CUnit - A unit testing framework for C - Version 2.1-3
00:26:59.218       http://cunit.sourceforge.net/
00:26:59.218  
00:26:59.218  
00:26:59.218  Suite: bdevio tests on: Nvme0n1p2
00:26:59.218    Test: blockdev write read block ...passed
00:26:59.218    Test: blockdev write zeroes read block ...passed
00:26:59.218    Test: blockdev write zeroes read no split ...passed
00:26:59.218    Test: blockdev write zeroes read split ...passed
00:26:59.219    Test: blockdev write zeroes read split partial ...passed
00:26:59.219    Test: blockdev reset ...[2024-11-19 17:10:51.950095] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller
00:26:59.219  [2024-11-19 17:10:51.952245] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:26:59.219  passed
00:26:59.219    Test: blockdev write read 8 blocks ...passed
00:26:59.219    Test: blockdev write read size > 128k ...passed
00:26:59.219    Test: blockdev write read invalid size ...passed
00:26:59.219    Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:26:59.219    Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:26:59.219    Test: blockdev write read max offset ...passed
00:26:59.219    Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:26:59.219    Test: blockdev writev readv 8 blocks ...passed
00:26:59.219    Test: blockdev writev readv 30 x 1block ...passed
00:26:59.219    Test: blockdev writev readv block ...passed
00:26:59.219    Test: blockdev writev readv size > 128k ...passed
00:26:59.219    Test: blockdev writev readv size > 128k in two iovs ...passed
00:26:59.219    Test: blockdev comparev and writev ...[2024-11-19 17:10:51.960466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x3860b000 len:0x1000
00:26:59.219  [2024-11-19 17:10:51.960691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1
00:26:59.219  passed
00:26:59.219    Test: blockdev nvme passthru rw ...passed
00:26:59.219    Test: blockdev nvme passthru vendor specific ...passed
00:26:59.219    Test: blockdev nvme admin passthru ...passed
00:26:59.219    Test: blockdev copy ...passed
00:26:59.219  Suite: bdevio tests on: Nvme0n1p1
00:26:59.219    Test: blockdev write read block ...passed
00:26:59.219    Test: blockdev write zeroes read block ...passed
00:26:59.219    Test: blockdev write zeroes read no split ...passed
00:26:59.219    Test: blockdev write zeroes read split ...passed
00:26:59.219    Test: blockdev write zeroes read split partial ...passed
00:26:59.219    Test: blockdev reset ...[2024-11-19 17:10:51.974711] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller
00:26:59.219  [2024-11-19 17:10:51.976886] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:26:59.219  passed
00:26:59.219    Test: blockdev write read 8 blocks ...passed
00:26:59.219    Test: blockdev write read size > 128k ...passed
00:26:59.219    Test: blockdev write read invalid size ...passed
00:26:59.219    Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:26:59.219    Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:26:59.219    Test: blockdev write read max offset ...passed
00:26:59.219    Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:26:59.219    Test: blockdev writev readv 8 blocks ...passed
00:26:59.219    Test: blockdev writev readv 30 x 1block ...passed
00:26:59.219    Test: blockdev writev readv block ...passed
00:26:59.219    Test: blockdev writev readv size > 128k ...passed
00:26:59.219    Test: blockdev writev readv size > 128k in two iovs ...passed
00:26:59.219    Test: blockdev comparev and writev ...[2024-11-19 17:10:51.983773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x3860d000 len:0x1000
00:26:59.219  [2024-11-19 17:10:51.983903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1
00:26:59.219  passed
00:26:59.219    Test: blockdev nvme passthru rw ...passed
00:26:59.219    Test: blockdev nvme passthru vendor specific ...passed
00:26:59.219    Test: blockdev nvme admin passthru ...passed
00:26:59.219    Test: blockdev copy ...passed
00:26:59.219  
00:26:59.219  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:26:59.219                suites      2      2    n/a      0        0
00:26:59.219                 tests     46     46     46      0        0
00:26:59.219               asserts    284    284    284      0      n/a
00:26:59.219  
00:26:59.219  Elapsed time =    0.111 seconds
00:26:59.219  0
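The bdevio run above covers both GPT partitions: 2 suites, 46 tests, 284 asserts, all passing in ~0.11 s. The COMPARE FAILURE notices are not test failures — the "comparev and writev" cases still report passed, so those appear to be the deliberately exercised error path. A minimal standalone reproduction of the same flow, assuming the repo layout of this job (the sleep is a simplification of the socket-wait loop traced above):

    # Run bdevio against the bdevs in bdev.json; -w makes it wait for
    # tests.py to trigger the suites over RPC, as blockdev.sh does above.
    cd /home/vagrant/spdk_repo/spdk
    ./test/bdev/bdevio/bdevio -w -s 0 --json ./test/bdev/bdev.json &
    bdevio_pid=$!
    sleep 2   # stand-in for waitforlisten on /var/tmp/spdk.sock
    ./test/bdev/bdevio/tests.py perform_tests
    kill $bdevio_pid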
00:26:59.219   17:10:51	-- bdev/blockdev.sh@293 -- # killprocess 147140
00:26:59.219   17:10:51	-- common/autotest_common.sh@936 -- # '[' -z 147140 ']'
00:26:59.219   17:10:51	-- common/autotest_common.sh@940 -- # kill -0 147140
00:26:59.219    17:10:51	-- common/autotest_common.sh@941 -- # uname
00:26:59.219   17:10:52	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:26:59.219    17:10:52	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 147140
00:26:59.219   17:10:52	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:26:59.219   17:10:52	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:26:59.219  killing process with pid 147140
00:26:59.219   17:10:52	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 147140'
00:26:59.219   17:10:52	-- common/autotest_common.sh@955 -- # kill 147140
00:26:59.219   17:10:52	-- common/autotest_common.sh@960 -- # wait 147140
00:26:59.477   17:10:52	-- bdev/blockdev.sh@294 -- # trap - SIGINT SIGTERM EXIT
00:26:59.477  
00:26:59.477  real	0m1.468s
00:26:59.477  user	0m3.761s
00:26:59.477  sys	0m0.318s
00:26:59.477   17:10:52	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:26:59.477   17:10:52	-- common/autotest_common.sh@10 -- # set +x
00:26:59.477  ************************************
00:26:59.477  END TEST bdev_bounds
00:26:59.477  ************************************
00:26:59.477   17:10:52	-- bdev/blockdev.sh@760 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1p1 Nvme0n1p2' ''
00:26:59.477   17:10:52	-- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']'
00:26:59.477   17:10:52	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:26:59.477   17:10:52	-- common/autotest_common.sh@10 -- # set +x
00:26:59.736  ************************************
00:26:59.736  START TEST bdev_nbd
00:26:59.736  ************************************
00:26:59.736   17:10:52	-- common/autotest_common.sh@1114 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1p1 Nvme0n1p2' ''
00:26:59.736    17:10:52	-- bdev/blockdev.sh@298 -- # uname -s
00:26:59.736   17:10:52	-- bdev/blockdev.sh@298 -- # [[ Linux == Linux ]]
00:26:59.736   17:10:52	-- bdev/blockdev.sh@300 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:26:59.736   17:10:52	-- bdev/blockdev.sh@301 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
00:26:59.736   17:10:52	-- bdev/blockdev.sh@302 -- # bdev_all=('Nvme0n1p1' 'Nvme0n1p2')
00:26:59.736   17:10:52	-- bdev/blockdev.sh@302 -- # local bdev_all
00:26:59.736   17:10:52	-- bdev/blockdev.sh@303 -- # local bdev_num=2
00:26:59.736   17:10:52	-- bdev/blockdev.sh@307 -- # [[ -e /sys/module/nbd ]]
00:26:59.736   17:10:52	-- bdev/blockdev.sh@309 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9')
00:26:59.736   17:10:52	-- bdev/blockdev.sh@309 -- # local nbd_all
00:26:59.736   17:10:52	-- bdev/blockdev.sh@310 -- # bdev_num=2
00:26:59.736   17:10:52	-- bdev/blockdev.sh@312 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:26:59.736   17:10:52	-- bdev/blockdev.sh@312 -- # local nbd_list
00:26:59.736   17:10:52	-- bdev/blockdev.sh@313 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2')
00:26:59.736   17:10:52	-- bdev/blockdev.sh@313 -- # local bdev_list
00:26:59.736   17:10:52	-- bdev/blockdev.sh@316 -- # nbd_pid=147197
00:26:59.736   17:10:52	-- bdev/blockdev.sh@315 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json ''
00:26:59.736   17:10:52	-- bdev/blockdev.sh@317 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT
00:26:59.736   17:10:52	-- bdev/blockdev.sh@318 -- # waitforlisten 147197 /var/tmp/spdk-nbd.sock
00:26:59.736   17:10:52	-- common/autotest_common.sh@829 -- # '[' -z 147197 ']'
00:26:59.736   17:10:52	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:26:59.736   17:10:52	-- common/autotest_common.sh@834 -- # local max_retries=100
00:26:59.736   17:10:52	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:26:59.736  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:26:59.736   17:10:52	-- common/autotest_common.sh@838 -- # xtrace_disable
00:26:59.736   17:10:52	-- common/autotest_common.sh@10 -- # set +x
00:26:59.736  [2024-11-19 17:10:52.398698] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:26:59.736  [2024-11-19 17:10:52.398916] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:26:59.736  [2024-11-19 17:10:52.539195] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:59.995  [2024-11-19 17:10:52.590769] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
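For the NBD test the harness starts the lightweight bdev_svc app on a dedicated RPC socket (/var/tmp/spdk-nbd.sock) and then drives it entirely through rpc.py, keeping it separate from any target on the default socket. A minimal sketch of that handshake, mirroring the invocation traced above:

    # Start bdev_svc on its own UNIX socket, then export a bdev as /dev/nbd0.
    cd /home/vagrant/spdk_repo/spdk
    ./test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 \
        --json ./test/bdev/bdev.json &
    # Every subsequent rpc.py call targets that socket via -s.
    ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p1 /dev/nbd0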
00:27:00.564   17:10:53	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:27:00.564   17:10:53	-- common/autotest_common.sh@862 -- # return 0
00:27:00.564   17:10:53	-- bdev/blockdev.sh@320 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2'
00:27:00.564   17:10:53	-- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:27:00.564   17:10:53	-- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2')
00:27:00.564   17:10:53	-- bdev/nbd_common.sh@114 -- # local bdev_list
00:27:00.564   17:10:53	-- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2'
00:27:00.564   17:10:53	-- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:27:00.564   17:10:53	-- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2')
00:27:00.564   17:10:53	-- bdev/nbd_common.sh@23 -- # local bdev_list
00:27:00.564   17:10:53	-- bdev/nbd_common.sh@24 -- # local i
00:27:00.564   17:10:53	-- bdev/nbd_common.sh@25 -- # local nbd_device
00:27:00.564   17:10:53	-- bdev/nbd_common.sh@27 -- # (( i = 0 ))
00:27:00.564   17:10:53	-- bdev/nbd_common.sh@27 -- # (( i < 2 ))
00:27:00.564    17:10:53	-- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p1
00:27:00.823   17:10:53	-- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0
00:27:00.823    17:10:53	-- bdev/nbd_common.sh@30 -- # basename /dev/nbd0
00:27:00.823   17:10:53	-- bdev/nbd_common.sh@30 -- # waitfornbd nbd0
00:27:00.823   17:10:53	-- common/autotest_common.sh@866 -- # local nbd_name=nbd0
00:27:00.823   17:10:53	-- common/autotest_common.sh@867 -- # local i
00:27:00.823   17:10:53	-- common/autotest_common.sh@869 -- # (( i = 1 ))
00:27:00.823   17:10:53	-- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:27:00.823   17:10:53	-- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions
00:27:00.823   17:10:53	-- common/autotest_common.sh@871 -- # break
00:27:00.823   17:10:53	-- common/autotest_common.sh@882 -- # (( i = 1 ))
00:27:00.823   17:10:53	-- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:27:00.823   17:10:53	-- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:27:00.823  1+0 records in
00:27:00.823  1+0 records out
00:27:00.823  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000766557 s, 5.3 MB/s
00:27:00.823    17:10:53	-- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:27:00.823   17:10:53	-- common/autotest_common.sh@884 -- # size=4096
00:27:00.823   17:10:53	-- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:27:00.823   17:10:53	-- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:27:00.823   17:10:53	-- common/autotest_common.sh@887 -- # return 0
00:27:00.823   17:10:53	-- bdev/nbd_common.sh@27 -- # (( i++ ))
00:27:00.823   17:10:53	-- bdev/nbd_common.sh@27 -- # (( i < 2 ))
00:27:00.823    17:10:53	-- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p2
00:27:01.083   17:10:53	-- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1
00:27:01.083    17:10:53	-- bdev/nbd_common.sh@30 -- # basename /dev/nbd1
00:27:01.083   17:10:53	-- bdev/nbd_common.sh@30 -- # waitfornbd nbd1
00:27:01.083   17:10:53	-- common/autotest_common.sh@866 -- # local nbd_name=nbd1
00:27:01.083   17:10:53	-- common/autotest_common.sh@867 -- # local i
00:27:01.083   17:10:53	-- common/autotest_common.sh@869 -- # (( i = 1 ))
00:27:01.083   17:10:53	-- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:27:01.083   17:10:53	-- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions
00:27:01.083   17:10:53	-- common/autotest_common.sh@871 -- # break
00:27:01.083   17:10:53	-- common/autotest_common.sh@882 -- # (( i = 1 ))
00:27:01.083   17:10:53	-- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:27:01.083   17:10:53	-- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:27:01.083  1+0 records in
00:27:01.083  1+0 records out
00:27:01.083  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00048082 s, 8.5 MB/s
00:27:01.083    17:10:53	-- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:27:01.083   17:10:53	-- common/autotest_common.sh@884 -- # size=4096
00:27:01.083   17:10:53	-- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:27:01.083   17:10:53	-- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:27:01.083   17:10:53	-- common/autotest_common.sh@887 -- # return 0
00:27:01.083   17:10:53	-- bdev/nbd_common.sh@27 -- # (( i++ ))
00:27:01.083   17:10:53	-- bdev/nbd_common.sh@27 -- # (( i < 2 ))
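waitfornbd, traced above for nbd0 and nbd1, polls /proc/partitions for up to 20 attempts, then does a 4 KiB O_DIRECT read through the device and checks that a non-empty file landed on disk. A condensed sketch of that helper's logic — the sleep between attempts and the /tmp scratch path are assumptions, and the real helper also retries the dd itself:

    # Poll until the kernel registers the nbd device, then smoke-test a read.
    waitfornbd() {
        local nbd_name=$1 i size
        for (( i = 1; i <= 20; i++ )); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1
        done
        dd if=/dev/$nbd_name of=/tmp/nbdtest bs=4096 count=1 iflag=direct
        size=$(stat -c %s /tmp/nbdtest)
        rm -f /tmp/nbdtest
        [ "$size" != 0 ]   # non-empty read => device is live
    }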
00:27:01.084    17:10:53	-- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:27:01.344   17:10:54	-- bdev/nbd_common.sh@118 -- # nbd_disks_json='[
00:27:01.344    {
00:27:01.344      "nbd_device": "/dev/nbd0",
00:27:01.344      "bdev_name": "Nvme0n1p1"
00:27:01.344    },
00:27:01.344    {
00:27:01.344      "nbd_device": "/dev/nbd1",
00:27:01.344      "bdev_name": "Nvme0n1p2"
00:27:01.344    }
00:27:01.344  ]'
00:27:01.344   17:10:54	-- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device'))
00:27:01.344    17:10:54	-- bdev/nbd_common.sh@119 -- # echo '[
00:27:01.344    {
00:27:01.344      "nbd_device": "/dev/nbd0",
00:27:01.344      "bdev_name": "Nvme0n1p1"
00:27:01.344    },
00:27:01.344    {
00:27:01.344      "nbd_device": "/dev/nbd1",
00:27:01.344      "bdev_name": "Nvme0n1p2"
00:27:01.344    }
00:27:01.344  ]'
00:27:01.344    17:10:54	-- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device'
00:27:01.344   17:10:54	-- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:27:01.344   17:10:54	-- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:27:01.344   17:10:54	-- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:27:01.344   17:10:54	-- bdev/nbd_common.sh@50 -- # local nbd_list
00:27:01.344   17:10:54	-- bdev/nbd_common.sh@51 -- # local i
00:27:01.344   17:10:54	-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:27:01.344   17:10:54	-- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:27:01.604    17:10:54	-- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:27:01.604   17:10:54	-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:27:01.604   17:10:54	-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:27:01.604   17:10:54	-- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:27:01.604   17:10:54	-- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:27:01.604   17:10:54	-- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:27:01.604   17:10:54	-- bdev/nbd_common.sh@41 -- # break
00:27:01.604   17:10:54	-- bdev/nbd_common.sh@45 -- # return 0
00:27:01.604   17:10:54	-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:27:01.604   17:10:54	-- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:27:01.604    17:10:54	-- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:27:01.604   17:10:54	-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:27:01.604   17:10:54	-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:27:01.604   17:10:54	-- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:27:01.604   17:10:54	-- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:27:01.604   17:10:54	-- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:27:01.604   17:10:54	-- bdev/nbd_common.sh@41 -- # break
00:27:01.604   17:10:54	-- bdev/nbd_common.sh@45 -- # return 0
00:27:01.604    17:10:54	-- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:27:01.604    17:10:54	-- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:27:01.863     17:10:54	-- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:27:01.863    17:10:54	-- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:27:01.863     17:10:54	-- bdev/nbd_common.sh@64 -- # echo '[]'
00:27:01.863     17:10:54	-- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:27:01.863    17:10:54	-- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:27:01.863     17:10:54	-- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:27:01.863     17:10:54	-- bdev/nbd_common.sh@65 -- # echo ''
00:27:01.863     17:10:54	-- bdev/nbd_common.sh@65 -- # true
00:27:01.863    17:10:54	-- bdev/nbd_common.sh@65 -- # count=0
00:27:01.863    17:10:54	-- bdev/nbd_common.sh@66 -- # echo 0
00:27:01.863   17:10:54	-- bdev/nbd_common.sh@122 -- # count=0
00:27:01.863   17:10:54	-- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']'
00:27:01.863   17:10:54	-- bdev/nbd_common.sh@127 -- # return 0
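The start/stop pass above confirms symmetry: after nbd_stop_disk on both devices, nbd_get_disks returns an empty JSON array, so the grep -c /dev/nbd count is 0 and the verify returns 0. A sketch of that count check, using the same jq filter as the trace:

    # Count exported NBD devices; the list must be empty after stopping.
    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    names=$($rpc nbd_get_disks | jq -r '.[] | .nbd_device')
    count=$(echo "$names" | grep -c /dev/nbd || true)   # grep exits 1 on zero matches
    [ "$count" -eq 0 ] && echo "all NBD devices stopped"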
00:27:01.863   17:10:54	-- bdev/blockdev.sh@321 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2' '/dev/nbd0 /dev/nbd1'
00:27:01.863   17:10:54	-- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:27:01.863   17:10:54	-- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2')
00:27:01.863   17:10:54	-- bdev/nbd_common.sh@91 -- # local bdev_list
00:27:01.863   17:10:54	-- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:27:01.863   17:10:54	-- bdev/nbd_common.sh@92 -- # local nbd_list
00:27:01.863   17:10:54	-- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2' '/dev/nbd0 /dev/nbd1'
00:27:01.863   17:10:54	-- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:27:01.863   17:10:54	-- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2')
00:27:01.863   17:10:54	-- bdev/nbd_common.sh@10 -- # local bdev_list
00:27:01.863   17:10:54	-- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:27:01.863   17:10:54	-- bdev/nbd_common.sh@11 -- # local nbd_list
00:27:01.863   17:10:54	-- bdev/nbd_common.sh@12 -- # local i
00:27:01.863   17:10:54	-- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:27:01.863   17:10:54	-- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:27:01.863   17:10:54	-- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p1 /dev/nbd0
00:27:02.122  /dev/nbd0
00:27:02.122    17:10:54	-- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:27:02.122   17:10:54	-- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:27:02.122   17:10:54	-- common/autotest_common.sh@866 -- # local nbd_name=nbd0
00:27:02.122   17:10:54	-- common/autotest_common.sh@867 -- # local i
00:27:02.122   17:10:54	-- common/autotest_common.sh@869 -- # (( i = 1 ))
00:27:02.122   17:10:54	-- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:27:02.122   17:10:54	-- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions
00:27:02.122   17:10:54	-- common/autotest_common.sh@871 -- # break
00:27:02.122   17:10:54	-- common/autotest_common.sh@882 -- # (( i = 1 ))
00:27:02.122   17:10:54	-- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:27:02.122   17:10:54	-- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:27:02.122  1+0 records in
00:27:02.122  1+0 records out
00:27:02.122  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000679175 s, 6.0 MB/s
00:27:02.122    17:10:54	-- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:27:02.122   17:10:54	-- common/autotest_common.sh@884 -- # size=4096
00:27:02.122   17:10:54	-- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:27:02.122   17:10:54	-- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:27:02.122   17:10:54	-- common/autotest_common.sh@887 -- # return 0
00:27:02.122   17:10:54	-- bdev/nbd_common.sh@14 -- # (( i++ ))
00:27:02.122   17:10:54	-- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:27:02.122   17:10:54	-- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p2 /dev/nbd1
00:27:02.380  /dev/nbd1
00:27:02.380    17:10:55	-- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:27:02.380   17:10:55	-- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:27:02.380   17:10:55	-- common/autotest_common.sh@866 -- # local nbd_name=nbd1
00:27:02.380   17:10:55	-- common/autotest_common.sh@867 -- # local i
00:27:02.380   17:10:55	-- common/autotest_common.sh@869 -- # (( i = 1 ))
00:27:02.380   17:10:55	-- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:27:02.380   17:10:55	-- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions
00:27:02.380   17:10:55	-- common/autotest_common.sh@871 -- # break
00:27:02.380   17:10:55	-- common/autotest_common.sh@882 -- # (( i = 1 ))
00:27:02.380   17:10:55	-- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:27:02.380   17:10:55	-- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:27:02.380  1+0 records in
00:27:02.380  1+0 records out
00:27:02.380  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000632292 s, 6.5 MB/s
00:27:02.380    17:10:55	-- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:27:02.380   17:10:55	-- common/autotest_common.sh@884 -- # size=4096
00:27:02.380   17:10:55	-- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:27:02.380   17:10:55	-- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:27:02.380   17:10:55	-- common/autotest_common.sh@887 -- # return 0
00:27:02.380   17:10:55	-- bdev/nbd_common.sh@14 -- # (( i++ ))
00:27:02.380   17:10:55	-- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:27:02.380    17:10:55	-- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:27:02.639    17:10:55	-- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:27:02.639     17:10:55	-- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:27:02.639    17:10:55	-- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:27:02.639    {
00:27:02.639      "nbd_device": "/dev/nbd0",
00:27:02.639      "bdev_name": "Nvme0n1p1"
00:27:02.639    },
00:27:02.639    {
00:27:02.639      "nbd_device": "/dev/nbd1",
00:27:02.639      "bdev_name": "Nvme0n1p2"
00:27:02.639    }
00:27:02.639  ]'
00:27:02.639     17:10:55	-- bdev/nbd_common.sh@64 -- # echo '[
00:27:02.639    {
00:27:02.639      "nbd_device": "/dev/nbd0",
00:27:02.639      "bdev_name": "Nvme0n1p1"
00:27:02.639    },
00:27:02.639    {
00:27:02.639      "nbd_device": "/dev/nbd1",
00:27:02.639      "bdev_name": "Nvme0n1p2"
00:27:02.639    }
00:27:02.639  ]'
00:27:02.639     17:10:55	-- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:27:02.639    17:10:55	-- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:27:02.639  /dev/nbd1'
00:27:02.639     17:10:55	-- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:27:02.639  /dev/nbd1'
00:27:02.639     17:10:55	-- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:27:02.639    17:10:55	-- bdev/nbd_common.sh@65 -- # count=2
00:27:02.639    17:10:55	-- bdev/nbd_common.sh@66 -- # echo 2
00:27:02.639   17:10:55	-- bdev/nbd_common.sh@95 -- # count=2
00:27:02.639   17:10:55	-- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:27:02.639   17:10:55	-- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:27:02.639   17:10:55	-- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:27:02.639   17:10:55	-- bdev/nbd_common.sh@70 -- # local nbd_list
00:27:02.639   17:10:55	-- bdev/nbd_common.sh@71 -- # local operation=write
00:27:02.639   17:10:55	-- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
00:27:02.639   17:10:55	-- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:27:02.639   17:10:55	-- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256
00:27:02.639  256+0 records in
00:27:02.639  256+0 records out
00:27:02.639  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00495218 s, 212 MB/s
00:27:02.639   17:10:55	-- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:27:02.639   17:10:55	-- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:27:02.897  256+0 records in
00:27:02.897  256+0 records out
00:27:02.897  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.076782 s, 13.7 MB/s
00:27:02.897   17:10:55	-- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:27:02.897   17:10:55	-- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:27:02.897  256+0 records in
00:27:02.897  256+0 records out
00:27:02.897  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0714052 s, 14.7 MB/s
00:27:02.897   17:10:55	-- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:27:02.897   17:10:55	-- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:27:02.897   17:10:55	-- bdev/nbd_common.sh@70 -- # local nbd_list
00:27:02.897   17:10:55	-- bdev/nbd_common.sh@71 -- # local operation=verify
00:27:02.897   17:10:55	-- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
00:27:02.897   17:10:55	-- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:27:02.897   17:10:55	-- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:27:02.897   17:10:55	-- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:27:02.897   17:10:55	-- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0
00:27:02.897   17:10:55	-- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:27:02.897   17:10:55	-- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1
00:27:02.897   17:10:55	-- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
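The data pass writes one identical 1 MiB random payload through each NBD device with O_DIRECT, then byte-compares the first 1 MiB of every device against the source file. A minimal sketch of that write/verify cycle, assuming both devices are already exported as above:

    # Write the same random payload to both devices, then verify byte-for-byte.
    tmp=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
    dd if=/dev/urandom of=$tmp bs=4096 count=256
    for dev in /dev/nbd0 /dev/nbd1; do
        dd if=$tmp of=$dev bs=4096 count=256 oflag=direct
    done
    for dev in /dev/nbd0 /dev/nbd1; do
        cmp -b -n 1M $tmp $dev   # non-zero exit on the first differing byte
    done
    rm $tmp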
00:27:02.897   17:10:55	-- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:27:02.897   17:10:55	-- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:27:02.897   17:10:55	-- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:27:02.897   17:10:55	-- bdev/nbd_common.sh@50 -- # local nbd_list
00:27:02.898   17:10:55	-- bdev/nbd_common.sh@51 -- # local i
00:27:02.898   17:10:55	-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:27:02.898   17:10:55	-- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:27:03.156    17:10:55	-- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:27:03.156   17:10:55	-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:27:03.156   17:10:55	-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:27:03.156   17:10:55	-- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:27:03.156   17:10:55	-- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:27:03.156   17:10:55	-- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:27:03.156   17:10:55	-- bdev/nbd_common.sh@41 -- # break
00:27:03.156   17:10:55	-- bdev/nbd_common.sh@45 -- # return 0
00:27:03.156   17:10:55	-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:27:03.156   17:10:55	-- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:27:03.416    17:10:56	-- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:27:03.416   17:10:56	-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:27:03.416   17:10:56	-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:27:03.416   17:10:56	-- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:27:03.416   17:10:56	-- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:27:03.416   17:10:56	-- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:27:03.416   17:10:56	-- bdev/nbd_common.sh@41 -- # break
00:27:03.416   17:10:56	-- bdev/nbd_common.sh@45 -- # return 0
00:27:03.416    17:10:56	-- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:27:03.416    17:10:56	-- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:27:03.416     17:10:56	-- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:27:03.674    17:10:56	-- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:27:03.675     17:10:56	-- bdev/nbd_common.sh@64 -- # echo '[]'
00:27:03.675     17:10:56	-- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:27:03.675    17:10:56	-- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:27:03.675     17:10:56	-- bdev/nbd_common.sh@65 -- # echo ''
00:27:03.675     17:10:56	-- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:27:03.675     17:10:56	-- bdev/nbd_common.sh@65 -- # true
00:27:03.675    17:10:56	-- bdev/nbd_common.sh@65 -- # count=0
00:27:03.675    17:10:56	-- bdev/nbd_common.sh@66 -- # echo 0
00:27:03.675   17:10:56	-- bdev/nbd_common.sh@104 -- # count=0
00:27:03.675   17:10:56	-- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:27:03.675   17:10:56	-- bdev/nbd_common.sh@109 -- # return 0
00:27:03.675   17:10:56	-- bdev/blockdev.sh@322 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:27:03.675   17:10:56	-- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:27:03.675   17:10:56	-- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:27:03.675   17:10:56	-- bdev/nbd_common.sh@132 -- # local nbd_list
00:27:03.675   17:10:56	-- bdev/nbd_common.sh@133 -- # local mkfs_ret
00:27:03.675   17:10:56	-- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512
00:27:03.934  malloc_lvol_verify
00:27:03.934   17:10:56	-- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs
00:27:04.193  1be93c55-8373-4be1-8abf-c7a24b3dc40e
00:27:04.193   17:10:56	-- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs
00:27:04.452  e17b9ba8-0760-4e36-80d2-026b20ffe804
00:27:04.452   17:10:57	-- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0
00:27:04.711  /dev/nbd0
00:27:04.711   17:10:57	-- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0
00:27:04.711  mke2fs 1.46.5 (30-Dec-2021)
00:27:04.711  Discarding device blocks:    0/1024         done                            
00:27:04.711  Creating filesystem with 1024 4k blocks and 1024 inodes
00:27:04.711  
00:27:04.711  Allocating group tables: 0/1   done                            
00:27:04.711  Writing inode tables: 0/1   done                            
00:27:04.711  
00:27:04.711  Filesystem too small for a journal
00:27:04.711  Writing superblocks and filesystem accounting information: 0/1   done
00:27:04.711  
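nbd_with_lvol_verify stacks the full chain: a 16 MiB malloc bdev with 512 B blocks, an lvstore on top, a 4 MiB logical volume, an NBD export, and finally mkfs.ext4. The "Filesystem too small for a journal" line is expected output for a 1024-block filesystem, not an error. A sketch of the same chain, mirroring the RPC calls traced above:

    # Build malloc -> lvstore -> lvol -> /dev/nbd0, then format it.
    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    $rpc bdev_malloc_create -b malloc_lvol_verify 16 512   # 16 MiB, 512 B blocks
    $rpc bdev_lvol_create_lvstore malloc_lvol_verify lvs
    $rpc bdev_lvol_create lvol 4 -l lvs                    # 4 MiB logical volume
    $rpc nbd_start_disk lvs/lvol /dev/nbd0
    mkfs.ext4 /dev/nbd0
    $rpc nbd_stop_disk /dev/nbd0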
00:27:04.711   17:10:57	-- bdev/nbd_common.sh@141 -- # mkfs_ret=0
00:27:04.711   17:10:57	-- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0
00:27:04.711   17:10:57	-- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:27:04.711   17:10:57	-- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:27:04.711   17:10:57	-- bdev/nbd_common.sh@50 -- # local nbd_list
00:27:04.711   17:10:57	-- bdev/nbd_common.sh@51 -- # local i
00:27:04.711   17:10:57	-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:27:04.711   17:10:57	-- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:27:04.970    17:10:57	-- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:27:04.970   17:10:57	-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:27:04.970   17:10:57	-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:27:04.970   17:10:57	-- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:27:04.970   17:10:57	-- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:27:04.970   17:10:57	-- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:27:04.970   17:10:57	-- bdev/nbd_common.sh@41 -- # break
00:27:04.970   17:10:57	-- bdev/nbd_common.sh@45 -- # return 0
00:27:04.970   17:10:57	-- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']'
00:27:04.970   17:10:57	-- bdev/nbd_common.sh@147 -- # return 0
00:27:04.970   17:10:57	-- bdev/blockdev.sh@324 -- # killprocess 147197
00:27:04.970   17:10:57	-- common/autotest_common.sh@936 -- # '[' -z 147197 ']'
00:27:04.970   17:10:57	-- common/autotest_common.sh@940 -- # kill -0 147197
00:27:04.970    17:10:57	-- common/autotest_common.sh@941 -- # uname
00:27:04.970   17:10:57	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:27:04.970    17:10:57	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 147197
00:27:04.970   17:10:57	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:27:04.970   17:10:57	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:27:04.970   17:10:57	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 147197'
00:27:04.970  killing process with pid 147197
00:27:04.970   17:10:57	-- common/autotest_common.sh@955 -- # kill 147197
00:27:04.970   17:10:57	-- common/autotest_common.sh@960 -- # wait 147197
00:27:05.282   17:10:57	-- bdev/blockdev.sh@325 -- # trap - SIGINT SIGTERM EXIT
00:27:05.282  
00:27:05.282  real	0m5.582s
00:27:05.282  user	0m8.196s
00:27:05.282  sys	0m1.672s
00:27:05.282   17:10:57	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:27:05.282   17:10:57	-- common/autotest_common.sh@10 -- # set +x
00:27:05.282  ************************************
00:27:05.282  END TEST bdev_nbd
00:27:05.282  ************************************
00:27:05.282   17:10:57	-- bdev/blockdev.sh@761 -- # [[ y == y ]]
00:27:05.282   17:10:57	-- bdev/blockdev.sh@762 -- # '[' gpt = nvme ']'
00:27:05.282   17:10:57	-- bdev/blockdev.sh@762 -- # '[' gpt = gpt ']'
00:27:05.282   17:10:57	-- bdev/blockdev.sh@764 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.'
00:27:05.282  skipping fio tests on NVMe due to multi-ns failures.
00:27:05.282   17:10:57	-- bdev/blockdev.sh@773 -- # trap cleanup SIGINT SIGTERM EXIT
00:27:05.282   17:10:57	-- bdev/blockdev.sh@775 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''
00:27:05.282   17:10:57	-- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']'
00:27:05.282   17:10:57	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:27:05.282   17:10:57	-- common/autotest_common.sh@10 -- # set +x
00:27:05.282  ************************************
00:27:05.282  START TEST bdev_verify
00:27:05.282  ************************************
00:27:05.282   17:10:57	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''
00:27:05.282  [2024-11-19 17:10:58.056813] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:27:05.282  [2024-11-19 17:10:58.057268] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid147434 ]
00:27:05.556  [2024-11-19 17:10:58.213784] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2
00:27:05.556  [2024-11-19 17:10:58.258808] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:27:05.556  [2024-11-19 17:10:58.258814] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:27:05.813  Running I/O for 5 seconds...
00:27:11.078  
00:27:11.078                                                                                                  Latency(us)
00:27:11.078  
[2024-11-19T17:11:03.942Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:27:11.078  
[2024-11-19T17:11:03.942Z]  Job: Nvme0n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:27:11.078  	 Verification LBA range: start 0x0 length 0x4ff80
00:27:11.078  	 Nvme0n1p1           :       5.01    8204.77      32.05       0.00     0.00   15560.92    1622.80   21096.35
00:27:11.078  
[2024-11-19T17:11:03.942Z]  Job: Nvme0n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:27:11.078  	 Verification LBA range: start 0x4ff80 length 0x4ff80
00:27:11.078  	 Nvme0n1p1           :       5.02    8202.23      32.04       0.00     0.00   15565.54    1146.88   22719.15
00:27:11.078  
[2024-11-19T17:11:03.942Z]  Job: Nvme0n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:27:11.078  	 Verification LBA range: start 0x0 length 0x4ff7f
00:27:11.078  	 Nvme0n1p2           :       5.02    8209.92      32.07       0.00     0.00   15543.67     345.23   19473.55
00:27:11.078  
[2024-11-19T17:11:03.942Z]  Job: Nvme0n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:27:11.078  	 Verification LBA range: start 0x4ff7f length 0x4ff7f
00:27:11.078  	 Nvme0n1p2           :       5.02    8200.07      32.03       0.00     0.00   15551.89    1716.42   19473.55
00:27:11.078  
[2024-11-19T17:11:03.942Z]  ===================================================================================================================
00:27:11.078  
[2024-11-19T17:11:03.942Z]  Total                       :              32816.98     128.19       0.00     0.00   15555.50     345.23   22719.15
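The verify numbers are internally consistent: four jobs at ~8.2k IOPS each give the 32.8k total, and with queue depth 128 Little's law predicts 128 / 8200 ≈ 15.6 ms average latency, matching the 15555 µs in the table. The workload above is runnable standalone with the exact flags from the trace:

    # 4 KiB verify workload, queue depth 128, 5 s, two cores (mask 0x3).
    cd /home/vagrant/spdk_repo/spdk
    ./build/examples/bdevperf --json ./test/bdev/bdev.json \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3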
00:27:15.269  ************************************
00:27:15.269  END TEST bdev_verify
00:27:15.269  ************************************
00:27:15.269  
00:27:15.269  real	0m9.886s
00:27:15.269  user	0m18.986s
00:27:15.269  sys	0m0.269s
00:27:15.269   17:11:07	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:27:15.269   17:11:07	-- common/autotest_common.sh@10 -- # set +x
00:27:15.269   17:11:07	-- bdev/blockdev.sh@776 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:27:15.269   17:11:07	-- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']'
00:27:15.269   17:11:07	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:27:15.269   17:11:07	-- common/autotest_common.sh@10 -- # set +x
00:27:15.269  ************************************
00:27:15.269  START TEST bdev_verify_big_io
00:27:15.269  ************************************
00:27:15.269   17:11:07	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:27:15.269  [2024-11-19 17:11:08.009972] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:27:15.269  [2024-11-19 17:11:08.010429] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid147537 ]
00:27:15.528  [2024-11-19 17:11:08.166415] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2
00:27:15.528  [2024-11-19 17:11:08.219060] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:27:15.528  [2024-11-19 17:11:08.219061] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:27:15.787  Running I/O for 5 seconds...
00:27:21.061  
00:27:21.061                                                                                                  Latency(us)
00:27:21.061  
[2024-11-19T17:11:13.925Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:27:21.061  
[2024-11-19T17:11:13.925Z]  Job: Nvme0n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:27:21.061  	 Verification LBA range: start 0x0 length 0x4ff8
00:27:21.061  	 Nvme0n1p1           :       5.10     962.83      60.18       0.00     0.00  131574.41    2574.63  188743.68
00:27:21.061  
[2024-11-19T17:11:13.925Z]  Job: Nvme0n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:27:21.061  	 Verification LBA range: start 0x4ff8 length 0x4ff8
00:27:21.061  	 Nvme0n1p1           :       5.10     970.88      60.68       0.00     0.00  130193.17    3073.95  190740.97
00:27:21.061  
[2024-11-19T17:11:13.925Z]  Job: Nvme0n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:27:21.061  	 Verification LBA range: start 0x0 length 0x4ff7
00:27:21.061  	 Nvme0n1p2           :       5.10     962.43      60.15       0.00     0.00  130058.73    2699.46  147799.28
00:27:21.061  
[2024-11-19T17:11:13.925Z]  Job: Nvme0n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:27:21.061  	 Verification LBA range: start 0x4ff7 length 0x4ff7
00:27:21.061  	 Nvme0n1p2           :       5.10     978.67      61.17       0.00     0.00  127817.76     772.39  140808.78
00:27:21.061  
[2024-11-19T17:11:13.925Z]  ===================================================================================================================
00:27:21.061  
[2024-11-19T17:11:13.925Z]  Total                       :               3874.82     242.18       0.00     0.00  129902.79     772.39  190740.97
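The big-I/O variant only changes -o 4096 to -o 65536, so the MiB/s column is derivable from IOPS alone — a quick sanity check on the Total row above:

    # 64 KiB per I/O, so MiB/s = IOPS * 65536 / 1048576 = IOPS / 16.
    awk 'BEGIN { printf "%.2f MiB/s\n", 3874.82 / 16 }'   # -> 242.18 MiB/s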
00:27:21.320  ************************************
00:27:21.320  END TEST bdev_verify_big_io
00:27:21.320  ************************************
00:27:21.320  
00:27:21.320  real	0m6.213s
00:27:21.320  user	0m11.699s
00:27:21.320  sys	0m0.211s
00:27:21.320   17:11:14	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:27:21.320   17:11:14	-- common/autotest_common.sh@10 -- # set +x
00:27:21.579   17:11:14	-- bdev/blockdev.sh@777 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:27:21.579   17:11:14	-- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']'
00:27:21.579   17:11:14	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:27:21.579   17:11:14	-- common/autotest_common.sh@10 -- # set +x
00:27:21.579  ************************************
00:27:21.579  START TEST bdev_write_zeroes
00:27:21.579  ************************************
00:27:21.579   17:11:14	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:27:21.579  [2024-11-19 17:11:14.272515] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:27:21.579  [2024-11-19 17:11:14.272954] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid147633 ]
00:27:21.579  [2024-11-19 17:11:14.426969] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:27:21.836  [2024-11-19 17:11:14.470805] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:27:21.836  Running I/O for 1 seconds...
00:27:23.251  
00:27:23.251                                                                                                  Latency(us)
00:27:23.251  
[2024-11-19T17:11:16.115Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:27:23.251  
[2024-11-19T17:11:16.115Z]  Job: Nvme0n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:27:23.251  	 Nvme0n1p1           :       1.01   29052.08     113.48       0.00     0.00    4397.62    2496.61   14480.34
00:27:23.251  
[2024-11-19T17:11:16.115Z]  Job: Nvme0n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:27:23.251  	 Nvme0n1p2           :       1.01   28966.20     113.15       0.00     0.00    4404.99    2059.70   13731.35
00:27:23.251  
[2024-11-19T17:11:16.115Z]  ===================================================================================================================
00:27:23.251  
[2024-11-19T17:11:16.115Z]  Total                       :              58018.28     226.63       0.00     0.00    4401.30    2059.70   14480.34
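bdev_write_zeroes exercises the dedicated zero-fill path (-w write_zeroes, -t 1) rather than ordinary writes; both GPT bdevs advertise it, as their bdev_get_bdevs output later in this log shows ("write_zeroes": true). A sketch of that capability probe, assuming a target is serving the default RPC socket:

    # Check whether a bdev supports the write_zeroes I/O type before testing it.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b Nvme0n1p1 \
        | jq -r '.[0].supported_io_types.write_zeroes'   # prints: true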
00:27:23.251  ************************************
00:27:23.251  END TEST bdev_write_zeroes
00:27:23.251  ************************************
00:27:23.251  
00:27:23.251  real	0m1.722s
00:27:23.251  user	0m1.453s
00:27:23.251  sys	0m0.169s
00:27:23.251   17:11:15	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:27:23.251   17:11:15	-- common/autotest_common.sh@10 -- # set +x
00:27:23.251   17:11:15	-- bdev/blockdev.sh@780 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:27:23.251   17:11:15	-- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']'
00:27:23.251   17:11:15	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:27:23.251   17:11:15	-- common/autotest_common.sh@10 -- # set +x
00:27:23.251  ************************************
00:27:23.251  START TEST bdev_json_nonenclosed
00:27:23.251  ************************************
00:27:23.251   17:11:15	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:27:23.251  [2024-11-19 17:11:16.052128] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:27:23.251  [2024-11-19 17:11:16.052560] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid147678 ]
00:27:23.511  [2024-11-19 17:11:16.206745] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:27:23.511  [2024-11-19 17:11:16.248420] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:27:23.511  [2024-11-19 17:11:16.248822] json_config.c: 595:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: not enclosed in {}.
00:27:23.511  [2024-11-19 17:11:16.248965] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:27:23.769  
00:27:23.769  real	0m0.395s
00:27:23.769  user	0m0.184s
00:27:23.769  sys	0m0.110s
00:27:23.770   17:11:16	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:27:23.770   17:11:16	-- common/autotest_common.sh@10 -- # set +x
00:27:23.770  ************************************
00:27:23.770  END TEST bdev_json_nonenclosed
00:27:23.770  ************************************
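bdev_json_nonenclosed is a negative test: it feeds bdevperf a config whose subsystems are not wrapped in a top-level object, and it passes precisely because the app refuses the file ("not enclosed in {}") and stops. A sketch of the standalone check, assuming the rejection surfaces as a non-zero exit, as the spdk_app_stop warning above suggests:

    # Expect failure: the JSON config must be enclosed in a top-level {}.
    cd /home/vagrant/spdk_repo/spdk
    ./build/examples/bdevperf --json ./test/bdev/nonenclosed.json \
        -q 128 -o 4096 -w write_zeroes -t 1 \
        || echo "rejected: config not enclosed in {}"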
00:27:23.770   17:11:16	-- bdev/blockdev.sh@783 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:27:23.770   17:11:16	-- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']'
00:27:23.770   17:11:16	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:27:23.770   17:11:16	-- common/autotest_common.sh@10 -- # set +x
00:27:23.770  ************************************
00:27:23.770  START TEST bdev_json_nonarray
00:27:23.770  ************************************
00:27:23.770   17:11:16	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:27:23.770  [2024-11-19 17:11:16.503273] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:27:23.770  [2024-11-19 17:11:16.504182] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid147708 ]
00:27:24.028  [2024-11-19 17:11:16.660503] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:27:24.028  [2024-11-19 17:11:16.704474] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:27:24.028  [2024-11-19 17:11:16.704849] json_config.c: 601:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array.
00:27:24.028  [2024-11-19 17:11:16.704963] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:27:24.028  
00:27:24.028  real	0m0.395s
00:27:24.028  user	0m0.175s
00:27:24.028  sys	0m0.119s
00:27:24.028   17:11:16	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:27:24.028   17:11:16	-- common/autotest_common.sh@10 -- # set +x
00:27:24.028  ************************************
00:27:24.028  END TEST bdev_json_nonarray
00:27:24.028  ************************************
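bdev_json_nonarray covers the sibling invariant: the top-level "subsystems" key must be a JSON array, and nonarray.json violates that on purpose. A hypothetical one-liner (not part of the harness) expressing the invariant against the known-good config:

    # The invariant the test enforces: .subsystems must be an array.
    # jq -e exits 0 only if the expression evaluates to true.
    jq -e '.subsystems | type == "array"' \
        /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json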
00:27:24.287   17:11:16	-- bdev/blockdev.sh@785 -- # [[ gpt == bdev ]]
00:27:24.287   17:11:16	-- bdev/blockdev.sh@792 -- # [[ gpt == gpt ]]
00:27:24.287   17:11:16	-- bdev/blockdev.sh@793 -- # run_test bdev_gpt_uuid bdev_gpt_uuid
00:27:24.287   17:11:16	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:27:24.287   17:11:16	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:27:24.287   17:11:16	-- common/autotest_common.sh@10 -- # set +x
00:27:24.287  ************************************
00:27:24.287  START TEST bdev_gpt_uuid
00:27:24.287  ************************************
00:27:24.287   17:11:16	-- common/autotest_common.sh@1114 -- # bdev_gpt_uuid
00:27:24.287   17:11:16	-- bdev/blockdev.sh@612 -- # local bdev
00:27:24.287   17:11:16	-- bdev/blockdev.sh@614 -- # start_spdk_tgt
00:27:24.287   17:11:16	-- bdev/blockdev.sh@45 -- # spdk_tgt_pid=147731
00:27:24.287   17:11:16	-- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' ''
00:27:24.287   17:11:16	-- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT
00:27:24.287   17:11:16	-- bdev/blockdev.sh@47 -- # waitforlisten 147731
00:27:24.287   17:11:16	-- common/autotest_common.sh@829 -- # '[' -z 147731 ']'
00:27:24.287   17:11:16	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:27:24.287   17:11:16	-- common/autotest_common.sh@834 -- # local max_retries=100
00:27:24.287   17:11:16	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:27:24.287  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:27:24.287   17:11:16	-- common/autotest_common.sh@838 -- # xtrace_disable
00:27:24.287   17:11:16	-- common/autotest_common.sh@10 -- # set +x
00:27:24.287  [2024-11-19 17:11:16.980152] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:27:24.287  [2024-11-19 17:11:16.980869] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid147731 ]
00:27:24.287  [2024-11-19 17:11:17.132762] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:27:24.553  [2024-11-19 17:11:17.179730] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:27:24.553  [2024-11-19 17:11:17.180150] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:27:25.121   17:11:17	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:27:25.121   17:11:17	-- common/autotest_common.sh@862 -- # return 0
00:27:25.121   17:11:17	-- bdev/blockdev.sh@616 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
00:27:25.121   17:11:17	-- common/autotest_common.sh@561 -- # xtrace_disable
00:27:25.121   17:11:17	-- common/autotest_common.sh@10 -- # set +x
00:27:25.121  Some configs were skipped because the RPC state that can call them passed over.
00:27:25.121   17:11:17	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:25.121   17:11:17	-- bdev/blockdev.sh@617 -- # rpc_cmd bdev_wait_for_examine
00:27:25.379   17:11:17	-- common/autotest_common.sh@561 -- # xtrace_disable
00:27:25.379   17:11:17	-- common/autotest_common.sh@10 -- # set +x
00:27:25.379   17:11:17	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:25.379    17:11:17	-- bdev/blockdev.sh@619 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030
00:27:25.379    17:11:17	-- common/autotest_common.sh@561 -- # xtrace_disable
00:27:25.379    17:11:17	-- common/autotest_common.sh@10 -- # set +x
00:27:25.379    17:11:17	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:25.379   17:11:17	-- bdev/blockdev.sh@619 -- # bdev='[
00:27:25.379  {
00:27:25.379  "name": "Nvme0n1p1",
00:27:25.379  "aliases": [
00:27:25.379  "6f89f330-603b-4116-ac73-2ca8eae53030"
00:27:25.379  ],
00:27:25.379  "product_name": "GPT Disk",
00:27:25.379  "block_size": 4096,
00:27:25.379  "num_blocks": 655104,
00:27:25.379  "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",
00:27:25.379  "assigned_rate_limits": {
00:27:25.379  "rw_ios_per_sec": 0,
00:27:25.379  "rw_mbytes_per_sec": 0,
00:27:25.379  "r_mbytes_per_sec": 0,
00:27:25.379  "w_mbytes_per_sec": 0
00:27:25.379  },
00:27:25.379  "claimed": false,
00:27:25.379  "zoned": false,
00:27:25.379  "supported_io_types": {
00:27:25.379  "read": true,
00:27:25.379  "write": true,
00:27:25.379  "unmap": true,
00:27:25.379  "write_zeroes": true,
00:27:25.379  "flush": true,
00:27:25.379  "reset": true,
00:27:25.379  "compare": true,
00:27:25.379  "compare_and_write": false,
00:27:25.379  "abort": true,
00:27:25.379  "nvme_admin": false,
00:27:25.379  "nvme_io": false
00:27:25.379  },
00:27:25.379  "driver_specific": {
00:27:25.379  "gpt": {
00:27:25.379  "base_bdev": "Nvme0n1",
00:27:25.379  "offset_blocks": 256,
00:27:25.379  "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",
00:27:25.379  "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",
00:27:25.379  "partition_name": "SPDK_TEST_first"
00:27:25.379  }
00:27:25.379  }
00:27:25.379  }
00:27:25.379  ]'
00:27:25.379    17:11:17	-- bdev/blockdev.sh@620 -- # jq -r length
00:27:25.379   17:11:18	-- bdev/blockdev.sh@620 -- # [[ 1 == \1 ]]
00:27:25.379    17:11:18	-- bdev/blockdev.sh@621 -- # jq -r '.[0].aliases[0]'
00:27:25.379   17:11:18	-- bdev/blockdev.sh@621 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]]
00:27:25.379    17:11:18	-- bdev/blockdev.sh@622 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid'
00:27:25.379   17:11:18	-- bdev/blockdev.sh@622 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]]
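bdev_gpt_uuid fetches each partition bdev by its alias UUID and cross-checks two fields with jq: the alias itself and driver_specific.gpt.unique_partition_guid must both equal the requested UUID. (The backslash-escaped patterns in the trace are bash [[ ... ]] glob-escaping from xtrace, not corruption.) A condensed sketch of the first partition's check:

    # Fetch the GPT bdev by UUID and verify both identity fields match.
    uuid=6f89f330-603b-4116-ac73-2ca8eae53030
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    bdev=$($rpc bdev_get_bdevs -b $uuid)
    [[ $(jq -r '.[0].aliases[0]' <<<"$bdev") == "$uuid" ]]
    [[ $(jq -r '.[0].driver_specific.gpt.unique_partition_guid' <<<"$bdev") == "$uuid" ]]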
00:27:25.379    17:11:18	-- bdev/blockdev.sh@624 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df
00:27:25.379    17:11:18	-- common/autotest_common.sh@561 -- # xtrace_disable
00:27:25.379    17:11:18	-- common/autotest_common.sh@10 -- # set +x
00:27:25.379    17:11:18	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:25.379   17:11:18	-- bdev/blockdev.sh@624 -- # bdev='[
00:27:25.379  {
00:27:25.379  "name": "Nvme0n1p2",
00:27:25.379  "aliases": [
00:27:25.379  "abf1734f-66e5-4c0f-aa29-4021d4d307df"
00:27:25.379  ],
00:27:25.379  "product_name": "GPT Disk",
00:27:25.379  "block_size": 4096,
00:27:25.379  "num_blocks": 655103,
00:27:25.379  "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",
00:27:25.379  "assigned_rate_limits": {
00:27:25.379  "rw_ios_per_sec": 0,
00:27:25.379  "rw_mbytes_per_sec": 0,
00:27:25.379  "r_mbytes_per_sec": 0,
00:27:25.379  "w_mbytes_per_sec": 0
00:27:25.379  },
00:27:25.379  "claimed": false,
00:27:25.379  "zoned": false,
00:27:25.379  "supported_io_types": {
00:27:25.379  "read": true,
00:27:25.379  "write": true,
00:27:25.379  "unmap": true,
00:27:25.379  "write_zeroes": true,
00:27:25.379  "flush": true,
00:27:25.379  "reset": true,
00:27:25.379  "compare": true,
00:27:25.379  "compare_and_write": false,
00:27:25.379  "abort": true,
00:27:25.379  "nvme_admin": false,
00:27:25.379  "nvme_io": false
00:27:25.379  },
00:27:25.379  "driver_specific": {
00:27:25.379  "gpt": {
00:27:25.379  "base_bdev": "Nvme0n1",
00:27:25.379  "offset_blocks": 655360,
00:27:25.379  "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",
00:27:25.379  "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",
00:27:25.379  "partition_name": "SPDK_TEST_second"
00:27:25.379  }
00:27:25.379  }
00:27:25.379  }
00:27:25.379  ]'
00:27:25.379    17:11:18	-- bdev/blockdev.sh@625 -- # jq -r length
00:27:25.379   17:11:18	-- bdev/blockdev.sh@625 -- # [[ 1 == \1 ]]
00:27:25.379    17:11:18	-- bdev/blockdev.sh@626 -- # jq -r '.[0].aliases[0]'
00:27:25.379   17:11:18	-- bdev/blockdev.sh@626 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]]
00:27:25.379    17:11:18	-- bdev/blockdev.sh@627 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid'
00:27:25.638   17:11:18	-- bdev/blockdev.sh@627 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]]
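The two blocks above are the GPT UUID checks: fetch one bdev by its partition GUID and assert that both the alias and driver_specific.gpt.unique_partition_guid echo the same value back. A minimal standalone sketch of that pattern, assuming a running SPDK target and scripts/rpc.py (the script behind the rpc_cmd wrapper traced in this log):

    # Sketch: verify a GPT partition bdev by UUID (expected value taken from this run).
    expected=6f89f330-603b-4116-ac73-2ca8eae53030
    bdev=$(scripts/rpc.py bdev_get_bdevs -b "$expected")                    # JSON array, one element
    [[ $(jq -r length <<<"$bdev") == 1 ]] || exit 1                         # exactly one match
    [[ $(jq -r '.[0].aliases[0]' <<<"$bdev") == "$expected" ]] || exit 1    # alias is the GUID
    [[ $(jq -r '.[0].driver_specific.gpt.unique_partition_guid' <<<"$bdev") == "$expected" ]] || exit 1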
00:27:25.638   17:11:18	-- bdev/blockdev.sh@629 -- # killprocess 147731
00:27:25.638   17:11:18	-- common/autotest_common.sh@936 -- # '[' -z 147731 ']'
00:27:25.638   17:11:18	-- common/autotest_common.sh@940 -- # kill -0 147731
00:27:25.638    17:11:18	-- common/autotest_common.sh@941 -- # uname
00:27:25.638   17:11:18	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:27:25.638    17:11:18	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 147731
00:27:25.639   17:11:18	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:27:25.639   17:11:18	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:27:25.639   17:11:18	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 147731'
00:27:25.639  killing process with pid 147731
00:27:25.639   17:11:18	-- common/autotest_common.sh@955 -- # kill 147731
00:27:25.639   17:11:18	-- common/autotest_common.sh@960 -- # wait 147731
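killprocess above layers several guards before the actual kill: kill -0 to confirm the pid is alive, ps -o comm= to make sure it is not about to kill sudo, then kill followed by wait. The same guard sequence as a hedged sketch (hypothetical helper name, not the exact autotest_common.sh function):

    # Sketch of the guarded-kill pattern used above.
    killprocess_sketch() {
        local pid=$1
        [[ -n $pid ]] || return 1
        kill -0 "$pid" 2>/dev/null || return 0                         # already gone
        [[ $(ps --no-headers -o comm= "$pid") == sudo ]] && return 1   # never kill sudo itself
        echo "killing process with pid $pid"
        kill "$pid" && wait "$pid"                                     # wait only succeeds for our own children
    }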
00:27:25.897  
00:27:25.897  real	0m1.797s
00:27:25.897  user	0m1.934s
00:27:25.897  sys	0m0.464s
00:27:25.897   17:11:18	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:27:25.897   17:11:18	-- common/autotest_common.sh@10 -- # set +x
00:27:25.897  ************************************
00:27:25.897  END TEST bdev_gpt_uuid
00:27:25.897  ************************************
00:27:26.155   17:11:18	-- bdev/blockdev.sh@796 -- # [[ gpt == crypto_sw ]]
00:27:26.155   17:11:18	-- bdev/blockdev.sh@808 -- # trap - SIGINT SIGTERM EXIT
00:27:26.155   17:11:18	-- bdev/blockdev.sh@809 -- # cleanup
00:27:26.155   17:11:18	-- bdev/blockdev.sh@21 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile
00:27:26.155   17:11:18	-- bdev/blockdev.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
00:27:26.155   17:11:18	-- bdev/blockdev.sh@24 -- # [[ gpt == rbd ]]
00:27:26.155   17:11:18	-- bdev/blockdev.sh@28 -- # [[ gpt == daos ]]
00:27:26.155   17:11:18	-- bdev/blockdev.sh@32 -- # [[ gpt = \g\p\t ]]
00:27:26.155   17:11:18	-- bdev/blockdev.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:27:26.413  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev
00:27:26.413  Waiting for block devices as requested
00:27:26.413  0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme
00:27:26.672   17:11:19	-- bdev/blockdev.sh@34 -- # [[ -b /dev/nvme0n1 ]]
00:27:26.672   17:11:19	-- bdev/blockdev.sh@35 -- # wipefs --all /dev/nvme0n1
00:27:26.672  /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54
00:27:26.672  /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54
00:27:26.672  /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa
00:27:26.672  /dev/nvme0n1: calling ioctl to re-read partition table: Success
00:27:26.672   17:11:19	-- bdev/blockdev.sh@38 -- # [[ gpt == xnvme ]]
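Cleanup then hands the device back to the kernel and scrubs the test partitioning: wipefs erases the primary and backup GPT headers (the bytes 45 46 49 20 50 41 52 54 are the ASCII magic "EFI PART") plus the protective-MBR 55 aa signature, and the kernel re-reads the partition table. The destructive core of that step as a sketch, device path as in this run:

    # Sketch: scrub GPT/PMBR signatures from the test disk (destructive).
    dev=/dev/nvme0n1
    [[ -b $dev ]] || exit 0      # node may be gone after a failed rebind
    wipefs --all "$dev"          # clears both GPT headers and the PMBR signature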
00:27:26.672  
00:27:26.672  real	0m35.705s
00:27:26.672  user	0m54.426s
00:27:26.672  sys	0m6.204s
00:27:26.672  ************************************
00:27:26.672  END TEST blockdev_nvme_gpt
00:27:26.672  ************************************
00:27:26.672   17:11:19	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:27:26.672   17:11:19	-- common/autotest_common.sh@10 -- # set +x
00:27:26.672   17:11:19	-- spdk/autotest.sh@209 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh
00:27:26.672   17:11:19	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:27:26.672   17:11:19	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:27:26.672   17:11:19	-- common/autotest_common.sh@10 -- # set +x
00:27:26.672  ************************************
00:27:26.672  START TEST nvme
00:27:26.672  ************************************
00:27:26.672   17:11:19	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh
00:27:26.672  * Looking for test storage...
00:27:26.672  * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme
00:27:26.672    17:11:19	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:27:26.672     17:11:19	-- common/autotest_common.sh@1690 -- # lcov --version
00:27:26.672     17:11:19	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:27:26.932    17:11:19	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:27:26.932    17:11:19	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:27:26.932    17:11:19	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:27:26.932    17:11:19	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:27:26.932    17:11:19	-- scripts/common.sh@335 -- # IFS=.-:
00:27:26.932    17:11:19	-- scripts/common.sh@335 -- # read -ra ver1
00:27:26.932    17:11:19	-- scripts/common.sh@336 -- # IFS=.-:
00:27:26.932    17:11:19	-- scripts/common.sh@336 -- # read -ra ver2
00:27:26.932    17:11:19	-- scripts/common.sh@337 -- # local 'op=<'
00:27:26.932    17:11:19	-- scripts/common.sh@339 -- # ver1_l=2
00:27:26.932    17:11:19	-- scripts/common.sh@340 -- # ver2_l=1
00:27:26.932    17:11:19	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:27:26.932    17:11:19	-- scripts/common.sh@343 -- # case "$op" in
00:27:26.932    17:11:19	-- scripts/common.sh@344 -- # : 1
00:27:26.932    17:11:19	-- scripts/common.sh@363 -- # (( v = 0 ))
00:27:26.932    17:11:19	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:27:26.932     17:11:19	-- scripts/common.sh@364 -- # decimal 1
00:27:26.932     17:11:19	-- scripts/common.sh@352 -- # local d=1
00:27:26.932     17:11:19	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:27:26.932     17:11:19	-- scripts/common.sh@354 -- # echo 1
00:27:26.932    17:11:19	-- scripts/common.sh@364 -- # ver1[v]=1
00:27:26.932     17:11:19	-- scripts/common.sh@365 -- # decimal 2
00:27:26.932     17:11:19	-- scripts/common.sh@352 -- # local d=2
00:27:26.932     17:11:19	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:27:26.932     17:11:19	-- scripts/common.sh@354 -- # echo 2
00:27:26.932    17:11:19	-- scripts/common.sh@365 -- # ver2[v]=2
00:27:26.932    17:11:19	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:27:26.932    17:11:19	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:27:26.932    17:11:19	-- scripts/common.sh@367 -- # return 0
00:27:26.932    17:11:19	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:27:26.932    17:11:19	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:27:26.932  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:27:26.932  		--rc genhtml_branch_coverage=1
00:27:26.932  		--rc genhtml_function_coverage=1
00:27:26.932  		--rc genhtml_legend=1
00:27:26.932  		--rc geninfo_all_blocks=1
00:27:26.932  		--rc geninfo_unexecuted_blocks=1
00:27:26.932  		
00:27:26.932  		'
00:27:26.932    17:11:19	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:27:26.932  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:27:26.932  		--rc genhtml_branch_coverage=1
00:27:26.932  		--rc genhtml_function_coverage=1
00:27:26.932  		--rc genhtml_legend=1
00:27:26.932  		--rc geninfo_all_blocks=1
00:27:26.932  		--rc geninfo_unexecuted_blocks=1
00:27:26.932  		
00:27:26.932  		'
00:27:26.932    17:11:19	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:27:26.932  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:27:26.932  		--rc genhtml_branch_coverage=1
00:27:26.932  		--rc genhtml_function_coverage=1
00:27:26.932  		--rc genhtml_legend=1
00:27:26.932  		--rc geninfo_all_blocks=1
00:27:26.932  		--rc geninfo_unexecuted_blocks=1
00:27:26.932  		
00:27:26.932  		'
00:27:26.932    17:11:19	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:27:26.932  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:27:26.932  		--rc genhtml_branch_coverage=1
00:27:26.932  		--rc genhtml_function_coverage=1
00:27:26.932  		--rc genhtml_legend=1
00:27:26.932  		--rc geninfo_all_blocks=1
00:27:26.932  		--rc geninfo_unexecuted_blocks=1
00:27:26.932  		
00:27:26.932  		'
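The scripts/common.sh trace above is a numeric version comparison: each version is split on '.', '-' and ':' into an array, and the fields are compared left to right (here lt 1.15 2 succeeds on the first field, so branch-coverage lcov options are enabled). A compact sketch of the same idea, simplified (missing fields treated as 0) rather than the repo's exact cmp_versions:

    # Sketch: "is version $1 older than $2", splitting on . - : as common.sh does.
    version_lt() {
        local -a a b
        IFS=.-: read -ra a <<<"$1"
        IFS=.-: read -ra b <<<"$2"
        local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < n; i++ )); do
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
        done
        return 1    # equal versions are not less-than
    }
    version_lt 1.15 2 && echo older    # matches the 'lt 1.15 2' result traced above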
00:27:26.932   17:11:19	-- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:27:27.191  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev
00:27:27.450  0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic
00:27:28.387    17:11:21	-- nvme/nvme.sh@79 -- # uname
00:27:28.387   17:11:21	-- nvme/nvme.sh@79 -- # '[' Linux = Linux ']'
00:27:28.387   17:11:21	-- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT
00:27:28.387   17:11:21	-- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE'
00:27:28.387   17:11:21	-- common/autotest_common.sh@1068 -- # _start_stub '-s 4096 -i 0 -m 0xE'
00:27:28.387   17:11:21	-- common/autotest_common.sh@1054 -- # _randomize_va_space=2
00:27:28.387   17:11:21	-- common/autotest_common.sh@1055 -- # echo 0
00:27:28.387   17:11:21	-- common/autotest_common.sh@1056 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE
00:27:28.387   17:11:21	-- common/autotest_common.sh@1057 -- # stubpid=148136
00:27:28.387  Waiting for stub to be ready for secondary processes...
00:27:28.387   17:11:21	-- common/autotest_common.sh@1058 -- # echo Waiting for stub to be ready for secondary processes...
00:27:28.387   17:11:21	-- common/autotest_common.sh@1059 -- # '[' -e /var/run/spdk_stub0 ']'
00:27:28.387   17:11:21	-- common/autotest_common.sh@1061 -- # [[ -e /proc/148136 ]]
00:27:28.387   17:11:21	-- common/autotest_common.sh@1062 -- # sleep 1s
00:27:28.387  [2024-11-19 17:11:21.144077] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:27:28.387  [2024-11-19 17:11:21.144335] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:27:29.323   17:11:22	-- common/autotest_common.sh@1059 -- # '[' -e /var/run/spdk_stub0 ']'
00:27:29.323   17:11:22	-- common/autotest_common.sh@1061 -- # [[ -e /proc/148136 ]]
00:27:29.323   17:11:22	-- common/autotest_common.sh@1062 -- # sleep 1s
00:27:29.581  [2024-11-19 17:11:22.200809] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3
00:27:29.581  [2024-11-19 17:11:22.234312] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:27:29.582  [2024-11-19 17:11:22.234353] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:27:29.582  [2024-11-19 17:11:22.234363] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:27:29.582  [2024-11-19 17:11:22.244779] nvme_cuse.c:1142:nvme_cuse_start: *NOTICE*: Creating cuse device for controller
00:27:29.582  [2024-11-19 17:11:22.256427] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created
00:27:29.582  [2024-11-19 17:11:22.257153] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created
00:27:30.515   17:11:23	-- common/autotest_common.sh@1059 -- # '[' -e /var/run/spdk_stub0 ']'
00:27:30.515  done.
00:27:30.515   17:11:23	-- common/autotest_common.sh@1064 -- # echo done.
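Stub startup is a poll loop: as long as /var/run/spdk_stub0 has not appeared and /proc/<stubpid> still exists, sleep one second and retry; the "done." above means the readiness file showed up. The loop as a standalone sketch (paths as in the log, stub assumed backgrounded just before):

    # Sketch: wait for the stub's readiness file while the stub is still alive.
    stubpid=$!                                  # pid of the backgrounded stub app
    while [[ ! -e /var/run/spdk_stub0 ]]; do
        [[ -e /proc/$stubpid ]] || { echo 'stub died before becoming ready'; exit 1; }
        sleep 1s
    done
    echo done.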
00:27:30.515   17:11:23	-- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5
00:27:30.515   17:11:23	-- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']'
00:27:30.515   17:11:23	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:27:30.515   17:11:23	-- common/autotest_common.sh@10 -- # set +x
00:27:30.515  ************************************
00:27:30.515  START TEST nvme_reset
00:27:30.515  ************************************
00:27:30.515   17:11:23	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5
00:27:30.772  Initializing NVMe Controllers
00:27:30.772  Skipping QEMU NVMe SSD at 0000:00:06.0
00:27:30.772  No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting
00:27:30.772  
00:27:30.772  real	0m0.307s
00:27:30.772  user	0m0.081s
00:27:30.772  sys	0m0.162s
00:27:30.772   17:11:23	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:27:30.772   17:11:23	-- common/autotest_common.sh@10 -- # set +x
00:27:30.772  ************************************
00:27:30.772  END TEST nvme_reset
00:27:30.772  ************************************
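Every suite in this log is wrapped the same way: a START banner, the command itself under time (producing the real/user/sys lines), then an END banner. A hedged sketch of such a wrapper; the name is hypothetical, not autotest_common.sh's actual run_test:

    # Sketch: banner-and-time a test the way the START/END blocks here are produced.
    run_test_sketch() {
        local name=$1; shift
        echo '************************************'
        echo "START TEST $name"
        echo '************************************'
        time "$@"
        local rc=$?                 # time (keyword) preserves the command's status
        echo '************************************'
        echo "END TEST $name"
        echo '************************************'
        return $rc
    }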
00:27:30.772   17:11:23	-- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify
00:27:30.772   17:11:23	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:27:30.772   17:11:23	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:27:30.772   17:11:23	-- common/autotest_common.sh@10 -- # set +x
00:27:30.772  ************************************
00:27:30.772  START TEST nvme_identify
00:27:30.772  ************************************
00:27:30.772   17:11:23	-- common/autotest_common.sh@1114 -- # nvme_identify
00:27:30.772   17:11:23	-- nvme/nvme.sh@12 -- # bdfs=()
00:27:30.772   17:11:23	-- nvme/nvme.sh@12 -- # local bdfs bdf
00:27:30.772   17:11:23	-- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs))
00:27:30.772    17:11:23	-- nvme/nvme.sh@13 -- # get_nvme_bdfs
00:27:30.772    17:11:23	-- common/autotest_common.sh@1508 -- # bdfs=()
00:27:30.772    17:11:23	-- common/autotest_common.sh@1508 -- # local bdfs
00:27:30.772    17:11:23	-- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:27:30.772     17:11:23	-- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
00:27:30.772     17:11:23	-- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr'
00:27:30.772    17:11:23	-- common/autotest_common.sh@1510 -- # (( 1 == 0 ))
00:27:30.772    17:11:23	-- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0
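get_nvme_bdfs above does not scan lspci; it renders the bdev config with gen_nvme.sh and pulls every traddr out of the JSON. The same enumeration as a one-off sketch (script path and jq filter exactly as traced above):

    # Sketch: list NVMe PCI addresses (BDFs) from the generated bdev config.
    mapfile -t bdfs < <(/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh | jq -r '.config[].params.traddr')
    (( ${#bdfs[@]} > 0 )) || { echo 'no NVMe BDFs found'; exit 1; }
    printf '%s\n' "${bdfs[@]}"      # 0000:00:06.0 in this run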
00:27:30.772   17:11:23	-- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0
00:27:31.064  [2024-11-19 17:11:23.786353] nvme_ctrlr.c:3472:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:06.0] process 148177 terminated unexpected
00:27:31.064  =====================================================
00:27:31.064  NVMe Controller at 0000:00:06.0 [1b36:0010]
00:27:31.064  =====================================================
00:27:31.064  Controller Capabilities/Features
00:27:31.064  ================================
00:27:31.064  Vendor ID:                             1b36
00:27:31.064  Subsystem Vendor ID:                   1af4
00:27:31.064  Serial Number:                         12340
00:27:31.064  Model Number:                          QEMU NVMe Ctrl
00:27:31.064  Firmware Version:                      8.0.0
00:27:31.064  Recommended Arb Burst:                 6
00:27:31.064  IEEE OUI Identifier:                   00 54 52
00:27:31.064  Multi-path I/O
00:27:31.064    May have multiple subsystem ports:   No
00:27:31.064    May have multiple controllers:       No
00:27:31.064    Associated with SR-IOV VF:           No
00:27:31.064  Max Data Transfer Size:                524288
00:27:31.065  Max Number of Namespaces:              256
00:27:31.065  Max Number of I/O Queues:              64
00:27:31.065  NVMe Specification Version (VS):       1.4
00:27:31.065  NVMe Specification Version (Identify): 1.4
00:27:31.065  Maximum Queue Entries:                 2048
00:27:31.065  Contiguous Queues Required:            Yes
00:27:31.065  Arbitration Mechanisms Supported
00:27:31.065    Weighted Round Robin:                Not Supported
00:27:31.065    Vendor Specific:                     Not Supported
00:27:31.065  Reset Timeout:                         7500 ms
00:27:31.065  Doorbell Stride:                       4 bytes
00:27:31.065  NVM Subsystem Reset:                   Not Supported
00:27:31.065  Command Sets Supported
00:27:31.065    NVM Command Set:                     Supported
00:27:31.065  Boot Partition:                        Not Supported
00:27:31.065  Memory Page Size Minimum:              4096 bytes
00:27:31.065  Memory Page Size Maximum:              65536 bytes
00:27:31.065  Persistent Memory Region:              Not Supported
00:27:31.065  Optional Asynchronous Events Supported
00:27:31.065    Namespace Attribute Notices:         Supported
00:27:31.065    Firmware Activation Notices:         Not Supported
00:27:31.065    ANA Change Notices:                  Not Supported
00:27:31.065    PLE Aggregate Log Change Notices:    Not Supported
00:27:31.065    LBA Status Info Alert Notices:       Not Supported
00:27:31.065    EGE Aggregate Log Change Notices:    Not Supported
00:27:31.065    Normal NVM Subsystem Shutdown event: Not Supported
00:27:31.065    Zone Descriptor Change Notices:      Not Supported
00:27:31.065    Discovery Log Change Notices:        Not Supported
00:27:31.065  Controller Attributes
00:27:31.065    128-bit Host Identifier:             Not Supported
00:27:31.065    Non-Operational Permissive Mode:     Not Supported
00:27:31.065    NVM Sets:                            Not Supported
00:27:31.065    Read Recovery Levels:                Not Supported
00:27:31.065    Endurance Groups:                    Not Supported
00:27:31.065    Predictable Latency Mode:            Not Supported
00:27:31.065    Traffic Based Keep Alive:            Not Supported
00:27:31.065    Namespace Granularity:               Not Supported
00:27:31.065    SQ Associations:                     Not Supported
00:27:31.065    UUID List:                           Not Supported
00:27:31.065    Multi-Domain Subsystem:              Not Supported
00:27:31.065    Fixed Capacity Management:           Not Supported
00:27:31.065    Variable Capacity Management:        Not Supported
00:27:31.065    Delete Endurance Group:              Not Supported
00:27:31.065    Delete NVM Set:                      Not Supported
00:27:31.065    Extended LBA Formats Supported:      Supported
00:27:31.065    Flexible Data Placement Supported:   Not Supported
00:27:31.065  
00:27:31.065  Controller Memory Buffer Support
00:27:31.065  ================================
00:27:31.065  Supported:                             No
00:27:31.065  
00:27:31.065  Persistent Memory Region Support
00:27:31.065  ================================
00:27:31.065  Supported:                             No
00:27:31.065  
00:27:31.065  Admin Command Set Attributes
00:27:31.065  ============================
00:27:31.065  Security Send/Receive:                 Not Supported
00:27:31.065  Format NVM:                            Supported
00:27:31.065  Firmware Activate/Download:            Not Supported
00:27:31.065  Namespace Management:                  Supported
00:27:31.065  Device Self-Test:                      Not Supported
00:27:31.065  Directives:                            Supported
00:27:31.065  NVMe-MI:                               Not Supported
00:27:31.065  Virtualization Management:             Not Supported
00:27:31.065  Doorbell Buffer Config:                Supported
00:27:31.065  Get LBA Status Capability:             Not Supported
00:27:31.065  Command & Feature Lockdown Capability: Not Supported
00:27:31.065  Abort Command Limit:                   4
00:27:31.065  Async Event Request Limit:             4
00:27:31.065  Number of Firmware Slots:              N/A
00:27:31.065  Firmware Slot 1 Read-Only:             N/A
00:27:31.065  Firmware Activation Without Reset:     N/A
00:27:31.065  Multiple Update Detection Support:     N/A
00:27:31.065  Firmware Update Granularity:           No Information Provided
00:27:31.065  Per-Namespace SMART Log:               Yes
00:27:31.065  Asymmetric Namespace Access Log Page:  Not Supported
00:27:31.065  Subsystem NQN:                         nqn.2019-08.org.qemu:12340
00:27:31.065  Command Effects Log Page:              Supported
00:27:31.065  Get Log Page Extended Data:            Supported
00:27:31.065  Telemetry Log Pages:                   Not Supported
00:27:31.065  Persistent Event Log Pages:            Not Supported
00:27:31.065  Supported Log Pages Log Page:          May Support
00:27:31.065  Commands Supported & Effects Log Page: Not Supported
00:27:31.065  Feature Identifiers & Effects Log Page: May Support
00:27:31.065  NVMe-MI Commands & Effects Log Page:   May Support
00:27:31.065  Data Area 4 for Telemetry Log:         Not Supported
00:27:31.065  Error Log Page Entries Supported:      1
00:27:31.065  Keep Alive:                            Not Supported
00:27:31.065  
00:27:31.065  NVM Command Set Attributes
00:27:31.065  ==========================
00:27:31.065  Submission Queue Entry Size
00:27:31.065    Max:                       64
00:27:31.065    Min:                       64
00:27:31.065  Completion Queue Entry Size
00:27:31.065    Max:                       16
00:27:31.065    Min:                       16
00:27:31.065  Number of Namespaces:        256
00:27:31.065  Compare Command:             Supported
00:27:31.065  Write Uncorrectable Command: Not Supported
00:27:31.065  Dataset Management Command:  Supported
00:27:31.065  Write Zeroes Command:        Supported
00:27:31.065  Set Features Save Field:     Supported
00:27:31.065  Reservations:                Not Supported
00:27:31.065  Timestamp:                   Supported
00:27:31.065  Copy:                        Supported
00:27:31.065  Volatile Write Cache:        Present
00:27:31.065  Atomic Write Unit (Normal):  1
00:27:31.065  Atomic Write Unit (PFail):   1
00:27:31.065  Atomic Compare & Write Unit: 1
00:27:31.065  Fused Compare & Write:       Not Supported
00:27:31.065  Scatter-Gather List
00:27:31.065    SGL Command Set:           Supported
00:27:31.065    SGL Keyed:                 Not Supported
00:27:31.065    SGL Bit Bucket Descriptor: Not Supported
00:27:31.065    SGL Metadata Pointer:      Not Supported
00:27:31.065    Oversized SGL:             Not Supported
00:27:31.065    SGL Metadata Address:      Not Supported
00:27:31.065    SGL Offset:                Not Supported
00:27:31.065    Transport SGL Data Block:  Not Supported
00:27:31.065  Replay Protected Memory Block:  Not Supported
00:27:31.065  
00:27:31.065  Firmware Slot Information
00:27:31.065  =========================
00:27:31.065  Active slot:                 1
00:27:31.065  Slot 1 Firmware Revision:    1.0
00:27:31.065  
00:27:31.065  
00:27:31.065  Commands Supported and Effects
00:27:31.065  ==============================
00:27:31.065  Admin Commands
00:27:31.065  --------------
00:27:31.065     Delete I/O Submission Queue (00h): Supported 
00:27:31.065     Create I/O Submission Queue (01h): Supported 
00:27:31.065                    Get Log Page (02h): Supported 
00:27:31.065     Delete I/O Completion Queue (04h): Supported 
00:27:31.065     Create I/O Completion Queue (05h): Supported 
00:27:31.065                        Identify (06h): Supported 
00:27:31.065                           Abort (08h): Supported 
00:27:31.065                    Set Features (09h): Supported 
00:27:31.065                    Get Features (0Ah): Supported 
00:27:31.065      Asynchronous Event Request (0Ch): Supported 
00:27:31.065            Namespace Attachment (15h): Supported NS-Inventory-Change 
00:27:31.065                  Directive Send (19h): Supported 
00:27:31.065               Directive Receive (1Ah): Supported 
00:27:31.065       Virtualization Management (1Ch): Supported 
00:27:31.065          Doorbell Buffer Config (7Ch): Supported 
00:27:31.065                      Format NVM (80h): Supported LBA-Change 
00:27:31.065  I/O Commands
00:27:31.065  ------------
00:27:31.065                           Flush (00h): Supported LBA-Change 
00:27:31.065                           Write (01h): Supported LBA-Change 
00:27:31.065                            Read (02h): Supported 
00:27:31.065                         Compare (05h): Supported 
00:27:31.065                    Write Zeroes (08h): Supported LBA-Change 
00:27:31.065              Dataset Management (09h): Supported LBA-Change 
00:27:31.065                         Unknown (0Ch): Supported 
00:27:31.065                         Unknown (12h): Supported 
00:27:31.065                            Copy (19h): Supported LBA-Change 
00:27:31.065                         Unknown (1Dh): Supported LBA-Change 
00:27:31.065  
00:27:31.065  Error Log
00:27:31.065  =========
00:27:31.065  
00:27:31.065  Arbitration
00:27:31.065  ===========
00:27:31.065  Arbitration Burst:           no limit
00:27:31.065  
00:27:31.065  Power Management
00:27:31.065  ================
00:27:31.065  Number of Power States:          1
00:27:31.065  Current Power State:             Power State #0
00:27:31.065  Power State #0:
00:27:31.065    Max Power:                     25.00 W
00:27:31.065    Non-Operational State:         Operational
00:27:31.065    Entry Latency:                 16 microseconds
00:27:31.065    Exit Latency:                  4 microseconds
00:27:31.065    Relative Read Throughput:      0
00:27:31.065    Relative Read Latency:         0
00:27:31.065    Relative Write Throughput:     0
00:27:31.065    Relative Write Latency:        0
00:27:31.065    Idle Power:                     Not Reported
00:27:31.065    Active Power:                   Not Reported
00:27:31.065  Non-Operational Permissive Mode: Not Supported
00:27:31.065  
00:27:31.065  Health Information
00:27:31.065  ==================
00:27:31.065  Critical Warnings:
00:27:31.065    Available Spare Space:     OK
00:27:31.065    Temperature:               OK
00:27:31.065    Device Reliability:        OK
00:27:31.065    Read Only:                 No
00:27:31.065    Volatile Memory Backup:    OK
00:27:31.065  Current Temperature:         323 Kelvin (50 Celsius)
00:27:31.065  Temperature Threshold:       343 Kelvin (70 Celsius)
00:27:31.065  Available Spare:             0%
00:27:31.066  Available Spare Threshold:   0%
00:27:31.066  Life Percentage Used:        0%
00:27:31.066  Data Units Read:             8250
00:27:31.066  Data Units Written:          4028
00:27:31.066  Host Read Commands:          376410
00:27:31.066  Host Write Commands:         203375
00:27:31.066  Controller Busy Time:        0 minutes
00:27:31.066  Power Cycles:                0
00:27:31.066  Power On Hours:              0 hours
00:27:31.066  Unsafe Shutdowns:            0
00:27:31.066  Unrecoverable Media Errors:  0
00:27:31.066  Lifetime Error Log Entries:  0
00:27:31.066  Warning Temperature Time:    0 minutes
00:27:31.066  Critical Temperature Time:   0 minutes
00:27:31.066  
00:27:31.066  Number of Queues
00:27:31.066  ================
00:27:31.066  Number of I/O Submission Queues:      64
00:27:31.066  Number of I/O Completion Queues:      64
00:27:31.066  
00:27:31.066  ZNS Specific Controller Data
00:27:31.066  ============================
00:27:31.066  Zone Append Size Limit:      0
00:27:31.066  
00:27:31.066  
00:27:31.066  Active Namespaces
00:27:31.066  =================
00:27:31.066  Namespace ID:1
00:27:31.066  Error Recovery Timeout:                Unlimited
00:27:31.066  Command Set Identifier:                NVM (00h)
00:27:31.066  Deallocate:                            Supported
00:27:31.066  Deallocated/Unwritten Error:           Supported
00:27:31.066  Deallocated Read Value:                All 0x00
00:27:31.066  Deallocate in Write Zeroes:            Not Supported
00:27:31.066  Deallocated Guard Field:               0xFFFF
00:27:31.066  Flush:                                 Supported
00:27:31.066  Reservation:                           Not Supported
00:27:31.066  Namespace Sharing Capabilities:        Private
00:27:31.066  Size (in LBAs):                        1310720 (5GiB)
00:27:31.066  Capacity (in LBAs):                    1310720 (5GiB)
00:27:31.066  Utilization (in LBAs):                 1310720 (5GiB)
00:27:31.066  Thin Provisioning:                     Not Supported
00:27:31.066  Per-NS Atomic Units:                   No
00:27:31.066  Maximum Single Source Range Length:    128
00:27:31.066  Maximum Copy Length:                   128
00:27:31.066  Maximum Source Range Count:            128
00:27:31.066  NGUID/EUI64 Never Reused:              No
00:27:31.066  Namespace Write Protected:             No
00:27:31.066  Number of LBA Formats:                 8
00:27:31.066  Current LBA Format:                    LBA Format #04
00:27:31.066  LBA Format #00: Data Size:   512  Metadata Size:     0
00:27:31.066  LBA Format #01: Data Size:   512  Metadata Size:     8
00:27:31.066  LBA Format #02: Data Size:   512  Metadata Size:    16
00:27:31.066  LBA Format #03: Data Size:   512  Metadata Size:    64
00:27:31.066  LBA Format #04: Data Size:  4096  Metadata Size:     0
00:27:31.066  LBA Format #05: Data Size:  4096  Metadata Size:     8
00:27:31.066  LBA Format #06: Data Size:  4096  Metadata Size:    16
00:27:31.066  LBA Format #07: Data Size:  4096  Metadata Size:    64
00:27:31.066  
00:27:31.066   17:11:23	-- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}"
00:27:31.066   17:11:23	-- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' -i 0
00:27:31.330  =====================================================
00:27:31.330  NVMe Controller at 0000:00:06.0 [1b36:0010]
00:27:31.330  =====================================================
00:27:31.330  Controller Capabilities/Features
00:27:31.330  ================================
00:27:31.330  Vendor ID:                             1b36
00:27:31.330  Subsystem Vendor ID:                   1af4
00:27:31.330  Serial Number:                         12340
00:27:31.330  Model Number:                          QEMU NVMe Ctrl
00:27:31.330  Firmware Version:                      8.0.0
00:27:31.330  Recommended Arb Burst:                 6
00:27:31.330  IEEE OUI Identifier:                   00 54 52
00:27:31.330  Multi-path I/O
00:27:31.330    May have multiple subsystem ports:   No
00:27:31.330    May have multiple controllers:       No
00:27:31.330    Associated with SR-IOV VF:           No
00:27:31.330  Max Data Transfer Size:                524288
00:27:31.330  Max Number of Namespaces:              256
00:27:31.330  Max Number of I/O Queues:              64
00:27:31.330  NVMe Specification Version (VS):       1.4
00:27:31.330  NVMe Specification Version (Identify): 1.4
00:27:31.330  Maximum Queue Entries:                 2048
00:27:31.330  Contiguous Queues Required:            Yes
00:27:31.330  Arbitration Mechanisms Supported
00:27:31.330    Weighted Round Robin:                Not Supported
00:27:31.330    Vendor Specific:                     Not Supported
00:27:31.330  Reset Timeout:                         7500 ms
00:27:31.330  Doorbell Stride:                       4 bytes
00:27:31.330  NVM Subsystem Reset:                   Not Supported
00:27:31.330  Command Sets Supported
00:27:31.330    NVM Command Set:                     Supported
00:27:31.330  Boot Partition:                        Not Supported
00:27:31.330  Memory Page Size Minimum:              4096 bytes
00:27:31.330  Memory Page Size Maximum:              65536 bytes
00:27:31.330  Persistent Memory Region:              Not Supported
00:27:31.330  Optional Asynchronous Events Supported
00:27:31.330    Namespace Attribute Notices:         Supported
00:27:31.330    Firmware Activation Notices:         Not Supported
00:27:31.330    ANA Change Notices:                  Not Supported
00:27:31.330    PLE Aggregate Log Change Notices:    Not Supported
00:27:31.330    LBA Status Info Alert Notices:       Not Supported
00:27:31.330    EGE Aggregate Log Change Notices:    Not Supported
00:27:31.330    Normal NVM Subsystem Shutdown event: Not Supported
00:27:31.330    Zone Descriptor Change Notices:      Not Supported
00:27:31.330    Discovery Log Change Notices:        Not Supported
00:27:31.330  Controller Attributes
00:27:31.330    128-bit Host Identifier:             Not Supported
00:27:31.330    Non-Operational Permissive Mode:     Not Supported
00:27:31.330    NVM Sets:                            Not Supported
00:27:31.330    Read Recovery Levels:                Not Supported
00:27:31.330    Endurance Groups:                    Not Supported
00:27:31.330    Predictable Latency Mode:            Not Supported
00:27:31.330    Traffic Based Keep Alive:            Not Supported
00:27:31.330    Namespace Granularity:               Not Supported
00:27:31.330    SQ Associations:                     Not Supported
00:27:31.330    UUID List:                           Not Supported
00:27:31.330    Multi-Domain Subsystem:              Not Supported
00:27:31.330    Fixed Capacity Management:           Not Supported
00:27:31.330    Variable Capacity Management:        Not Supported
00:27:31.330    Delete Endurance Group:              Not Supported
00:27:31.330    Delete NVM Set:                      Not Supported
00:27:31.330    Extended LBA Formats Supported:      Supported
00:27:31.330    Flexible Data Placement Supported:   Not Supported
00:27:31.330  
00:27:31.330  Controller Memory Buffer Support
00:27:31.330  ================================
00:27:31.330  Supported:                             No
00:27:31.330  
00:27:31.330  Persistent Memory Region Support
00:27:31.330  ================================
00:27:31.330  Supported:                             No
00:27:31.330  
00:27:31.330  Admin Command Set Attributes
00:27:31.330  ============================
00:27:31.330  Security Send/Receive:                 Not Supported
00:27:31.330  Format NVM:                            Supported
00:27:31.330  Firmware Activate/Download:            Not Supported
00:27:31.330  Namespace Management:                  Supported
00:27:31.330  Device Self-Test:                      Not Supported
00:27:31.330  Directives:                            Supported
00:27:31.330  NVMe-MI:                               Not Supported
00:27:31.330  Virtualization Management:             Not Supported
00:27:31.330  Doorbell Buffer Config:                Supported
00:27:31.330  Get LBA Status Capability:             Not Supported
00:27:31.330  Command & Feature Lockdown Capability: Not Supported
00:27:31.330  Abort Command Limit:                   4
00:27:31.330  Async Event Request Limit:             4
00:27:31.330  Number of Firmware Slots:              N/A
00:27:31.330  Firmware Slot 1 Read-Only:             N/A
00:27:31.330  Firmware Activation Without Reset:     N/A
00:27:31.330  Multiple Update Detection Support:     N/A
00:27:31.330  Firmware Update Granularity:           No Information Provided
00:27:31.330  Per-Namespace SMART Log:               Yes
00:27:31.330  Asymmetric Namespace Access Log Page:  Not Supported
00:27:31.330  Subsystem NQN:                         nqn.2019-08.org.qemu:12340
00:27:31.330  Command Effects Log Page:              Supported
00:27:31.330  Get Log Page Extended Data:            Supported
00:27:31.330  Telemetry Log Pages:                   Not Supported
00:27:31.330  Persistent Event Log Pages:            Not Supported
00:27:31.330  Supported Log Pages Log Page:          May Support
00:27:31.330  Commands Supported & Effects Log Page: Not Supported
00:27:31.330  Feature Identifiers & Effects Log Page: May Support
00:27:31.330  NVMe-MI Commands & Effects Log Page:   May Support
00:27:31.330  Data Area 4 for Telemetry Log:         Not Supported
00:27:31.330  Error Log Page Entries Supported:      1
00:27:31.330  Keep Alive:                            Not Supported
00:27:31.330  
00:27:31.330  NVM Command Set Attributes
00:27:31.330  ==========================
00:27:31.330  Submission Queue Entry Size
00:27:31.330    Max:                       64
00:27:31.330    Min:                       64
00:27:31.330  Completion Queue Entry Size
00:27:31.330    Max:                       16
00:27:31.330    Min:                       16
00:27:31.330  Number of Namespaces:        256
00:27:31.330  Compare Command:             Supported
00:27:31.330  Write Uncorrectable Command: Not Supported
00:27:31.330  Dataset Management Command:  Supported
00:27:31.330  Write Zeroes Command:        Supported
00:27:31.330  Set Features Save Field:     Supported
00:27:31.330  Reservations:                Not Supported
00:27:31.330  Timestamp:                   Supported
00:27:31.330  Copy:                        Supported
00:27:31.330  Volatile Write Cache:        Present
00:27:31.330  Atomic Write Unit (Normal):  1
00:27:31.330  Atomic Write Unit (PFail):   1
00:27:31.330  Atomic Compare & Write Unit: 1
00:27:31.330  Fused Compare & Write:       Not Supported
00:27:31.330  Scatter-Gather List
00:27:31.330    SGL Command Set:           Supported
00:27:31.330    SGL Keyed:                 Not Supported
00:27:31.330    SGL Bit Bucket Descriptor: Not Supported
00:27:31.330    SGL Metadata Pointer:      Not Supported
00:27:31.330    Oversized SGL:             Not Supported
00:27:31.330    SGL Metadata Address:      Not Supported
00:27:31.330    SGL Offset:                Not Supported
00:27:31.330    Transport SGL Data Block:  Not Supported
00:27:31.331  Replay Protected Memory Block:  Not Supported
00:27:31.331  
00:27:31.331  Firmware Slot Information
00:27:31.331  =========================
00:27:31.331  Active slot:                 1
00:27:31.331  Slot 1 Firmware Revision:    1.0
00:27:31.331  
00:27:31.331  
00:27:31.331  Commands Supported and Effects
00:27:31.331  ==============================
00:27:31.331  Admin Commands
00:27:31.331  --------------
00:27:31.331     Delete I/O Submission Queue (00h): Supported 
00:27:31.331     Create I/O Submission Queue (01h): Supported 
00:27:31.331                    Get Log Page (02h): Supported 
00:27:31.331     Delete I/O Completion Queue (04h): Supported 
00:27:31.331     Create I/O Completion Queue (05h): Supported 
00:27:31.331                        Identify (06h): Supported 
00:27:31.331                           Abort (08h): Supported 
00:27:31.331                    Set Features (09h): Supported 
00:27:31.331                    Get Features (0Ah): Supported 
00:27:31.331      Asynchronous Event Request (0Ch): Supported 
00:27:31.331            Namespace Attachment (15h): Supported NS-Inventory-Change 
00:27:31.331                  Directive Send (19h): Supported 
00:27:31.331               Directive Receive (1Ah): Supported 
00:27:31.331       Virtualization Management (1Ch): Supported 
00:27:31.331          Doorbell Buffer Config (7Ch): Supported 
00:27:31.331                      Format NVM (80h): Supported LBA-Change 
00:27:31.331  I/O Commands
00:27:31.331  ------------
00:27:31.331                           Flush (00h): Supported LBA-Change 
00:27:31.331                           Write (01h): Supported LBA-Change 
00:27:31.331                            Read (02h): Supported 
00:27:31.331                         Compare (05h): Supported 
00:27:31.331                    Write Zeroes (08h): Supported LBA-Change 
00:27:31.331              Dataset Management (09h): Supported LBA-Change 
00:27:31.331                         Unknown (0Ch): Supported 
00:27:31.331                         Unknown (12h): Supported 
00:27:31.331                            Copy (19h): Supported LBA-Change 
00:27:31.331                         Unknown (1Dh): Supported LBA-Change 
00:27:31.331  
00:27:31.331  Error Log
00:27:31.331  =========
00:27:31.331  
00:27:31.331  Arbitration
00:27:31.331  ===========
00:27:31.331  Arbitration Burst:           no limit
00:27:31.331  
00:27:31.331  Power Management
00:27:31.331  ================
00:27:31.331  Number of Power States:          1
00:27:31.331  Current Power State:             Power State #0
00:27:31.331  Power State #0:
00:27:31.331    Max Power:                     25.00 W
00:27:31.331    Non-Operational State:         Operational
00:27:31.331    Entry Latency:                 16 microseconds
00:27:31.331    Exit Latency:                  4 microseconds
00:27:31.331    Relative Read Throughput:      0
00:27:31.331    Relative Read Latency:         0
00:27:31.331    Relative Write Throughput:     0
00:27:31.331    Relative Write Latency:        0
00:27:31.331    Idle Power:                     Not Reported
00:27:31.331    Active Power:                   Not Reported
00:27:31.331  Non-Operational Permissive Mode: Not Supported
00:27:31.331  
00:27:31.331  Health Information
00:27:31.331  ==================
00:27:31.331  Critical Warnings:
00:27:31.331    Available Spare Space:     OK
00:27:31.331    Temperature:               OK
00:27:31.331    Device Reliability:        OK
00:27:31.331    Read Only:                 No
00:27:31.331    Volatile Memory Backup:    OK
00:27:31.331  Current Temperature:         323 Kelvin (50 Celsius)
00:27:31.331  Temperature Threshold:       343 Kelvin (70 Celsius)
00:27:31.331  Available Spare:             0%
00:27:31.331  Available Spare Threshold:   0%
00:27:31.331  Life Percentage Used:        0%
00:27:31.331  Data Units Read:             8250
00:27:31.331  Data Units Written:          4028
00:27:31.331  Host Read Commands:          376410
00:27:31.331  Host Write Commands:         203375
00:27:31.331  Controller Busy Time:        0 minutes
00:27:31.331  Power Cycles:                0
00:27:31.331  Power On Hours:              0 hours
00:27:31.331  Unsafe Shutdowns:            0
00:27:31.331  Unrecoverable Media Errors:  0
00:27:31.331  Lifetime Error Log Entries:  0
00:27:31.331  Warning Temperature Time:    0 minutes
00:27:31.331  Critical Temperature Time:   0 minutes
00:27:31.331  
00:27:31.331  Number of Queues
00:27:31.331  ================
00:27:31.331  Number of I/O Submission Queues:      64
00:27:31.331  Number of I/O Completion Queues:      64
00:27:31.331  
00:27:31.331  ZNS Specific Controller Data
00:27:31.331  ============================
00:27:31.331  Zone Append Size Limit:      0
00:27:31.331  
00:27:31.331  
00:27:31.331  Active Namespaces
00:27:31.331  =================
00:27:31.331  Namespace ID:1
00:27:31.331  Error Recovery Timeout:                Unlimited
00:27:31.331  Command Set Identifier:                NVM (00h)
00:27:31.331  Deallocate:                            Supported
00:27:31.331  Deallocated/Unwritten Error:           Supported
00:27:31.331  Deallocated Read Value:                All 0x00
00:27:31.331  Deallocate in Write Zeroes:            Not Supported
00:27:31.331  Deallocated Guard Field:               0xFFFF
00:27:31.331  Flush:                                 Supported
00:27:31.331  Reservation:                           Not Supported
00:27:31.331  Namespace Sharing Capabilities:        Private
00:27:31.331  Size (in LBAs):                        1310720 (5GiB)
00:27:31.331  Capacity (in LBAs):                    1310720 (5GiB)
00:27:31.331  Utilization (in LBAs):                 1310720 (5GiB)
00:27:31.331  Thin Provisioning:                     Not Supported
00:27:31.331  Per-NS Atomic Units:                   No
00:27:31.331  Maximum Single Source Range Length:    128
00:27:31.331  Maximum Copy Length:                   128
00:27:31.331  Maximum Source Range Count:            128
00:27:31.331  NGUID/EUI64 Never Reused:              No
00:27:31.331  Namespace Write Protected:             No
00:27:31.331  Number of LBA Formats:                 8
00:27:31.331  Current LBA Format:                    LBA Format #04
00:27:31.331  LBA Format #00: Data Size:   512  Metadata Size:     0
00:27:31.331  LBA Format #01: Data Size:   512  Metadata Size:     8
00:27:31.331  LBA Format #02: Data Size:   512  Metadata Size:    16
00:27:31.331  LBA Format #03: Data Size:   512  Metadata Size:    64
00:27:31.331  LBA Format #04: Data Size:  4096  Metadata Size:     0
00:27:31.331  LBA Format #05: Data Size:  4096  Metadata Size:     8
00:27:31.331  LBA Format #06: Data Size:  4096  Metadata Size:    16
00:27:31.331  LBA Format #07: Data Size:  4096  Metadata Size:    64
00:27:31.331  
00:27:31.331  
00:27:31.331  real	0m0.674s
00:27:31.331  user	0m0.257s
00:27:31.331  sys	0m0.339s
00:27:31.331   17:11:24	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:27:31.331   17:11:24	-- common/autotest_common.sh@10 -- # set +x
00:27:31.331  ************************************
00:27:31.331  END TEST nvme_identify
00:27:31.331  ************************************
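Note the controller was identified twice with identical output: first by letting spdk_nvme_identify attach to everything it probes, then pinned to one device through a transport ID string. The pinned form as a standalone sketch, flags and address exactly as in the second invocation above:

    # Sketch: identify a single controller by PCIe transport ID.
    SPDK=/home/vagrant/spdk_repo/spdk
    "$SPDK/build/bin/spdk_nvme_identify" -r 'trtype:PCIe traddr:0000:00:06.0' -i 0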
00:27:31.589   17:11:24	-- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf
00:27:31.589   17:11:24	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:27:31.589   17:11:24	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:27:31.589   17:11:24	-- common/autotest_common.sh@10 -- # set +x
00:27:31.589  ************************************
00:27:31.589  START TEST nvme_perf
00:27:31.589  ************************************
00:27:31.589   17:11:24	-- common/autotest_common.sh@1114 -- # nvme_perf
00:27:31.589   17:11:24	-- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N
00:27:32.963  Initializing NVMe Controllers
00:27:32.963  Attached to NVMe Controller at 0000:00:06.0 [1b36:0010]
00:27:32.963  Associating PCIE (0000:00:06.0) NSID 1 with lcore 0
00:27:32.963  Initialization complete. Launching workers.
00:27:32.963  ========================================================
00:27:32.963                                                                             Latency(us)
00:27:32.963  Device Information                     :       IOPS      MiB/s    Average        min        max
00:27:32.963  PCIE (0000:00:06.0) NSID 1 from core  0:   52480.00     615.00    2440.81    1304.36    5298.77
00:27:32.963  ========================================================
00:27:32.963  Total                                  :   52480.00     615.00    2440.81    1304.36    5298.77
00:27:32.963  
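The MiB/s column is just IOPS times the 12288-byte I/O size from the perf command line, scaled to mebibytes: 52480 * 12288 / 2^20 = 615.00. A one-line check:

    # Sanity-check MiB/s = IOPS * io_size / 2^20 for the row above.
    echo $(( 52480 * 12288 / 1048576 ))    # -> 615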
00:27:32.963  Summary latency data for PCIE (0000:00:06.0) NSID 1                  from core 0:
00:27:32.963  =================================================================================
00:27:32.963    1.00000% :  1505.768us
00:27:32.963   10.00000% :  1724.221us
00:27:32.963   25.00000% :  1989.486us
00:27:32.963   50.00000% :  2434.194us
00:27:32.963   75.00000% :  2871.101us
00:27:32.963   90.00000% :  3136.366us
00:27:32.963   95.00000% :  3370.423us
00:27:32.963   98.00000% :  3666.895us
00:27:32.963   99.00000% :  3791.726us
00:27:32.963   99.50000% :  3947.764us
00:27:32.963   99.90000% :  4649.935us
00:27:32.963   99.99000% :  5211.672us
00:27:32.963   99.99900% :  5305.295us
00:27:32.963   99.99990% :  5305.295us
00:27:32.963   99.99999% :  5305.295us
00:27:32.963  
00:27:32.963  Latency histogram for PCIE (0000:00:06.0) NSID 1                  from core 0:
00:27:32.963  ==============================================================================
00:27:32.963         Range in us     Cumulative    IO count
00:27:32.963   1302.918 -  1310.720:    0.0038%  (        2)
00:27:32.963   1318.522 -  1326.324:    0.0057%  (        1)
00:27:32.963   1326.324 -  1334.126:    0.0076%  (        1)
00:27:32.963   1334.126 -  1341.928:    0.0133%  (        3)
00:27:32.963   1341.928 -  1349.730:    0.0152%  (        1)
00:27:32.963   1349.730 -  1357.531:    0.0171%  (        1)
00:27:32.963   1357.531 -  1365.333:    0.0229%  (        3)
00:27:32.963   1365.333 -  1373.135:    0.0248%  (        1)
00:27:32.963   1373.135 -  1380.937:    0.0305%  (        3)
00:27:32.963   1380.937 -  1388.739:    0.0324%  (        1)
00:27:32.963   1388.739 -  1396.541:    0.0534%  (       11)
00:27:32.963   1396.541 -  1404.343:    0.0800%  (       14)
00:27:32.963   1404.343 -  1412.145:    0.0991%  (       10)
00:27:32.963   1412.145 -  1419.947:    0.1181%  (       10)
00:27:32.963   1419.947 -  1427.749:    0.1601%  (       22)
00:27:32.963   1427.749 -  1435.550:    0.1982%  (       20)
00:27:32.963   1435.550 -  1443.352:    0.2458%  (       25)
00:27:32.963   1443.352 -  1451.154:    0.2954%  (       26)
00:27:32.963   1451.154 -  1458.956:    0.3773%  (       43)
00:27:32.963   1458.956 -  1466.758:    0.4764%  (       52)
00:27:32.963   1466.758 -  1474.560:    0.5774%  (       53)
00:27:32.963   1474.560 -  1482.362:    0.6917%  (       60)
00:27:32.963   1482.362 -  1490.164:    0.8308%  (       73)
00:27:32.963   1490.164 -  1497.966:    0.9870%  (       82)
00:27:32.963   1497.966 -  1505.768:    1.1528%  (       87)
00:27:32.963   1505.768 -  1513.570:    1.3491%  (      103)
00:27:32.963   1513.570 -  1521.371:    1.5377%  (       99)
00:27:32.963   1521.371 -  1529.173:    1.7664%  (      120)
00:27:32.963   1529.173 -  1536.975:    1.9989%  (      122)
00:27:32.963   1536.975 -  1544.777:    2.2428%  (      128)
00:27:32.963   1544.777 -  1552.579:    2.5095%  (      140)
00:27:32.963   1552.579 -  1560.381:    2.7782%  (      141)
00:27:32.963   1560.381 -  1568.183:    3.0431%  (      139)
00:27:32.963   1568.183 -  1575.985:    3.3727%  (      173)
00:27:32.963   1575.985 -  1583.787:    3.6795%  (      161)
00:27:32.963   1583.787 -  1591.589:    3.9806%  (      158)
00:27:32.963   1591.589 -  1599.390:    4.2854%  (      160)
00:27:32.963   1599.390 -  1607.192:    4.5884%  (      159)
00:27:32.963   1607.192 -  1614.994:    4.9466%  (      188)
00:27:32.963   1614.994 -  1622.796:    5.2744%  (      172)
00:27:32.963   1622.796 -  1630.598:    5.6002%  (      171)
00:27:32.963   1630.598 -  1638.400:    5.9756%  (      197)
00:27:32.963   1638.400 -  1646.202:    6.3243%  (      183)
00:27:32.963   1646.202 -  1654.004:    6.6559%  (      174)
00:27:32.963   1654.004 -  1661.806:    7.0617%  (      213)
00:27:32.963   1661.806 -  1669.608:    7.4314%  (      194)
00:27:32.963   1669.608 -  1677.410:    7.7820%  (      184)
00:27:32.963   1677.410 -  1685.211:    8.2031%  (      221)
00:27:32.964   1685.211 -  1693.013:    8.5766%  (      196)
00:27:32.964   1693.013 -  1700.815:    8.9787%  (      211)
00:27:32.964   1700.815 -  1708.617:    9.4188%  (      231)
00:27:32.964   1708.617 -  1716.419:    9.7942%  (      197)
00:27:32.964   1716.419 -  1724.221:   10.2210%  (      224)
00:27:32.964   1724.221 -  1732.023:   10.6574%  (      229)
00:27:32.964   1732.023 -  1739.825:   11.1052%  (      235)
00:27:32.964   1739.825 -  1747.627:   11.5168%  (      216)
00:27:32.964   1747.627 -  1755.429:   12.0027%  (      255)
00:27:32.964   1755.429 -  1763.230:   12.4047%  (      211)
00:27:32.964   1763.230 -  1771.032:   12.8506%  (      234)
00:27:32.964   1771.032 -  1778.834:   13.3079%  (      240)
00:27:32.964   1778.834 -  1786.636:   13.7367%  (      225)
00:27:32.964   1786.636 -  1794.438:   14.1883%  (      237)
00:27:32.964   1794.438 -  1802.240:   14.6437%  (      239)
00:27:32.964   1802.240 -  1810.042:   15.0724%  (      225)
00:27:32.964   1810.042 -  1817.844:   15.5373%  (      244)
00:27:32.964   1817.844 -  1825.646:   15.9718%  (      228)
00:27:32.964   1825.646 -  1833.448:   16.4139%  (      232)
00:27:32.964   1833.448 -  1841.250:   16.8636%  (      236)
00:27:32.964   1841.250 -  1849.051:   17.3075%  (      233)
00:27:32.964   1849.051 -  1856.853:   17.7458%  (      230)
00:27:32.964   1856.853 -  1864.655:   18.2031%  (      240)
00:27:32.964   1864.655 -  1872.457:   18.6509%  (      235)
00:27:32.964   1872.457 -  1880.259:   19.1254%  (      249)
00:27:32.964   1880.259 -  1888.061:   19.5541%  (      225)
00:27:32.964   1888.061 -  1895.863:   20.0076%  (      238)
00:27:32.964   1895.863 -  1903.665:   20.4630%  (      239)
00:27:32.964   1903.665 -  1911.467:   20.9242%  (      242)
00:27:32.964   1911.467 -  1919.269:   21.3453%  (      221)
00:27:32.964   1919.269 -  1927.070:   21.8197%  (      249)
00:27:32.964   1927.070 -  1934.872:   22.2351%  (      218)
00:27:32.964   1934.872 -  1942.674:   22.7039%  (      246)
00:27:32.964   1942.674 -  1950.476:   23.1441%  (      231)
00:27:32.964   1950.476 -  1958.278:   23.5633%  (      220)
00:27:32.964   1958.278 -  1966.080:   24.0187%  (      239)
00:27:32.964   1966.080 -  1973.882:   24.4817%  (      243)
00:27:32.964   1973.882 -  1981.684:   24.9085%  (      224)
00:27:32.964   1981.684 -  1989.486:   25.3697%  (      242)
00:27:32.964   1989.486 -  1997.288:   25.8060%  (      229)
00:27:32.964   1997.288 -  2012.891:   26.6959%  (      467)
00:27:32.964   2012.891 -  2028.495:   27.6048%  (      477)
00:27:32.964   2028.495 -  2044.099:   28.4718%  (      455)
00:27:32.964   2044.099 -  2059.703:   29.3826%  (      478)
00:27:32.964   2059.703 -  2075.307:   30.2687%  (      465)
00:27:32.964   2075.307 -  2090.910:   31.1509%  (      463)
00:27:32.964   2090.910 -  2106.514:   32.0084%  (      450)
00:27:32.964   2106.514 -  2122.118:   32.9040%  (      470)
00:27:32.964   2122.118 -  2137.722:   33.8472%  (      495)
00:27:32.964   2137.722 -  2153.326:   34.7409%  (      469)
00:27:32.964   2153.326 -  2168.930:   35.5945%  (      448)
00:27:32.964   2168.930 -  2184.533:   36.4958%  (      473)
00:27:32.964   2184.533 -  2200.137:   37.3399%  (      443)
00:27:32.964   2200.137 -  2215.741:   38.2279%  (      466)
00:27:32.964   2215.741 -  2231.345:   39.1101%  (      463)
00:27:32.964   2231.345 -  2246.949:   39.9962%  (      465)
00:27:32.964   2246.949 -  2262.552:   40.8784%  (      463)
00:27:32.964   2262.552 -  2278.156:   41.7893%  (      478)
00:27:32.964   2278.156 -  2293.760:   42.6582%  (      456)
00:27:32.964   2293.760 -  2309.364:   43.5575%  (      472)
00:27:32.964   2309.364 -  2324.968:   44.4627%  (      475)
00:27:32.964   2324.968 -  2340.571:   45.3678%  (      475)
00:27:32.964   2340.571 -  2356.175:   46.2519%  (      464)
00:27:32.964   2356.175 -  2371.779:   47.1532%  (      473)
00:27:32.964   2371.779 -  2387.383:   48.0335%  (      462)
00:27:32.964   2387.383 -  2402.987:   48.9005%  (      455)
00:27:32.964   2402.987 -  2418.590:   49.8095%  (      477)
00:27:32.964   2418.590 -  2434.194:   50.7088%  (      472)
00:27:32.964   2434.194 -  2449.798:   51.6044%  (      470)
00:27:32.964   2449.798 -  2465.402:   52.4676%  (      453)
00:27:32.964   2465.402 -  2481.006:   53.3689%  (      473)
00:27:32.964   2481.006 -  2496.610:   54.2550%  (      465)
00:27:32.964   2496.610 -  2512.213:   55.1448%  (      467)
00:27:32.964   2512.213 -  2527.817:   56.0213%  (      460)
00:27:32.964   2527.817 -  2543.421:   56.9322%  (      478)
00:27:32.964   2543.421 -  2559.025:   57.8316%  (      472)
00:27:32.964   2559.025 -  2574.629:   58.7233%  (      468)
00:27:32.964   2574.629 -  2590.232:   59.5789%  (      449)
00:27:32.964   2590.232 -  2605.836:   60.4745%  (      470)
00:27:32.964   2605.836 -  2621.440:   61.4082%  (      490)
00:27:32.964   2621.440 -  2637.044:   62.2904%  (      463)
00:27:32.964   2637.044 -  2652.648:   63.1726%  (      463)
00:27:32.964   2652.648 -  2668.251:   64.0511%  (      461)
00:27:32.964   2668.251 -  2683.855:   64.9733%  (      484)
00:27:32.964   2683.855 -  2699.459:   65.8670%  (      469)
00:27:32.964   2699.459 -  2715.063:   66.7530%  (      465)
00:27:32.964   2715.063 -  2730.667:   67.6753%  (      484)
00:27:32.964   2730.667 -  2746.270:   68.5957%  (      483)
00:27:32.964   2746.270 -  2761.874:   69.4970%  (      473)
00:27:32.964   2761.874 -  2777.478:   70.4097%  (      479)
00:27:32.964   2777.478 -  2793.082:   71.3186%  (      477)
00:27:32.964   2793.082 -  2808.686:   72.2275%  (      477)
00:27:32.964   2808.686 -  2824.290:   73.1631%  (      491)
00:27:32.964   2824.290 -  2839.893:   74.0568%  (      469)
00:27:32.964   2839.893 -  2855.497:   74.9447%  (      466)
00:27:32.964   2855.497 -  2871.101:   75.8937%  (      498)
00:27:32.964   2871.101 -  2886.705:   76.8140%  (      483)
00:27:32.964   2886.705 -  2902.309:   77.7153%  (      473)
00:27:32.964   2902.309 -  2917.912:   78.6547%  (      493)
00:27:32.964   2917.912 -  2933.516:   79.5960%  (      494)
00:27:32.964   2933.516 -  2949.120:   80.5088%  (      479)
00:27:32.964   2949.120 -  2964.724:   81.4310%  (      484)
00:27:32.964   2964.724 -  2980.328:   82.3628%  (      489)
00:27:32.964   2980.328 -  2995.931:   83.2870%  (      485)
00:27:32.964   2995.931 -  3011.535:   84.1997%  (      479)
00:27:32.964   3011.535 -  3027.139:   85.1448%  (      496)
00:27:32.964   3027.139 -  3042.743:   86.0328%  (      466)
00:27:32.964   3042.743 -  3058.347:   86.8807%  (      445)
00:27:32.964   3058.347 -  3073.950:   87.7210%  (      441)
00:27:32.964   3073.950 -  3089.554:   88.5404%  (      430)
00:27:32.964   3089.554 -  3105.158:   89.2950%  (      396)
00:27:32.964   3105.158 -  3120.762:   89.9695%  (      354)
00:27:32.964   3120.762 -  3136.366:   90.5812%  (      321)
00:27:32.964   3136.366 -  3151.970:   91.1185%  (      282)
00:27:32.964   3151.970 -  3167.573:   91.5987%  (      252)
00:27:32.964   3167.573 -  3183.177:   92.0160%  (      219)
00:27:32.964   3183.177 -  3198.781:   92.4066%  (      205)
00:27:32.964   3198.781 -  3214.385:   92.7420%  (      176)
00:27:32.964   3214.385 -  3229.989:   93.0602%  (      167)
00:27:32.964   3229.989 -  3245.592:   93.3613%  (      158)
00:27:32.964   3245.592 -  3261.196:   93.6414%  (      147)
00:27:32.964   3261.196 -  3276.800:   93.8872%  (      129)
00:27:32.964   3276.800 -  3292.404:   94.1273%  (      126)
00:27:32.964   3292.404 -  3308.008:   94.3445%  (      114)
00:27:32.964   3308.008 -  3323.611:   94.5484%  (      107)
00:27:32.964   3323.611 -  3339.215:   94.7370%  (       99)
00:27:32.964   3339.215 -  3354.819:   94.9333%  (      103)
00:27:32.964   3354.819 -  3370.423:   95.1124%  (       94)
00:27:32.964   3370.423 -  3386.027:   95.2763%  (       86)
00:27:32.964   3386.027 -  3401.630:   95.4364%  (       84)
00:27:32.964   3401.630 -  3417.234:   95.5945%  (       83)
00:27:32.964   3417.234 -  3432.838:   95.7470%  (       80)
00:27:32.964   3432.838 -  3448.442:   95.9032%  (       82)
00:27:32.964   3448.442 -  3464.046:   96.0633%  (       84)
00:27:32.964   3464.046 -  3479.650:   96.2233%  (       84)
00:27:32.964   3479.650 -  3495.253:   96.3796%  (       82)
00:27:32.964   3495.253 -  3510.857:   96.5396%  (       84)
00:27:32.964   3510.857 -  3526.461:   96.6883%  (       78)
00:27:32.964   3526.461 -  3542.065:   96.8445%  (       82)
00:27:32.964   3542.065 -  3557.669:   97.0046%  (       84)
00:27:32.964   3557.669 -  3573.272:   97.1608%  (       82)
00:27:32.964   3573.272 -  3588.876:   97.3152%  (       81)
00:27:32.964   3588.876 -  3604.480:   97.4733%  (       83)
00:27:32.964   3604.480 -  3620.084:   97.6181%  (       76)
00:27:32.964   3620.084 -  3635.688:   97.7763%  (       83)
00:27:32.964   3635.688 -  3651.291:   97.9287%  (       80)
00:27:32.964   3651.291 -  3666.895:   98.0850%  (       82)
00:27:32.964   3666.895 -  3682.499:   98.2336%  (       78)
00:27:32.964   3682.499 -  3698.103:   98.3861%  (       80)
00:27:32.964   3698.103 -  3713.707:   98.5213%  (       71)
00:27:32.964   3713.707 -  3729.310:   98.6490%  (       67)
00:27:32.964   3729.310 -  3744.914:   98.7652%  (       61)
00:27:32.964   3744.914 -  3760.518:   98.8700%  (       55)
00:27:32.964   3760.518 -  3776.122:   98.9653%  (       50)
00:27:32.964   3776.122 -  3791.726:   99.0644%  (       52)
00:27:32.964   3791.726 -  3807.330:   99.1425%  (       41)
00:27:32.964   3807.330 -  3822.933:   99.2111%  (       36)
00:27:32.964   3822.933 -  3838.537:   99.2721%  (       32)
00:27:32.964   3838.537 -  3854.141:   99.3178%  (       24)
00:27:32.964   3854.141 -  3869.745:   99.3598%  (       22)
00:27:32.964   3869.745 -  3885.349:   99.3979%  (       20)
00:27:32.964   3885.349 -  3900.952:   99.4264%  (       15)
00:27:32.964   3900.952 -  3916.556:   99.4588%  (       17)
00:27:32.964   3916.556 -  3932.160:   99.4817%  (       12)
00:27:32.964   3932.160 -  3947.764:   99.5046%  (       12)
00:27:32.964   3947.764 -  3963.368:   99.5255%  (       11)
00:27:32.964   3963.368 -  3978.971:   99.5465%  (       11)
00:27:32.964   3978.971 -  3994.575:   99.5655%  (       10)
00:27:32.964   3994.575 -  4025.783:   99.5979%  (       17)
00:27:32.964   4025.783 -  4056.990:   99.6303%  (       17)
00:27:32.964   4056.990 -  4088.198:   99.6589%  (       15)
00:27:32.964   4088.198 -  4119.406:   99.6837%  (       13)
00:27:32.964   4119.406 -  4150.613:   99.7085%  (       13)
00:27:32.964   4150.613 -  4181.821:   99.7313%  (       12)
00:27:32.964   4181.821 -  4213.029:   99.7466%  (        8)
00:27:32.964   4213.029 -  4244.236:   99.7618%  (        8)
00:27:32.964   4244.236 -  4275.444:   99.7713%  (        5)
00:27:32.964   4275.444 -  4306.651:   99.7847%  (        7)
00:27:32.964   4306.651 -  4337.859:   99.7999%  (        8)
00:27:32.964   4337.859 -  4369.067:   99.8133%  (        7)
00:27:32.964   4369.067 -  4400.274:   99.8266%  (        7)
00:27:32.964   4400.274 -  4431.482:   99.8399%  (        7)
00:27:32.965   4431.482 -  4462.690:   99.8514%  (        6)
00:27:32.965   4462.690 -  4493.897:   99.8628%  (        6)
00:27:32.965   4493.897 -  4525.105:   99.8723%  (        5)
00:27:32.965   4525.105 -  4556.312:   99.8819%  (        5)
00:27:32.965   4556.312 -  4587.520:   99.8895%  (        4)
00:27:32.965   4587.520 -  4618.728:   99.8952%  (        3)
00:27:32.965   4618.728 -  4649.935:   99.9009%  (        3)
00:27:32.965   4649.935 -  4681.143:   99.9085%  (        4)
00:27:32.965   4681.143 -  4712.350:   99.9162%  (        4)
00:27:32.965   4712.350 -  4743.558:   99.9238%  (        4)
00:27:32.965   4743.558 -  4774.766:   99.9314%  (        4)
00:27:32.965   4774.766 -  4805.973:   99.9390%  (        4)
00:27:32.965   4805.973 -  4837.181:   99.9447%  (        3)
00:27:32.965   4837.181 -  4868.389:   99.9505%  (        3)
00:27:32.965   4868.389 -  4899.596:   99.9543%  (        2)
00:27:32.965   4899.596 -  4930.804:   99.9581%  (        2)
00:27:32.965   4930.804 -  4962.011:   99.9619%  (        2)
00:27:32.965   4962.011 -  4993.219:   99.9638%  (        1)
00:27:32.965   4993.219 -  5024.427:   99.9676%  (        2)
00:27:32.965   5024.427 -  5055.634:   99.9714%  (        2)
00:27:32.965   5055.634 -  5086.842:   99.9752%  (        2)
00:27:32.965   5086.842 -  5118.050:   99.9790%  (        2)
00:27:32.965   5118.050 -  5149.257:   99.9829%  (        2)
00:27:32.965   5149.257 -  5180.465:   99.9867%  (        2)
00:27:32.965   5180.465 -  5211.672:   99.9905%  (        2)
00:27:32.965   5211.672 -  5242.880:   99.9943%  (        2)
00:27:32.965   5242.880 -  5274.088:   99.9962%  (        1)
00:27:32.965   5274.088 -  5305.295:  100.0000%  (        2)
00:27:32.965  
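A histogram in this format reduces to approximate percentiles mechanically: scan down the
cumulative column and report the upper edge of the first bucket that reaches the target.
A minimal awk sketch, assuming the histogram lines above were saved to histogram.txt (the
field scan tolerates the leading Jenkins timestamp; the file name is illustrative):

  awk -v target=99.0 '
    /%.*\(/ {
      for (i = 2; i <= NF; i++) if ($i ~ /%$/) { cum = $i; hi = $(i - 1) }
      sub(/%/, "", cum); sub(/:$/, "", hi)    # strip "%" and the range colon
      if (cum + 0 >= target) { print "p" target " <= " hi " us"; exit }
    }' histogram.txt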
00:27:32.965   17:11:25	-- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0
00:27:34.338  Initializing NVMe Controllers
00:27:34.338  Attached to NVMe Controller at 0000:00:06.0 [1b36:0010]
00:27:34.338  Associating PCIE (0000:00:06.0) NSID 1 with lcore 0
00:27:34.338  Initialization complete. Launching workers.
00:27:34.338  ========================================================
00:27:34.338                                                                             Latency(us)
00:27:34.338  Device Information                     :       IOPS      MiB/s    Average        min        max
00:27:34.338  PCIE (0000:00:06.0) NSID 1 from core  0:   53478.00     626.70    2393.44    1206.25    8375.57
00:27:34.338  ========================================================
00:27:34.338  Total                                  :   53478.00     626.70    2393.44    1206.25    8375.57
00:27:34.338  
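The columns above are self-consistent: 53478.00 IOPS of 12288-byte writes is
53478 * 12288 = 657137664 B/s, i.e. 657137664 / 2^20 = 626.70 MiB/s, matching the MiB/s
column; and Little's law holds, since 53478 IOPS * 2.39344 ms average latency is about
128 outstanding IOs, exactly the configured queue depth.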
00:27:34.338  Summary latency data for PCIE (0000:00:06.0) NSID 1                  from core 0:
00:27:34.338  =================================================================================
00:27:34.338    1.00000% :  1614.994us
00:27:34.338   10.00000% :  1841.250us
00:27:34.338   25.00000% :  2012.891us
00:27:34.338   50.00000% :  2309.364us
00:27:34.338   75.00000% :  2668.251us
00:27:34.338   90.00000% :  3136.366us
00:27:34.338   95.00000% :  3417.234us
00:27:34.339   98.00000% :  3604.480us
00:27:34.339   99.00000% :  3760.518us
00:27:34.339   99.50000% :  4181.821us
00:27:34.339   99.90000% :  6397.562us
00:27:34.339   99.99000% :  8363.642us
00:27:34.339   99.99900% :  8426.057us
00:27:34.339   99.99990% :  8426.057us
00:27:34.339   99.99999% :  8426.057us
00:27:34.339  
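The percentile rows above read straight off the histogram that follows: 99.00000% at
3760.518us is the upper edge of the first bucket whose cumulative share reaches 99%
(3744.914 - 3760.518: 99.0033%), and the 99.99999% bound of 8426.057us is the upper edge
of the last occupied bucket, which also contains the reported max of 8375.57 us.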
00:27:34.339  Latency histogram for PCIE (0000:00:06.0) NSID 1                  from core 0:
00:27:34.339  ==============================================================================
00:27:34.339         Range in us     Cumulative    IO count
00:27:34.339   1201.493 -  1209.295:    0.0019%  (        1)
00:27:34.339   1209.295 -  1217.097:    0.0037%  (        1)
00:27:34.339   1217.097 -  1224.899:    0.0056%  (        1)
00:27:34.339   1232.701 -  1240.503:    0.0112%  (        3)
00:27:34.339   1240.503 -  1248.305:    0.0187%  (        4)
00:27:34.339   1248.305 -  1256.107:    0.0280%  (        5)
00:27:34.339   1256.107 -  1263.909:    0.0299%  (        1)
00:27:34.339   1263.909 -  1271.710:    0.0449%  (        8)
00:27:34.339   1271.710 -  1279.512:    0.0524%  (        4)
00:27:34.339   1279.512 -  1287.314:    0.0617%  (        5)
00:27:34.339   1287.314 -  1295.116:    0.0673%  (        3)
00:27:34.339   1295.116 -  1302.918:    0.0729%  (        3)
00:27:34.339   1302.918 -  1310.720:    0.0804%  (        4)
00:27:34.339   1310.720 -  1318.522:    0.0916%  (        6)
00:27:34.339   1318.522 -  1326.324:    0.1028%  (        6)
00:27:34.339   1326.324 -  1334.126:    0.1159%  (        7)
00:27:34.339   1341.928 -  1349.730:    0.1178%  (        1)
00:27:34.339   1349.730 -  1357.531:    0.1253%  (        4)
00:27:34.339   1357.531 -  1365.333:    0.1365%  (        6)
00:27:34.339   1365.333 -  1373.135:    0.1459%  (        5)
00:27:34.339   1373.135 -  1380.937:    0.1552%  (        5)
00:27:34.339   1380.937 -  1388.739:    0.1702%  (        8)
00:27:34.339   1388.739 -  1396.541:    0.1758%  (        3)
00:27:34.339   1396.541 -  1404.343:    0.1870%  (        6)
00:27:34.339   1404.343 -  1412.145:    0.1982%  (        6)
00:27:34.339   1412.145 -  1419.947:    0.2076%  (        5)
00:27:34.339   1419.947 -  1427.749:    0.2132%  (        3)
00:27:34.339   1427.749 -  1435.550:    0.2319%  (       10)
00:27:34.339   1435.550 -  1443.352:    0.2487%  (        9)
00:27:34.339   1443.352 -  1451.154:    0.2581%  (        5)
00:27:34.339   1451.154 -  1458.956:    0.2711%  (        7)
00:27:34.339   1458.956 -  1466.758:    0.2842%  (        7)
00:27:34.339   1466.758 -  1474.560:    0.3029%  (       10)
00:27:34.339   1474.560 -  1482.362:    0.3198%  (        9)
00:27:34.339   1482.362 -  1490.164:    0.3385%  (       10)
00:27:34.339   1490.164 -  1497.966:    0.3534%  (        8)
00:27:34.339   1497.966 -  1505.768:    0.3796%  (       14)
00:27:34.339   1505.768 -  1513.570:    0.4020%  (       12)
00:27:34.339   1513.570 -  1521.371:    0.4245%  (       12)
00:27:34.339   1521.371 -  1529.173:    0.4544%  (       16)
00:27:34.339   1529.173 -  1536.975:    0.4881%  (       18)
00:27:34.339   1536.975 -  1544.777:    0.5217%  (       18)
00:27:34.339   1544.777 -  1552.579:    0.5647%  (       23)
00:27:34.339   1552.579 -  1560.381:    0.5890%  (       13)
00:27:34.339   1560.381 -  1568.183:    0.6395%  (       27)
00:27:34.339   1568.183 -  1575.985:    0.6900%  (       27)
00:27:34.339   1575.985 -  1583.787:    0.7648%  (       40)
00:27:34.339   1583.787 -  1591.589:    0.8246%  (       32)
00:27:34.339   1591.589 -  1599.390:    0.8994%  (       40)
00:27:34.339   1599.390 -  1607.192:    0.9929%  (       50)
00:27:34.339   1607.192 -  1614.994:    1.0771%  (       45)
00:27:34.339   1614.994 -  1622.796:    1.1687%  (       49)
00:27:34.339   1622.796 -  1630.598:    1.2753%  (       57)
00:27:34.339   1630.598 -  1638.400:    1.4062%  (       70)
00:27:34.339   1638.400 -  1646.202:    1.5109%  (       56)
00:27:34.339   1646.202 -  1654.004:    1.6624%  (       81)
00:27:34.339   1654.004 -  1661.806:    1.8325%  (       91)
00:27:34.339   1661.806 -  1669.608:    1.9971%  (       88)
00:27:34.339   1669.608 -  1677.410:    2.1841%  (      100)
00:27:34.339   1677.410 -  1685.211:    2.4029%  (      117)
00:27:34.339   1685.211 -  1693.013:    2.6198%  (      116)
00:27:34.339   1693.013 -  1700.815:    2.8629%  (      130)
00:27:34.339   1700.815 -  1708.617:    3.1340%  (      145)
00:27:34.339   1708.617 -  1716.419:    3.3995%  (      142)
00:27:34.339   1716.419 -  1724.221:    3.6819%  (      151)
00:27:34.339   1724.221 -  1732.023:    3.9680%  (      153)
00:27:34.339   1732.023 -  1739.825:    4.3064%  (      181)
00:27:34.339   1739.825 -  1747.627:    4.6243%  (      170)
00:27:34.339   1747.627 -  1755.429:    4.9684%  (      184)
00:27:34.339   1755.429 -  1763.230:    5.2919%  (      173)
00:27:34.339   1763.230 -  1771.032:    5.6771%  (      206)
00:27:34.339   1771.032 -  1778.834:    6.0548%  (      202)
00:27:34.339   1778.834 -  1786.636:    6.4438%  (      208)
00:27:34.339   1786.636 -  1794.438:    6.9730%  (      283)
00:27:34.339   1794.438 -  1802.240:    7.4648%  (      263)
00:27:34.339   1802.240 -  1810.042:    8.0108%  (      292)
00:27:34.339   1810.042 -  1817.844:    8.5493%  (      288)
00:27:34.339   1817.844 -  1825.646:    9.1028%  (      296)
00:27:34.339   1825.646 -  1833.448:    9.6189%  (      276)
00:27:34.339   1833.448 -  1841.250:   10.2753%  (      351)
00:27:34.339   1841.250 -  1849.051:   10.7914%  (      276)
00:27:34.339   1849.051 -  1856.853:   11.4140%  (      333)
00:27:34.339   1856.853 -  1864.655:   12.3153%  (      482)
00:27:34.339   1864.655 -  1872.457:   12.9792%  (      355)
00:27:34.339   1872.457 -  1880.259:   13.7047%  (      388)
00:27:34.339   1880.259 -  1888.061:   14.4583%  (      403)
00:27:34.339   1888.061 -  1895.863:   15.1988%  (      396)
00:27:34.339   1895.863 -  1903.665:   15.7560%  (      298)
00:27:34.339   1903.665 -  1911.467:   16.3357%  (      310)
00:27:34.339   1911.467 -  1919.269:   16.9677%  (      338)
00:27:34.339   1919.269 -  1927.070:   17.6259%  (      352)
00:27:34.339   1927.070 -  1934.872:   18.4674%  (      450)
00:27:34.339   1934.872 -  1942.674:   19.2995%  (      445)
00:27:34.339   1942.674 -  1950.476:   20.0007%  (      375)
00:27:34.339   1950.476 -  1958.278:   20.7880%  (      421)
00:27:34.339   1958.278 -  1966.080:   21.3957%  (      325)
00:27:34.339   1966.080 -  1973.882:   22.1343%  (      395)
00:27:34.339   1973.882 -  1981.684:   23.1516%  (      544)
00:27:34.339   1981.684 -  1989.486:   24.0304%  (      470)
00:27:34.339   1989.486 -  1997.288:   24.9617%  (      498)
00:27:34.339   1997.288 -  2012.891:   26.7231%  (      942)
00:27:34.339   2012.891 -  2028.495:   28.1667%  (      772)
00:27:34.339   2028.495 -  2044.099:   29.8497%  (      900)
00:27:34.339   2044.099 -  2059.703:   31.4167%  (      838)
00:27:34.339   2059.703 -  2075.307:   32.9724%  (      832)
00:27:34.339   2075.307 -  2090.910:   34.4011%  (      764)
00:27:34.339   2090.910 -  2106.514:   35.8110%  (      754)
00:27:34.339   2106.514 -  2122.118:   36.9255%  (      596)
00:27:34.339   2122.118 -  2137.722:   38.1783%  (      670)
00:27:34.339   2137.722 -  2153.326:   39.3395%  (      621)
00:27:34.339   2153.326 -  2168.930:   40.7046%  (      730)
00:27:34.339   2168.930 -  2184.533:   41.7499%  (      559)
00:27:34.339   2184.533 -  2200.137:   42.8924%  (      611)
00:27:34.339   2200.137 -  2215.741:   44.1022%  (      647)
00:27:34.339   2215.741 -  2231.345:   45.2691%  (      624)
00:27:34.339   2231.345 -  2246.949:   46.4247%  (      618)
00:27:34.339   2246.949 -  2262.552:   47.4232%  (      534)
00:27:34.339   2262.552 -  2278.156:   48.4984%  (      575)
00:27:34.339   2278.156 -  2293.760:   49.5550%  (      565)
00:27:34.339   2293.760 -  2309.364:   50.6807%  (      602)
00:27:34.339   2309.364 -  2324.968:   51.7933%  (      595)
00:27:34.339   2324.968 -  2340.571:   52.8629%  (      572)
00:27:34.339   2340.571 -  2356.175:   53.9287%  (      570)
00:27:34.339   2356.175 -  2371.779:   55.0899%  (      621)
00:27:34.339   2371.779 -  2387.383:   56.2156%  (      602)
00:27:34.339   2387.383 -  2402.987:   57.4404%  (      655)
00:27:34.339   2402.987 -  2418.590:   58.6372%  (      640)
00:27:34.339   2418.590 -  2434.194:   59.8190%  (      632)
00:27:34.339   2434.194 -  2449.798:   60.9970%  (      630)
00:27:34.339   2449.798 -  2465.402:   62.1209%  (      601)
00:27:34.339   2465.402 -  2481.006:   63.2148%  (      585)
00:27:34.339   2481.006 -  2496.610:   64.2619%  (      560)
00:27:34.339   2496.610 -  2512.213:   65.3783%  (      597)
00:27:34.339   2512.213 -  2527.817:   66.4516%  (      574)
00:27:34.339   2527.817 -  2543.421:   67.5418%  (      583)
00:27:34.339   2543.421 -  2559.025:   68.6095%  (      571)
00:27:34.339   2559.025 -  2574.629:   69.7053%  (      586)
00:27:34.339   2574.629 -  2590.232:   70.7712%  (      570)
00:27:34.339   2590.232 -  2605.836:   71.7734%  (      536)
00:27:34.339   2605.836 -  2621.440:   72.7365%  (      515)
00:27:34.339   2621.440 -  2637.044:   73.7313%  (      532)
00:27:34.339   2637.044 -  2652.648:   74.6662%  (      500)
00:27:34.339   2652.648 -  2668.251:   75.5395%  (      467)
00:27:34.339   2668.251 -  2683.855:   76.4183%  (      470)
00:27:34.339   2683.855 -  2699.459:   77.3159%  (      480)
00:27:34.339   2699.459 -  2715.063:   78.0695%  (      403)
00:27:34.339   2715.063 -  2730.667:   78.8549%  (      420)
00:27:34.339   2730.667 -  2746.270:   79.6010%  (      399)
00:27:34.339   2746.270 -  2761.874:   80.2947%  (      371)
00:27:34.339   2761.874 -  2777.478:   80.9884%  (      371)
00:27:34.339   2777.478 -  2793.082:   81.5756%  (      314)
00:27:34.339   2793.082 -  2808.686:   82.1927%  (      330)
00:27:34.339   2808.686 -  2824.290:   82.7742%  (      311)
00:27:34.339   2824.290 -  2839.893:   83.2885%  (      275)
00:27:34.339   2839.893 -  2855.497:   83.8008%  (      274)
00:27:34.339   2855.497 -  2871.101:   84.2720%  (      252)
00:27:34.339   2871.101 -  2886.705:   84.7451%  (      253)
00:27:34.339   2886.705 -  2902.309:   85.1397%  (      211)
00:27:34.339   2902.309 -  2917.912:   85.5436%  (      216)
00:27:34.339   2917.912 -  2933.516:   85.9064%  (      194)
00:27:34.339   2933.516 -  2949.120:   86.2822%  (      201)
00:27:34.339   2949.120 -  2964.724:   86.6300%  (      186)
00:27:34.339   2964.724 -  2980.328:   86.9629%  (      178)
00:27:34.339   2980.328 -  2995.931:   87.2901%  (      175)
00:27:34.339   2995.931 -  3011.535:   87.6229%  (      178)
00:27:34.339   3011.535 -  3027.139:   87.9390%  (      169)
00:27:34.339   3027.139 -  3042.743:   88.2625%  (      173)
00:27:34.339   3042.743 -  3058.347:   88.5654%  (      162)
00:27:34.339   3058.347 -  3073.950:   88.8683%  (      162)
00:27:34.339   3073.950 -  3089.554:   89.1787%  (      166)
00:27:34.339   3089.554 -  3105.158:   89.4536%  (      147)
00:27:34.339   3105.158 -  3120.762:   89.7603%  (      164)
00:27:34.339   3120.762 -  3136.366:   90.0221%  (      140)
00:27:34.340   3136.366 -  3151.970:   90.3119%  (      155)
00:27:34.340   3151.970 -  3167.573:   90.6092%  (      159)
00:27:34.340   3167.573 -  3183.177:   90.8729%  (      141)
00:27:34.340   3183.177 -  3198.781:   91.1609%  (      154)
00:27:34.340   3198.781 -  3214.385:   91.4283%  (      143)
00:27:34.340   3214.385 -  3229.989:   91.7218%  (      157)
00:27:34.340   3229.989 -  3245.592:   91.9967%  (      147)
00:27:34.340   3245.592 -  3261.196:   92.2566%  (      139)
00:27:34.340   3261.196 -  3276.800:   92.5390%  (      151)
00:27:34.340   3276.800 -  3292.404:   92.8064%  (      143)
00:27:34.340   3292.404 -  3308.008:   93.1000%  (      157)
00:27:34.340   3308.008 -  3323.611:   93.3748%  (      147)
00:27:34.340   3323.611 -  3339.215:   93.6479%  (      146)
00:27:34.340   3339.215 -  3354.819:   93.9265%  (      149)
00:27:34.340   3354.819 -  3370.423:   94.2014%  (      147)
00:27:34.340   3370.423 -  3386.027:   94.4725%  (      145)
00:27:34.340   3386.027 -  3401.630:   94.7511%  (      149)
00:27:34.340   3401.630 -  3417.234:   95.0353%  (      152)
00:27:34.340   3417.234 -  3432.838:   95.3121%  (      148)
00:27:34.340   3432.838 -  3448.442:   95.5945%  (      151)
00:27:34.340   3448.442 -  3464.046:   95.8843%  (      155)
00:27:34.340   3464.046 -  3479.650:   96.1517%  (      143)
00:27:34.340   3479.650 -  3495.253:   96.4116%  (      139)
00:27:34.340   3495.253 -  3510.857:   96.6846%  (      146)
00:27:34.340   3510.857 -  3526.461:   96.9296%  (      131)
00:27:34.340   3526.461 -  3542.065:   97.1652%  (      126)
00:27:34.340   3542.065 -  3557.669:   97.4045%  (      128)
00:27:34.340   3557.669 -  3573.272:   97.6364%  (      124)
00:27:34.340   3573.272 -  3588.876:   97.8309%  (      104)
00:27:34.340   3588.876 -  3604.480:   98.0123%  (       97)
00:27:34.340   3604.480 -  3620.084:   98.1806%  (       90)
00:27:34.340   3620.084 -  3635.688:   98.3245%  (       77)
00:27:34.340   3635.688 -  3651.291:   98.4629%  (       74)
00:27:34.340   3651.291 -  3666.895:   98.5714%  (       58)
00:27:34.340   3666.895 -  3682.499:   98.6555%  (       45)
00:27:34.340   3682.499 -  3698.103:   98.7434%  (       47)
00:27:34.340   3698.103 -  3713.707:   98.8201%  (       41)
00:27:34.340   3713.707 -  3729.310:   98.8818%  (       33)
00:27:34.340   3729.310 -  3744.914:   98.9435%  (       33)
00:27:34.340   3744.914 -  3760.518:   99.0033%  (       32)
00:27:34.340   3760.518 -  3776.122:   99.0538%  (       27)
00:27:34.340   3776.122 -  3791.726:   99.0968%  (       23)
00:27:34.340   3791.726 -  3807.330:   99.1417%  (       24)
00:27:34.340   3807.330 -  3822.933:   99.1716%  (       16)
00:27:34.340   3822.933 -  3838.537:   99.2053%  (       18)
00:27:34.340   3838.537 -  3854.141:   99.2445%  (       21)
00:27:34.340   3854.141 -  3869.745:   99.2632%  (       10)
00:27:34.340   3869.745 -  3885.349:   99.2819%  (       10)
00:27:34.340   3885.349 -  3900.952:   99.3044%  (       12)
00:27:34.340   3900.952 -  3916.556:   99.3212%  (        9)
00:27:34.340   3916.556 -  3932.160:   99.3362%  (        8)
00:27:34.340   3932.160 -  3947.764:   99.3530%  (        9)
00:27:34.340   3947.764 -  3963.368:   99.3680%  (        8)
00:27:34.340   3963.368 -  3978.971:   99.3754%  (        4)
00:27:34.340   3978.971 -  3994.575:   99.3867%  (        6)
00:27:34.340   3994.575 -  4025.783:   99.4297%  (       23)
00:27:34.340   4025.783 -  4056.990:   99.4484%  (       10)
00:27:34.340   4056.990 -  4088.198:   99.4652%  (        9)
00:27:34.340   4088.198 -  4119.406:   99.4820%  (        9)
00:27:34.340   4119.406 -  4150.613:   99.4932%  (        6)
00:27:34.340   4150.613 -  4181.821:   99.5138%  (       11)
00:27:34.340   4181.821 -  4213.029:   99.5288%  (        8)
00:27:34.340   4213.029 -  4244.236:   99.5456%  (        9)
00:27:34.340   4244.236 -  4275.444:   99.5643%  (       10)
00:27:34.340   4275.444 -  4306.651:   99.5774%  (        7)
00:27:34.340   4306.651 -  4337.859:   99.5867%  (        5)
00:27:34.340   4337.859 -  4369.067:   99.5980%  (        6)
00:27:34.340   4369.067 -  4400.274:   99.6073%  (        5)
00:27:34.340   4400.274 -  4431.482:   99.6129%  (        3)
00:27:34.340   4431.482 -  4462.690:   99.6223%  (        5)
00:27:34.340   4462.690 -  4493.897:   99.6298%  (        4)
00:27:34.340   4493.897 -  4525.105:   99.6335%  (        2)
00:27:34.340   4525.105 -  4556.312:   99.6372%  (        2)
00:27:34.340   4556.312 -  4587.520:   99.6391%  (        1)
00:27:34.340   4587.520 -  4618.728:   99.6410%  (        1)
00:27:34.340   4618.728 -  4649.935:   99.6447%  (        2)
00:27:34.340   4681.143 -  4712.350:   99.6485%  (        2)
00:27:34.340   4712.350 -  4743.558:   99.6522%  (        2)
00:27:34.340   4743.558 -  4774.766:   99.6559%  (        2)
00:27:34.340   4774.766 -  4805.973:   99.6578%  (        1)
00:27:34.340   4805.973 -  4837.181:   99.6615%  (        2)
00:27:34.340   4837.181 -  4868.389:   99.6653%  (        2)
00:27:34.340   4868.389 -  4899.596:   99.6690%  (        2)
00:27:34.340   4899.596 -  4930.804:   99.6728%  (        2)
00:27:34.340   4930.804 -  4962.011:   99.6746%  (        1)
00:27:34.340   4962.011 -  4993.219:   99.6784%  (        2)
00:27:34.340   4993.219 -  5024.427:   99.6821%  (        2)
00:27:34.340   5024.427 -  5055.634:   99.6840%  (        1)
00:27:34.340   5055.634 -  5086.842:   99.6859%  (        1)
00:27:34.340   5305.295 -  5336.503:   99.6915%  (        3)
00:27:34.340   5336.503 -  5367.710:   99.6971%  (        3)
00:27:34.340   5367.710 -  5398.918:   99.7046%  (        4)
00:27:34.340   5398.918 -  5430.126:   99.7120%  (        4)
00:27:34.340   5430.126 -  5461.333:   99.7195%  (        4)
00:27:34.340   5461.333 -  5492.541:   99.7233%  (        2)
00:27:34.340   5492.541 -  5523.749:   99.7307%  (        4)
00:27:34.340   5523.749 -  5554.956:   99.7382%  (        4)
00:27:34.340   5554.956 -  5586.164:   99.7438%  (        3)
00:27:34.340   5586.164 -  5617.371:   99.7494%  (        3)
00:27:34.340   5617.371 -  5648.579:   99.7588%  (        5)
00:27:34.340   5648.579 -  5679.787:   99.7644%  (        3)
00:27:34.340   5679.787 -  5710.994:   99.7700%  (        3)
00:27:34.340   5710.994 -  5742.202:   99.7737%  (        2)
00:27:34.340   5742.202 -  5773.410:   99.7775%  (        2)
00:27:34.340   5773.410 -  5804.617:   99.7812%  (        2)
00:27:34.340   5804.617 -  5835.825:   99.7850%  (        2)
00:27:34.340   5835.825 -  5867.032:   99.7887%  (        2)
00:27:34.340   5867.032 -  5898.240:   99.7924%  (        2)
00:27:34.340   5898.240 -  5929.448:   99.7999%  (        4)
00:27:34.340   5929.448 -  5960.655:   99.8373%  (       20)
00:27:34.340   5960.655 -  5991.863:   99.8429%  (        3)
00:27:34.340   5991.863 -  6023.070:   99.8467%  (        2)
00:27:34.340   6023.070 -  6054.278:   99.8504%  (        2)
00:27:34.340   6054.278 -  6085.486:   99.8560%  (        3)
00:27:34.340   6085.486 -  6116.693:   99.8598%  (        2)
00:27:34.340   6116.693 -  6147.901:   99.8635%  (        2)
00:27:34.340   6147.901 -  6179.109:   99.8672%  (        2)
00:27:34.340   6179.109 -  6210.316:   99.8728%  (        3)
00:27:34.340   6210.316 -  6241.524:   99.8766%  (        2)
00:27:34.340   6241.524 -  6272.731:   99.8822%  (        3)
00:27:34.340   6272.731 -  6303.939:   99.8859%  (        2)
00:27:34.340   6303.939 -  6335.147:   99.8915%  (        3)
00:27:34.340   6335.147 -  6366.354:   99.8953%  (        2)
00:27:34.340   6366.354 -  6397.562:   99.9009%  (        3)
00:27:34.340   6397.562 -  6428.770:   99.9046%  (        2)
00:27:34.340   6428.770 -  6459.977:   99.9102%  (        3)
00:27:34.340   6459.977 -  6491.185:   99.9140%  (        2)
00:27:34.340   6491.185 -  6522.392:   99.9196%  (        3)
00:27:34.340   6522.392 -  6553.600:   99.9233%  (        2)
00:27:34.340   6553.600 -  6584.808:   99.9289%  (        3)
00:27:34.340   6584.808 -  6616.015:   99.9327%  (        2)
00:27:34.340   6616.015 -  6647.223:   99.9364%  (        2)
00:27:34.340   6647.223 -  6678.430:   99.9420%  (        3)
00:27:34.340   6678.430 -  6709.638:   99.9458%  (        2)
00:27:34.340   6709.638 -  6740.846:   99.9495%  (        2)
00:27:34.340   6740.846 -  6772.053:   99.9551%  (        3)
00:27:34.340   6772.053 -  6803.261:   99.9589%  (        2)
00:27:34.340   6803.261 -  6834.469:   99.9645%  (        3)
00:27:34.340   6834.469 -  6865.676:   99.9663%  (        1)
00:27:34.340   8301.227 -  8363.642:   99.9963%  (       16)
00:27:34.340   8363.642 -  8426.057:  100.0000%  (        2)
00:27:34.340  
00:27:34.340   17:11:26	-- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']'
00:27:34.340  
00:27:34.340  real	0m2.630s
00:27:34.340  user	0m2.213s
00:27:34.340  sys	0m0.285s
00:27:34.340   17:11:26	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:27:34.340   17:11:26	-- common/autotest_common.sh@10 -- # set +x
00:27:34.340  ************************************
00:27:34.340  END TEST nvme_perf
00:27:34.340  ************************************
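The run above reduces to one standalone invocation (paths as they appear in this log;
spdk_nvme_perf normally requires root for device access):

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf \
      -q 128 -w write -o 12288 -t 1 -LL -i 0
  # -q 128: queue depth; -w write: sequential writes; -o 12288: 12 KiB IOs;
  # -t 1: run for 1 second; -LL: latency tracking plus histograms; -i 0: shm group id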
00:27:34.340   17:11:26	-- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:27:34.340   17:11:26	-- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']'
00:27:34.340   17:11:26	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:27:34.340   17:11:26	-- common/autotest_common.sh@10 -- # set +x
00:27:34.340  ************************************
00:27:34.340  START TEST nvme_hello_world
00:27:34.340  ************************************
00:27:34.340   17:11:26	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:27:34.598  Initializing NVMe Controllers
00:27:34.598  Attached to 0000:00:06.0
00:27:34.599    Namespace ID: 1 size: 5GB
00:27:34.599  Initialization complete.
00:27:34.599  INFO: using host memory buffer for IO
00:27:34.599  Hello world!
00:27:34.599  
00:27:34.599  real	0m0.338s
00:27:34.599  user	0m0.120s
00:27:34.599  sys	0m0.141s
00:27:34.599   17:11:27	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:27:34.599   17:11:27	-- common/autotest_common.sh@10 -- # set +x
00:27:34.599  ************************************
00:27:34.599  END TEST nvme_hello_world
00:27:34.599  ************************************
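The "INFO: using host memory buffer for IO" line suggests the emulated controller exposes
no usable controller memory buffer (CMB), so the example fell back to an ordinary host DMA
buffer before writing and reading back its "Hello world!" payload on namespace 1.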
00:27:34.599   17:11:27	-- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
00:27:34.599   17:11:27	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:27:34.599   17:11:27	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:27:34.599   17:11:27	-- common/autotest_common.sh@10 -- # set +x
00:27:34.599  ************************************
00:27:34.599  START TEST nvme_sgl
00:27:34.599  ************************************
00:27:34.599   17:11:27	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
00:27:34.858  0000:00:06.0: build_io_request_0 Invalid IO length parameter
00:27:34.858  0000:00:06.0: build_io_request_1 Invalid IO length parameter
00:27:34.858  0000:00:06.0: build_io_request_3 Invalid IO length parameter
00:27:34.858  0000:00:06.0: build_io_request_8 Invalid IO length parameter
00:27:34.858  0000:00:06.0: build_io_request_9 Invalid IO length parameter
00:27:34.858  0000:00:06.0: build_io_request_11 Invalid IO length parameter
00:27:34.858  NVMe Readv/Writev Request test
00:27:34.858  Attached to 0000:00:06.0
00:27:34.858  0000:00:06.0: build_io_request_2 test passed
00:27:34.858  0000:00:06.0: build_io_request_4 test passed
00:27:34.858  0000:00:06.0: build_io_request_5 test passed
00:27:34.858  0000:00:06.0: build_io_request_6 test passed
00:27:34.858  0000:00:06.0: build_io_request_7 test passed
00:27:34.858  0000:00:06.0: build_io_request_10 test passed
00:27:34.858  Cleaning up...
00:27:34.858  
00:27:34.858  real	0m0.329s
00:27:34.858  user	0m0.124s
00:27:34.858  sys	0m0.134s
00:27:34.858   17:11:27	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:27:34.858   17:11:27	-- common/autotest_common.sh@10 -- # set +x
00:27:34.858  ************************************
00:27:34.858  END TEST nvme_sgl
00:27:34.858  ************************************
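The pass/reject split above looks deliberate: six of the twelve build_io_request_* cases
construct scatter-gather lists with invalid lengths and are expected to be refused, while
the other six must complete. A quick sanity check over the captured output (assuming it
was saved to sgl.log; the file name is illustrative):

  grep -c 'test passed' sgl.log                  # expect 6
  grep -c 'Invalid IO length parameter' sgl.log  # expect 6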
00:27:34.858   17:11:27	-- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp
00:27:34.858   17:11:27	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:27:34.858   17:11:27	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:27:34.858   17:11:27	-- common/autotest_common.sh@10 -- # set +x
00:27:34.858  ************************************
00:27:34.858  START TEST nvme_e2edp
00:27:34.858  ************************************
00:27:34.858   17:11:27	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp
00:27:35.116  NVMe Write/Read with End-to-End data protection test
00:27:35.116  Attached to 0000:00:06.0
00:27:35.116  Cleaning up...
00:27:35.116  
00:27:35.116  real	0m0.273s
00:27:35.116  user	0m0.067s
00:27:35.116  sys	0m0.113s
00:27:35.116   17:11:27	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:27:35.116   17:11:27	-- common/autotest_common.sh@10 -- # set +x
00:27:35.116  ************************************
00:27:35.116  END TEST nvme_e2edp
00:27:35.116  ************************************
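Only attach and cleanup lines appear here, which suggests the emulated namespace is
formatted without protection information, leaving nvme_dp no end-to-end data-protection
cases to exercise; the test still counts as passed.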
00:27:35.374   17:11:28	-- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve
00:27:35.374   17:11:28	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:27:35.374   17:11:28	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:27:35.374   17:11:28	-- common/autotest_common.sh@10 -- # set +x
00:27:35.374  ************************************
00:27:35.374  START TEST nvme_reserve
00:27:35.374  ************************************
00:27:35.374   17:11:28	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve
00:27:35.633  =====================================================
00:27:35.633  NVMe Controller at PCI bus 0, device 6, function 0
00:27:35.633  =====================================================
00:27:35.633  Reservations:                Not Supported
00:27:35.633  Reservation test passed
00:27:35.633  
00:27:35.633  real	0m0.273s
00:27:35.633  user	0m0.079s
00:27:35.633  sys	0m0.138s
00:27:35.633   17:11:28	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:27:35.633   17:11:28	-- common/autotest_common.sh@10 -- # set +x
00:27:35.633  ************************************
00:27:35.633  END TEST nvme_reserve
00:27:35.633  ************************************
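"Reservations: Not Supported" means the QEMU controller does not advertise reservation
support (likely via the ONCS field of Identify Controller), so the test verifies that
detection path and passes without issuing any reservation commands.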
00:27:35.633   17:11:28	-- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection
00:27:35.633   17:11:28	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:27:35.633   17:11:28	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:27:35.633   17:11:28	-- common/autotest_common.sh@10 -- # set +x
00:27:35.633  ************************************
00:27:35.633  START TEST nvme_err_injection
00:27:35.633  ************************************
00:27:35.633   17:11:28	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection
00:27:35.892  NVMe Error Injection test
00:27:35.892  Attached to 0000:00:06.0
00:27:35.892  0000:00:06.0: get features failed as expected
00:27:35.892  0000:00:06.0: get features successfully as expected
00:27:35.892  0000:00:06.0: read failed as expected
00:27:35.892  0000:00:06.0: read successfully as expected
00:27:35.892  Cleaning up...
00:27:35.892  
00:27:35.892  real	0m0.241s
00:27:35.892  user	0m0.092s
00:27:35.892  sys	0m0.095s
00:27:35.892   17:11:28	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:27:35.892   17:11:28	-- common/autotest_common.sh@10 -- # set +x
00:27:35.892  ************************************
00:27:35.892  END TEST nvme_err_injection
00:27:35.892  ************************************
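The "failed as expected" / "successfully as expected" pairs show the two halves of each
case: an error is armed for a command (Get Features, then a read), the failure is
verified, the injection is removed, and the same command is verified to succeed -
presumably through SPDK's error-injection helpers such as
spdk_nvme_qpair_add_cmd_error_injection() and its removal counterpart.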
00:27:35.892   17:11:28	-- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0
00:27:35.892   17:11:28	-- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']'
00:27:35.892   17:11:28	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:27:35.892   17:11:28	-- common/autotest_common.sh@10 -- # set +x
00:27:35.892  ************************************
00:27:35.892  START TEST nvme_overhead
00:27:35.892  ************************************
00:27:35.892   17:11:28	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0
00:27:37.277  Initializing NVMe Controllers
00:27:37.277  Attached to 0000:00:06.0
00:27:37.277  Initialization complete. Launching workers.
00:27:37.277  submit (in ns)   avg, min, max =  13389.1,  11825.7, 261192.4
00:27:37.277  complete (in ns) avg, min, max =   8806.3,   7666.7, 714228.6
00:27:37.277  
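Averaged over the run, the driver spends about 13389.1 ns in the submit path and
8806.3 ns in the completion path per IO, roughly 22.2 us of software overhead per 4 KiB
IO - high in absolute terms, as is to be expected for an emulated controller inside a VM.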
00:27:37.277  Submit histogram
00:27:37.277  ================
00:27:37.277         Range in us     Cumulative     Count
00:27:37.277     11.825 -    11.886:    0.0609%  (        5)
00:27:37.277     11.886 -    11.947:    0.4263%  (       30)
00:27:37.277     11.947 -    12.008:    0.9743%  (       45)
00:27:37.277     12.008 -    12.069:    2.0095%  (       85)
00:27:37.277     12.069 -    12.130:    3.1665%  (       95)
00:27:37.277     12.130 -    12.190:    4.8472%  (      138)
00:27:37.277     12.190 -    12.251:    7.0759%  (      183)
00:27:37.277     12.251 -    12.312:   10.6199%  (      291)
00:27:37.277     12.312 -    12.373:   15.1626%  (      373)
00:27:37.277     12.373 -    12.434:   21.0815%  (      486)
00:27:37.277     12.434 -    12.495:   26.5254%  (      447)
00:27:37.277     12.495 -    12.556:   31.0072%  (      368)
00:27:37.277     12.556 -    12.617:   34.4781%  (      285)
00:27:37.277     12.617 -    12.678:   36.9017%  (      199)
00:27:37.277     12.678 -    12.739:   38.6311%  (      142)
00:27:37.277     12.739 -    12.800:   39.8124%  (       97)
00:27:37.277     12.800 -    12.861:   40.8720%  (       87)
00:27:37.277     12.861 -    12.922:   41.5905%  (       59)
00:27:37.277     12.922 -    12.983:   42.3091%  (       59)
00:27:37.277     12.983 -    13.044:   43.0520%  (       61)
00:27:37.277     13.044 -    13.105:   43.8558%  (       66)
00:27:37.277     13.105 -    13.166:   45.6339%  (      146)
00:27:37.277     13.166 -    13.227:   49.1536%  (      289)
00:27:37.277     13.227 -    13.288:   55.2673%  (      502)
00:27:37.277     13.288 -    13.349:   63.0617%  (      640)
00:27:37.277     13.349 -    13.410:   70.3690%  (      600)
00:27:37.277     13.410 -    13.470:   76.5437%  (      507)
00:27:37.277     13.470 -    13.531:   80.9158%  (      359)
00:27:37.277     13.531 -    13.592:   84.1310%  (      264)
00:27:37.277     13.592 -    13.653:   86.5181%  (      196)
00:27:37.277     13.653 -    13.714:   88.0891%  (      129)
00:27:37.277     13.714 -    13.775:   89.3192%  (      101)
00:27:37.277     13.775 -    13.836:   90.0256%  (       58)
00:27:37.277     13.836 -    13.897:   90.7076%  (       56)
00:27:37.277     13.897 -    13.958:   91.1582%  (       37)
00:27:37.277     13.958 -    14.019:   91.9620%  (       66)
00:27:37.278     14.019 -    14.080:   92.7658%  (       66)
00:27:37.278     14.080 -    14.141:   93.5940%  (       68)
00:27:37.278     14.141 -    14.202:   94.4099%  (       67)
00:27:37.278     14.202 -    14.263:   94.9580%  (       45)
00:27:37.278     14.263 -    14.324:   95.2746%  (       26)
00:27:37.278     14.324 -    14.385:   95.6156%  (       28)
00:27:37.278     14.385 -    14.446:   95.8105%  (       16)
00:27:37.278     14.446 -    14.507:   95.9932%  (       15)
00:27:37.278     14.507 -    14.568:   96.1393%  (       12)
00:27:37.278     14.568 -    14.629:   96.2733%  (       11)
00:27:37.278     14.629 -    14.690:   96.3829%  (        9)
00:27:37.278     14.690 -    14.750:   96.4925%  (        9)
00:27:37.278     14.750 -    14.811:   96.5290%  (        3)
00:27:37.278     14.811 -    14.872:   96.5778%  (        4)
00:27:37.278     14.872 -    14.933:   96.5899%  (        1)
00:27:37.278     14.933 -    14.994:   96.6021%  (        1)
00:27:37.278     14.994 -    15.055:   96.6387%  (        3)
00:27:37.278     15.055 -    15.116:   96.6874%  (        4)
00:27:37.278     15.116 -    15.177:   96.7117%  (        2)
00:27:37.278     15.238 -    15.299:   96.7239%  (        1)
00:27:37.278     15.299 -    15.360:   96.7361%  (        1)
00:27:37.278     15.421 -    15.482:   96.7483%  (        1)
00:27:37.278     15.604 -    15.726:   96.7848%  (        3)
00:27:37.278     15.726 -    15.848:   96.8092%  (        2)
00:27:37.278     15.848 -    15.970:   96.8457%  (        3)
00:27:37.278     15.970 -    16.091:   96.8579%  (        1)
00:27:37.278     16.213 -    16.335:   96.8822%  (        2)
00:27:37.278     16.335 -    16.457:   96.9431%  (        5)
00:27:37.278     16.457 -    16.579:   97.0284%  (        7)
00:27:37.278     16.579 -    16.701:   97.0649%  (        3)
00:27:37.278     16.701 -    16.823:   97.0893%  (        2)
00:27:37.278     16.823 -    16.945:   97.1136%  (        2)
00:27:37.278     16.945 -    17.067:   97.1258%  (        1)
00:27:37.278     17.067 -    17.189:   97.1867%  (        5)
00:27:37.278     17.189 -    17.310:   97.2354%  (        4)
00:27:37.278     17.310 -    17.432:   97.2598%  (        2)
00:27:37.278     17.432 -    17.554:   97.2963%  (        3)
00:27:37.278     17.554 -    17.676:   97.3328%  (        3)
00:27:37.278     17.676 -    17.798:   97.3694%  (        3)
00:27:37.278     17.798 -    17.920:   97.4303%  (        5)
00:27:37.278     17.920 -    18.042:   97.5277%  (        8)
00:27:37.278     18.042 -    18.164:   97.6008%  (        6)
00:27:37.278     18.164 -    18.286:   97.6373%  (        3)
00:27:37.278     18.286 -    18.408:   97.6739%  (        3)
00:27:37.278     18.408 -    18.530:   97.6982%  (        2)
00:27:37.278     18.530 -    18.651:   97.7104%  (        1)
00:27:37.278     18.651 -    18.773:   97.7591%  (        4)
00:27:37.278     18.773 -    18.895:   97.8809%  (       10)
00:27:37.278     18.895 -    19.017:   97.9174%  (        3)
00:27:37.278     19.017 -    19.139:   97.9418%  (        2)
00:27:37.278     19.139 -    19.261:   97.9661%  (        2)
00:27:37.278     19.261 -    19.383:   98.0270%  (        5)
00:27:37.278     19.383 -    19.505:   98.1245%  (        8)
00:27:37.278     19.505 -    19.627:   98.1610%  (        3)
00:27:37.278     19.749 -    19.870:   98.1732%  (        1)
00:27:37.278     19.870 -    19.992:   98.2097%  (        3)
00:27:37.278     19.992 -    20.114:   98.2463%  (        3)
00:27:37.278     20.114 -    20.236:   98.3071%  (        5)
00:27:37.278     20.236 -    20.358:   98.3193%  (        1)
00:27:37.278     20.358 -    20.480:   98.3559%  (        3)
00:27:37.278     20.602 -    20.724:   98.3802%  (        2)
00:27:37.278     20.724 -    20.846:   98.4046%  (        2)
00:27:37.278     20.846 -    20.968:   98.4411%  (        3)
00:27:37.278     20.968 -    21.090:   98.4533%  (        1)
00:27:37.278     21.090 -    21.211:   98.4777%  (        2)
00:27:37.278     21.211 -    21.333:   98.5142%  (        3)
00:27:37.278     21.333 -    21.455:   98.5385%  (        2)
00:27:37.278     21.455 -    21.577:   98.5507%  (        1)
00:27:37.278     21.577 -    21.699:   98.5629%  (        1)
00:27:37.278     21.821 -    21.943:   98.5751%  (        1)
00:27:37.278     22.309 -    22.430:   98.5873%  (        1)
00:27:37.278     22.430 -    22.552:   98.5994%  (        1)
00:27:37.278     23.040 -    23.162:   98.6116%  (        1)
00:27:37.278     23.284 -    23.406:   98.6238%  (        1)
00:27:37.278     24.015 -    24.137:   98.6360%  (        1)
00:27:37.278     24.137 -    24.259:   98.6603%  (        2)
00:27:37.278     24.381 -    24.503:   98.6847%  (        2)
00:27:37.278     24.503 -    24.625:   98.7212%  (        3)
00:27:37.278     24.625 -    24.747:   98.7456%  (        2)
00:27:37.278     24.747 -    24.869:   98.7943%  (        4)
00:27:37.278     24.869 -    24.990:   98.9039%  (        9)
00:27:37.278     24.990 -    25.112:   99.0135%  (        9)
00:27:37.278     25.112 -    25.234:   99.1231%  (        9)
00:27:37.278     25.234 -    25.356:   99.2571%  (       11)
00:27:37.278     25.356 -    25.478:   99.4154%  (       13)
00:27:37.278     25.478 -    25.600:   99.4885%  (        6)
00:27:37.278     25.600 -    25.722:   99.5859%  (        8)
00:27:37.278     25.722 -    25.844:   99.6225%  (        3)
00:27:37.278     25.844 -    25.966:   99.6346%  (        1)
00:27:37.278     26.088 -    26.210:   99.6468%  (        1)
00:27:37.278     26.453 -    26.575:   99.6590%  (        1)
00:27:37.278     28.526 -    28.648:   99.6834%  (        2)
00:27:37.278     28.770 -    28.891:   99.7077%  (        2)
00:27:37.278     29.013 -    29.135:   99.7442%  (        3)
00:27:37.278     29.135 -    29.257:   99.7686%  (        2)
00:27:37.278     29.257 -    29.379:   99.7808%  (        1)
00:27:37.278     29.379 -    29.501:   99.8173%  (        3)
00:27:37.278     29.501 -    29.623:   99.8539%  (        3)
00:27:37.278     29.745 -    29.867:   99.8782%  (        2)
00:27:37.278     29.989 -    30.110:   99.8904%  (        1)
00:27:37.278     30.354 -    30.476:   99.9026%  (        1)
00:27:37.278     32.427 -    32.670:   99.9147%  (        1)
00:27:37.278     35.352 -    35.596:   99.9269%  (        1)
00:27:37.278     53.882 -    54.126:   99.9391%  (        1)
00:27:37.278     98.499 -    98.987:   99.9513%  (        1)
00:27:37.278    105.813 -   106.301:   99.9635%  (        1)
00:27:37.278    121.417 -   121.905:   99.9756%  (        1)
00:27:37.278    131.657 -   132.632:   99.9878%  (        1)
00:27:37.278    259.413 -   261.364:  100.0000%  (        1)
00:27:37.278  
00:27:37.278  Complete histogram
00:27:37.278  ==================
00:27:37.278         Range in us     Cumulative     Count
00:27:37.278      7.650 -     7.680:    0.0853%  (        7)
00:27:37.278      7.680 -     7.710:    1.1570%  (       88)
00:27:37.278      7.710 -     7.741:    4.4574%  (      271)
00:27:37.278      7.741 -     7.771:    9.8526%  (      443)
00:27:37.278      7.771 -     7.802:   14.8825%  (      413)
00:27:37.278      7.802 -     7.863:   23.3711%  (      697)
00:27:37.278      7.863 -     7.924:   26.8420%  (      285)
00:27:37.278      7.924 -     7.985:   28.7785%  (      159)
00:27:37.278      7.985 -     8.046:   30.4104%  (      134)
00:27:37.278      8.046 -     8.107:   31.5187%  (       91)
00:27:37.278      8.107 -     8.168:   32.0180%  (       41)
00:27:37.278      8.168 -     8.229:   33.3455%  (      109)
00:27:37.278      8.229 -     8.290:   35.0627%  (      141)
00:27:37.278      8.290 -     8.350:   36.7434%  (      138)
00:27:37.278      8.350 -     8.411:   41.5662%  (      396)
00:27:37.278      8.411 -     8.472:   62.0874%  (     1685)
00:27:37.278      8.472 -     8.533:   73.6573%  (      950)
00:27:37.278      8.533 -     8.594:   77.9442%  (      352)
00:27:37.278      8.594 -     8.655:   80.1120%  (      178)
00:27:37.278      8.655 -     8.716:   82.9497%  (      233)
00:27:37.278      8.716 -     8.777:   84.4233%  (      121)
00:27:37.278      8.777 -     8.838:   85.3855%  (       79)
00:27:37.278      8.838 -     8.899:   86.0431%  (       54)
00:27:37.278      8.899 -     8.960:   87.0783%  (       85)
00:27:37.278      8.960 -     9.021:   89.4288%  (      193)
00:27:37.278      9.021 -     9.082:   90.8659%  (      118)
00:27:37.279      9.082 -     9.143:   91.7915%  (       76)
00:27:37.279      9.143 -     9.204:   92.4370%  (       53)
00:27:37.279      9.204 -     9.265:   93.0459%  (       50)
00:27:37.279      9.265 -     9.326:   93.8132%  (       63)
00:27:37.279      9.326 -     9.387:   94.2394%  (       35)
00:27:37.279      9.387 -     9.448:   94.6048%  (       30)
00:27:37.279      9.448 -     9.509:   95.0311%  (       35)
00:27:37.279      9.509 -     9.570:   95.3721%  (       28)
00:27:37.279      9.570 -     9.630:   95.5669%  (       16)
00:27:37.279      9.630 -     9.691:   95.6765%  (        9)
00:27:37.279      9.691 -     9.752:   95.8227%  (       12)
00:27:37.279      9.752 -     9.813:   95.9079%  (        7)
00:27:37.279      9.813 -     9.874:   96.0541%  (       12)
00:27:37.279      9.874 -     9.935:   96.1393%  (        7)
00:27:37.279      9.935 -     9.996:   96.2002%  (        5)
00:27:37.279      9.996 -    10.057:   96.2124%  (        1)
00:27:37.279     10.057 -    10.118:   96.2855%  (        6)
00:27:37.279     10.118 -    10.179:   96.3220%  (        3)
00:27:37.279     10.179 -    10.240:   96.3585%  (        3)
00:27:37.279     10.240 -    10.301:   96.4316%  (        6)
00:27:37.279     10.301 -    10.362:   96.4438%  (        1)
00:27:37.279     10.362 -    10.423:   96.4560%  (        1)
00:27:37.279     10.423 -    10.484:   96.4803%  (        2)
00:27:37.279     10.484 -    10.545:   96.5169%  (        3)
00:27:37.279     10.545 -    10.606:   96.5534%  (        3)
00:27:37.279     10.606 -    10.667:   96.5656%  (        1)
00:27:37.279     10.667 -    10.728:   96.5899%  (        2)
00:27:37.279     10.728 -    10.789:   96.6143%  (        2)
00:27:37.279     10.850 -    10.910:   96.6265%  (        1)
00:27:37.279     10.971 -    11.032:   96.6387%  (        1)
00:27:37.279     11.032 -    11.093:   96.6630%  (        2)
00:27:37.279     11.093 -    11.154:   96.6752%  (        1)
00:27:37.279     11.154 -    11.215:   96.6874%  (        1)
00:27:37.279     11.215 -    11.276:   96.6995%  (        1)
00:27:37.279     11.276 -    11.337:   96.7117%  (        1)
00:27:37.279     11.337 -    11.398:   96.7361%  (        2)
00:27:37.279     11.520 -    11.581:   96.7483%  (        1)
00:27:37.279     11.642 -    11.703:   96.7604%  (        1)
00:27:37.279     11.703 -    11.764:   96.8092%  (        4)
00:27:37.279     11.764 -    11.825:   96.8213%  (        1)
00:27:37.279     11.825 -    11.886:   96.8579%  (        3)
00:27:37.279     11.886 -    11.947:   96.9066%  (        4)
00:27:37.279     11.947 -    12.008:   96.9188%  (        1)
00:27:37.279     12.008 -    12.069:   96.9309%  (        1)
00:27:37.279     12.069 -    12.130:   96.9675%  (        3)
00:27:37.279     12.130 -    12.190:   96.9797%  (        1)
00:27:37.279     12.190 -    12.251:   96.9918%  (        1)
00:27:37.279     12.251 -    12.312:   97.0162%  (        2)
00:27:37.279     12.373 -    12.434:   97.0284%  (        1)
00:27:37.279     12.434 -    12.495:   97.0406%  (        1)
00:27:37.279     12.556 -    12.617:   97.0527%  (        1)
00:27:37.279     12.617 -    12.678:   97.0649%  (        1)
00:27:37.279     12.678 -    12.739:   97.1014%  (        3)
00:27:37.279     12.800 -    12.861:   97.1502%  (        4)
00:27:37.279     12.861 -    12.922:   97.1867%  (        3)
00:27:37.279     12.922 -    12.983:   97.2232%  (        3)
00:27:37.279     12.983 -    13.044:   97.2720%  (        4)
00:27:37.279     13.044 -    13.105:   97.3207%  (        4)
00:27:37.279     13.105 -    13.166:   97.3572%  (        3)
00:27:37.279     13.166 -    13.227:   97.3937%  (        3)
00:27:37.279     13.227 -    13.288:   97.4181%  (        2)
00:27:37.279     13.288 -    13.349:   97.4668%  (        4)
00:27:37.279     13.349 -    13.410:   97.4912%  (        2)
00:27:37.279     13.410 -    13.470:   97.5155%  (        2)
00:27:37.279     13.470 -    13.531:   97.5399%  (        2)
00:27:37.279     13.531 -    13.592:   97.5642%  (        2)
00:27:37.279     13.592 -    13.653:   97.5886%  (        2)
00:27:37.279     13.653 -    13.714:   97.6251%  (        3)
00:27:37.279     13.714 -    13.775:   97.6617%  (        3)
00:27:37.279     13.775 -    13.836:   97.6860%  (        2)
00:27:37.279     13.897 -    13.958:   97.7104%  (        2)
00:27:37.279     13.958 -    14.019:   97.7347%  (        2)
00:27:37.279     14.019 -    14.080:   97.7591%  (        2)
00:27:37.279     14.080 -    14.141:   97.7835%  (        2)
00:27:37.279     14.141 -    14.202:   97.8565%  (        6)
00:27:37.279     14.202 -    14.263:   97.8687%  (        1)
00:27:37.279     14.263 -    14.324:   97.8809%  (        1)
00:27:37.279     14.324 -    14.385:   97.8931%  (        1)
00:27:37.279     14.385 -    14.446:   97.9052%  (        1)
00:27:37.279     14.446 -    14.507:   97.9174%  (        1)
00:27:37.279     14.507 -    14.568:   97.9540%  (        3)
00:27:37.279     14.629 -    14.690:   97.9661%  (        1)
00:27:37.279     14.750 -    14.811:   98.0027%  (        3)
00:27:37.279     14.811 -    14.872:   98.0270%  (        2)
00:27:37.279     14.933 -    14.994:   98.0636%  (        3)
00:27:37.279     15.055 -    15.116:   98.1123%  (        4)
00:27:37.279     15.116 -    15.177:   98.1245%  (        1)
00:27:37.279     15.238 -    15.299:   98.1366%  (        1)
00:27:37.279     15.299 -    15.360:   98.1610%  (        2)
00:27:37.279     15.482 -    15.543:   98.1732%  (        1)
00:27:37.279     16.091 -    16.213:   98.1854%  (        1)
00:27:37.279     16.213 -    16.335:   98.2097%  (        2)
00:27:37.279     16.457 -    16.579:   98.2219%  (        1)
00:27:37.279     16.579 -    16.701:   98.2341%  (        1)
00:27:37.279     16.701 -    16.823:   98.2584%  (        2)
00:27:37.279     16.945 -    17.067:   98.2706%  (        1)
00:27:37.279     17.189 -    17.310:   98.2828%  (        1)
00:27:37.279     18.530 -    18.651:   98.2950%  (        1)
00:27:37.279     18.651 -    18.773:   98.3071%  (        1)
00:27:37.279     18.895 -    19.017:   98.3193%  (        1)
00:27:37.279     19.383 -    19.505:   98.3437%  (        2)
00:27:37.279     19.505 -    19.627:   98.3924%  (        4)
00:27:37.279     19.627 -    19.749:   98.4046%  (        1)
00:27:37.279     19.749 -    19.870:   98.4411%  (        3)
00:27:37.279     19.870 -    19.992:   98.4898%  (        4)
00:27:37.279     19.992 -    20.114:   98.6238%  (       11)
00:27:37.279     20.114 -    20.236:   98.7334%  (        9)
00:27:37.279     20.236 -    20.358:   98.8430%  (        9)
00:27:37.279     20.358 -    20.480:   98.9404%  (        8)
00:27:37.279     20.480 -    20.602:   99.0744%  (       11)
00:27:37.279     20.602 -    20.724:   99.2815%  (       17)
00:27:37.279     20.724 -    20.846:   99.3545%  (        6)
00:27:37.279     20.846 -    20.968:   99.4154%  (        5)
00:27:37.279     21.090 -    21.211:   99.4276%  (        1)
00:27:37.279     23.406 -    23.528:   99.4398%  (        1)
00:27:37.279     23.528 -    23.650:   99.4520%  (        1)
00:27:37.279     23.771 -    23.893:   99.4641%  (        1)
00:27:37.279     23.893 -    24.015:   99.4763%  (        1)
00:27:37.279     24.015 -    24.137:   99.5128%  (        3)
00:27:37.279     24.137 -    24.259:   99.5372%  (        2)
00:27:37.279     24.259 -    24.381:   99.5616%  (        2)
00:27:37.279     24.381 -    24.503:   99.5859%  (        2)
00:27:37.279     24.503 -    24.625:   99.6346%  (        4)
00:27:37.279     24.625 -    24.747:   99.6834%  (        4)
00:27:37.279     24.747 -    24.869:   99.7077%  (        2)
00:27:37.279     24.869 -    24.990:   99.7686%  (        5)
00:27:37.279     24.990 -    25.112:   99.7808%  (        1)
00:27:37.279     25.112 -    25.234:   99.8051%  (        2)
00:27:37.279     25.234 -    25.356:   99.8173%  (        1)
00:27:37.279     25.356 -    25.478:   99.8295%  (        1)
00:27:37.279     25.478 -    25.600:   99.8417%  (        1)
00:27:37.279     25.600 -    25.722:   99.8539%  (        1)
00:27:37.279     25.966 -    26.088:   99.8660%  (        1)
00:27:37.279     26.210 -    26.331:   99.8782%  (        1)
00:27:37.280     28.770 -    28.891:   99.8904%  (        1)
00:27:37.280     29.867 -    29.989:   99.9026%  (        1)
00:27:37.280     30.110 -    30.232:   99.9147%  (        1)
00:27:37.280     30.232 -    30.354:   99.9269%  (        1)
00:27:37.280     31.939 -    32.183:   99.9391%  (        1)
00:27:37.280     38.034 -    38.278:   99.9513%  (        1)
00:27:37.280     43.154 -    43.398:   99.9635%  (        1)
00:27:37.280     49.250 -    49.493:   99.9756%  (        1)
00:27:37.280    417.402 -   419.352:   99.9878%  (        1)
00:27:37.280    713.874 -   717.775:  100.0000%  (        1)
00:27:37.280  
00:27:37.280  
00:27:37.280  real	0m1.239s
00:27:37.280  user	0m1.089s
00:27:37.280  sys	0m0.097s
00:27:37.280   17:11:29	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:27:37.280   17:11:29	-- common/autotest_common.sh@10 -- # set +x
00:27:37.280  ************************************
00:27:37.280  END TEST nvme_overhead
00:27:37.280  ************************************
00:27:37.280   17:11:29	-- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0
00:27:37.280   17:11:29	-- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']'
00:27:37.280   17:11:29	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:27:37.280   17:11:29	-- common/autotest_common.sh@10 -- # set +x
00:27:37.280  ************************************
00:27:37.280  START TEST nvme_arbitration
00:27:37.280  ************************************
00:27:37.280   17:11:29	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0
00:27:40.562  Initializing NVMe Controllers
00:27:40.562  Attached to 0000:00:06.0
00:27:40.562  Associating QEMU NVMe Ctrl       (12340               ) with lcore 0
00:27:40.562  Associating QEMU NVMe Ctrl       (12340               ) with lcore 1
00:27:40.562  Associating QEMU NVMe Ctrl       (12340               ) with lcore 2
00:27:40.562  Associating QEMU NVMe Ctrl       (12340               ) with lcore 3
00:27:40.562  /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration:
00:27:40.562  /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0
00:27:40.562  Initialization complete. Launching workers.
00:27:40.562  Starting thread on core 1 with urgent priority queue
00:27:40.562  Starting thread on core 2 with urgent priority queue
00:27:40.562  Starting thread on core 3 with urgent priority queue
00:27:40.562  Starting thread on core 0 with urgent priority queue
00:27:40.562  QEMU NVMe Ctrl       (12340               ) core 0:  6685.67 IO/s    14.96 secs/100000 ios
00:27:40.562  QEMU NVMe Ctrl       (12340               ) core 1:  6633.00 IO/s    15.08 secs/100000 ios
00:27:40.562  QEMU NVMe Ctrl       (12340               ) core 2:  3935.00 IO/s    25.41 secs/100000 ios
00:27:40.562  QEMU NVMe Ctrl       (12340               ) core 3:  3907.00 IO/s    25.60 secs/100000 ios
00:27:40.562  ========================================================
00:27:40.562  
00:27:40.562  
00:27:40.562  real	0m3.312s
00:27:40.562  user	0m9.160s
00:27:40.562  sys	0m0.112s
00:27:40.562   17:11:33	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:27:40.562  ************************************
00:27:40.562  END TEST nvme_arbitration
00:27:40.562  ************************************
00:27:40.562   17:11:33	-- common/autotest_common.sh@10 -- # set +x
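For reference, the arbitration run above is fully specified by the configuration line it echoes; the urgent-priority queues on cores 0-1 sustain ~6.6k IO/s against ~3.9k IO/s on cores 2-3. A re-run sketch with the same flags (the glosses in the comments are my annotations, not from the log; the remaining flags are left exactly as traced):

    # -q 64: queue depth   -w randrw -M 50: 50/50 read/write mix   -t 3: run seconds
    # -c 0xf: worker core mask   -i 0: shared memory group ID
    /home/vagrant/spdk_repo/spdk/build/examples/arbitration \
      -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0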
00:27:40.562   17:11:33	-- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 -L log
00:27:40.562   17:11:33	-- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']'
00:27:40.562   17:11:33	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:27:40.562   17:11:33	-- common/autotest_common.sh@10 -- # set +x
00:27:40.562  ************************************
00:27:40.562  START TEST nvme_single_aen
00:27:40.562  ************************************
00:27:40.562   17:11:33	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 -L log
00:27:40.562  [2024-11-19 17:11:33.323493] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:27:40.562  [2024-11-19 17:11:33.323600] [ DPDK EAL parameters: aer -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:27:40.820  [2024-11-19 17:11:33.495619] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller
00:27:40.820  Asynchronous Event Request test
00:27:40.820  Attached to 0000:00:06.0
00:27:40.820  Reset controller to setup AER completions for this process
00:27:40.820  Registering asynchronous event callbacks...
00:27:40.820  Getting orig temperature thresholds of all controllers
00:27:40.820  0000:00:06.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:27:40.820  Setting all controllers temperature threshold low to trigger AER
00:27:40.820  Waiting for all controllers temperature threshold to be set lower
00:27:40.820  0000:00:06.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:27:40.820  aer_cb - Resetting Temp Threshold for device: 0000:00:06.0
00:27:40.820  Waiting for all controllers to trigger AER and reset threshold
00:27:40.820  0000:00:06.0: Current Temperature:         323 Kelvin (50 Celsius)
00:27:40.820  Cleaning up...
00:27:40.820  
00:27:40.820  real	0m0.252s
00:27:40.820  user	0m0.099s
00:27:40.820  sys	0m0.085s
00:27:40.820   17:11:33	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:27:40.820   17:11:33	-- common/autotest_common.sh@10 -- # set +x
00:27:40.820  ************************************
00:27:40.820  END TEST nvme_single_aen
00:27:40.820  ************************************
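The single-AEN pass works by reading back the temperature threshold (343 K) and then setting it below the live temperature (323 K) so the controller raises an asynchronous event. A hand-driven sketch of the same trick with stock nvme-cli, assuming the controller shows up as /dev/nvme0; feature 0x04 is the standard NVMe Temperature Threshold feature, and the value 300 is illustrative:

    nvme get-feature /dev/nvme0 -f 0x04         # read back the current threshold
    nvme set-feature /dev/nvme0 -f 0x04 -v 300  # below the reported 323 K -> temperature AER fires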
00:27:40.820   17:11:33	-- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers
00:27:40.820   17:11:33	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:27:40.820   17:11:33	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:27:40.821   17:11:33	-- common/autotest_common.sh@10 -- # set +x
00:27:40.821  ************************************
00:27:40.821  START TEST nvme_doorbell_aers
00:27:40.821  ************************************
00:27:40.821   17:11:33	-- common/autotest_common.sh@1114 -- # nvme_doorbell_aers
00:27:40.821   17:11:33	-- nvme/nvme.sh@70 -- # bdfs=()
00:27:40.821   17:11:33	-- nvme/nvme.sh@70 -- # local bdfs bdf
00:27:40.821   17:11:33	-- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs))
00:27:40.821    17:11:33	-- nvme/nvme.sh@71 -- # get_nvme_bdfs
00:27:40.821    17:11:33	-- common/autotest_common.sh@1508 -- # bdfs=()
00:27:40.821    17:11:33	-- common/autotest_common.sh@1508 -- # local bdfs
00:27:40.821    17:11:33	-- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:27:40.821     17:11:33	-- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
00:27:40.821     17:11:33	-- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr'
00:27:40.821    17:11:33	-- common/autotest_common.sh@1510 -- # (( 1 == 0 ))
00:27:40.821    17:11:33	-- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0
00:27:40.821   17:11:33	-- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}"
00:27:40.821   17:11:33	-- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:06.0'
00:27:41.079  [2024-11-19 17:11:33.912071] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 148573) is not found. Dropping the request.
00:27:51.135  Executing: test_write_invalid_db
00:27:51.135  Waiting for AER completion...
00:27:51.135  Failure: test_write_invalid_db
00:27:51.135  
00:27:51.135  Executing: test_invalid_db_write_overflow_sq
00:27:51.135  Waiting for AER completion...
00:27:51.135  Failure: test_invalid_db_write_overflow_sq
00:27:51.135  
00:27:51.135  Executing: test_invalid_db_write_overflow_cq
00:27:51.135  Waiting for AER completion...
00:27:51.135  Failure: test_invalid_db_write_overflow_cq
00:27:51.135  
00:27:51.135  
00:27:51.135  real	0m10.111s
00:27:51.136  user	0m7.416s
00:27:51.136  sys	0m2.635s
00:27:51.136   17:11:43	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:27:51.136   17:11:43	-- common/autotest_common.sh@10 -- # set +x
00:27:51.136  ************************************
00:27:51.136  END TEST nvme_doorbell_aers
00:27:51.136  ************************************
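The doorbell test finds its targets via the get_nvme_bdfs helper traced above: gen_nvme.sh emits a JSON config and jq extracts each controller's traddr. The same idiom as a standalone sketch (rootdir as in the trace):

    rootdir=/home/vagrant/spdk_repo/spdk
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    printf '%s\n' "${bdfs[@]}"    # -> 0000:00:06.0 on this machine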
00:27:51.136    17:11:43	-- nvme/nvme.sh@97 -- # uname
00:27:51.136   17:11:43	-- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']'
00:27:51.136   17:11:43	-- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 -L log
00:27:51.136   17:11:43	-- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']'
00:27:51.136   17:11:43	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:27:51.136   17:11:43	-- common/autotest_common.sh@10 -- # set +x
00:27:51.136  ************************************
00:27:51.136  START TEST nvme_multi_aen
00:27:51.136  ************************************
00:27:51.136   17:11:43	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 -L log
00:27:51.136  [2024-11-19 17:11:43.822609] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:27:51.136  [2024-11-19 17:11:43.822790] [ DPDK EAL parameters: aer -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:27:51.394  [2024-11-19 17:11:44.033297] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller
00:27:51.394  [2024-11-19 17:11:44.033393] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 148573) is not found. Dropping the request.
00:27:51.394  [2024-11-19 17:11:44.033510] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 148573) is not found. Dropping the request.
00:27:51.394  [2024-11-19 17:11:44.033546] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 148573) is not found. Dropping the request.
00:27:51.394  [2024-11-19 17:11:44.041244] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:27:51.394  [2024-11-19 17:11:44.041601] [ DPDK EAL parameters: aer -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:27:51.394  Child process pid: 148769
00:27:51.961  [Child] Asynchronous Event Request test
00:27:51.961  [Child] Attached to 0000:00:06.0
00:27:51.961  [Child] Registering asynchronous event callbacks...
00:27:51.961  [Child] Getting orig temperature thresholds of all controllers
00:27:51.961  [Child] 0000:00:06.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:27:51.961  [Child] Waiting for all controllers to trigger AER and reset threshold
00:27:51.961  [Child] 0000:00:06.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:27:51.961  [Child] 0000:00:06.0: Current Temperature:         323 Kelvin (50 Celsius)
00:27:51.961  [Child] Cleaning up...
00:27:51.961  Asynchronous Event Request test
00:27:51.961  Attached to 0000:00:06.0
00:27:51.962  Reset controller to setup AER completions for this process
00:27:51.962  Registering asynchronous event callbacks...
00:27:51.962  Getting orig temperature thresholds of all controllers
00:27:51.962  0000:00:06.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:27:51.962  Setting all controllers temperature threshold low to trigger AER
00:27:51.962  Waiting for all controllers temperature threshold to be set lower
00:27:51.962  0000:00:06.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:27:51.962  aer_cb - Resetting Temp Threshold for device: 0000:00:06.0
00:27:51.962  Waiting for all controllers to trigger AER and reset threshold
00:27:51.962  0000:00:06.0: Current Temperature:         323 Kelvin (50 Celsius)
00:27:51.962  Cleaning up...
00:27:51.962  
00:27:51.962  real	0m0.818s
00:27:51.962  user	0m0.385s
00:27:51.962  sys	0m0.305s
00:27:51.962   17:11:44	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:27:51.962   17:11:44	-- common/autotest_common.sh@10 -- # set +x
00:27:51.962  ************************************
00:27:51.962  END TEST nvme_multi_aen
00:27:51.962  ************************************
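In the multi-process pass the parent EAL initializes with -c 0x1 and the child it spawns with -c 0x2, both under --file-prefix=spdk0: shared hugepage state, with disjoint core masks so the two reactors never contend for a core. A tiny sanity sketch for that invariant (mask values copied from the EAL parameter lines above):

    primary=0x1; secondary=0x2
    if (( primary & secondary )); then echo "core masks overlap"; else echo "core masks disjoint"; fi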
00:27:51.962   17:11:44	-- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000
00:27:51.962   17:11:44	-- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']'
00:27:51.962   17:11:44	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:27:51.962   17:11:44	-- common/autotest_common.sh@10 -- # set +x
00:27:51.962  ************************************
00:27:51.962  START TEST nvme_startup
00:27:51.962  ************************************
00:27:51.962   17:11:44	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000
00:27:52.221  Initializing NVMe Controllers
00:27:52.221  Attached to 0000:00:06.0
00:27:52.221  Initialization complete.
00:27:52.221  Time used: 218515.734 (us).
00:27:52.221  
00:27:52.221  real	0m0.315s
00:27:52.221  user	0m0.142s
00:27:52.221  sys	0m0.113s
00:27:52.221   17:11:44	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:27:52.221   17:11:44	-- common/autotest_common.sh@10 -- # set +x
00:27:52.221  ************************************
00:27:52.221  END TEST nvme_startup
00:27:52.221  ************************************
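Of the 0m0.315s wall time, initialization proper accounts for the 218515.734 us the tool prints, so controller attach dominates the startup cost. Converting the printed figure, as a quick check:

    awk 'BEGIN { printf "%.3f s\n", 218515.734 / 1e6 }'   # -> 0.219 s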
00:27:52.221   17:11:45	-- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary
00:27:52.221   17:11:45	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:27:52.221   17:11:45	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:27:52.221   17:11:45	-- common/autotest_common.sh@10 -- # set +x
00:27:52.221  ************************************
00:27:52.221  START TEST nvme_multi_secondary
00:27:52.221  ************************************
00:27:52.221   17:11:45	-- common/autotest_common.sh@1114 -- # nvme_multi_secondary
00:27:52.221   17:11:45	-- nvme/nvme.sh@52 -- # pid0=148834
00:27:52.221   17:11:45	-- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1
00:27:52.221   17:11:45	-- nvme/nvme.sh@54 -- # pid1=148835
00:27:52.221   17:11:45	-- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4
00:27:52.221   17:11:45	-- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2
00:27:56.426  Initializing NVMe Controllers
00:27:56.426  Attached to NVMe Controller at 0000:00:06.0 [1b36:0010]
00:27:56.426  Associating PCIE (0000:00:06.0) NSID 1 with lcore 1
00:27:56.426  Initialization complete. Launching workers.
00:27:56.426  ========================================================
00:27:56.426                                                                             Latency(us)
00:27:56.426  Device Information                     :       IOPS      MiB/s    Average        min        max
00:27:56.426  PCIE (0000:00:06.0) NSID 1 from core  1:   32789.33     128.08     487.65     172.69    1691.94
00:27:56.426  ========================================================
00:27:56.426  Total                                  :   32789.33     128.08     487.65     172.69    1691.94
00:27:56.426  
00:27:56.426  Initializing NVMe Controllers
00:27:56.426  Attached to NVMe Controller at 0000:00:06.0 [1b36:0010]
00:27:56.426  Associating PCIE (0000:00:06.0) NSID 1 with lcore 2
00:27:56.426  Initialization complete. Launching workers.
00:27:56.426  ========================================================
00:27:56.426                                                                             Latency(us)
00:27:56.426  Device Information                     :       IOPS      MiB/s    Average        min        max
00:27:56.426  PCIE (0000:00:06.0) NSID 1 from core  2:   15196.92      59.36    1051.80     181.52   24732.47
00:27:56.426  ========================================================
00:27:56.426  Total                                  :   15196.92      59.36    1051.80     181.52   24732.47
00:27:56.426  
00:27:56.426   17:11:48	-- nvme/nvme.sh@56 -- # wait 148834
00:27:57.800  Initializing NVMe Controllers
00:27:57.800  Attached to NVMe Controller at 0000:00:06.0 [1b36:0010]
00:27:57.800  Associating PCIE (0000:00:06.0) NSID 1 with lcore 0
00:27:57.800  Initialization complete. Launching workers.
00:27:57.800  ========================================================
00:27:57.800                                                                             Latency(us)
00:27:57.800  Device Information                     :       IOPS      MiB/s    Average        min        max
00:27:57.800  PCIE (0000:00:06.0) NSID 1 from core  0:   40597.20     158.58     393.78     136.73    2008.13
00:27:57.800  ========================================================
00:27:57.800  Total                                  :   40597.20     158.58     393.78     136.73    2008.13
00:27:57.800  
00:27:57.800   17:11:50	-- nvme/nvme.sh@57 -- # wait 148835
00:27:57.800   17:11:50	-- nvme/nvme.sh@61 -- # pid0=148908
00:27:57.800   17:11:50	-- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1
00:27:57.800   17:11:50	-- nvme/nvme.sh@63 -- # pid1=148909
00:27:57.800   17:11:50	-- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2
00:27:57.800   17:11:50	-- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4
00:28:01.082  Initializing NVMe Controllers
00:28:01.082  Attached to NVMe Controller at 0000:00:06.0 [1b36:0010]
00:28:01.082  Associating PCIE (0000:00:06.0) NSID 1 with lcore 1
00:28:01.082  Initialization complete. Launching workers.
00:28:01.082  ========================================================
00:28:01.082                                                                             Latency(us)
00:28:01.082  Device Information                     :       IOPS      MiB/s    Average        min        max
00:28:01.082  PCIE (0000:00:06.0) NSID 1 from core  1:   32414.79     126.62     493.25     168.47    1719.33
00:28:01.082  ========================================================
00:28:01.082  Total                                  :   32414.79     126.62     493.25     168.47    1719.33
00:28:01.082  
00:28:01.340  Initializing NVMe Controllers
00:28:01.340  Attached to NVMe Controller at 0000:00:06.0 [1b36:0010]
00:28:01.340  Associating PCIE (0000:00:06.0) NSID 1 with lcore 0
00:28:01.340  Initialization complete. Launching workers.
00:28:01.340  ========================================================
00:28:01.340                                                                             Latency(us)
00:28:01.340  Device Information                     :       IOPS      MiB/s    Average        min        max
00:28:01.340  PCIE (0000:00:06.0) NSID 1 from core  0:   34101.33     133.21     468.86     166.00    2017.83
00:28:01.340  ========================================================
00:28:01.340  Total                                  :   34101.33     133.21     468.86     166.00    2017.83
00:28:01.340  
00:28:03.240  Initializing NVMe Controllers
00:28:03.240  Attached to NVMe Controller at 0000:00:06.0 [1b36:0010]
00:28:03.240  Associating PCIE (0000:00:06.0) NSID 1 with lcore 2
00:28:03.240  Initialization complete. Launching workers.
00:28:03.240  ========================================================
00:28:03.240                                                                             Latency(us)
00:28:03.240  Device Information                     :       IOPS      MiB/s    Average        min        max
00:28:03.240  PCIE (0000:00:06.0) NSID 1 from core  2:   17569.75      68.63     910.35     162.34   28543.42
00:28:03.240  ========================================================
00:28:03.240  Total                                  :   17569.75      68.63     910.35     162.34   28543.42
00:28:03.240  
00:28:03.241   17:11:55	-- nvme/nvme.sh@65 -- # wait 148908
00:28:03.241   17:11:55	-- nvme/nvme.sh@66 -- # wait 148909
00:28:03.241  
00:28:03.241  real	0m10.652s
00:28:03.241  user	0m18.544s
00:28:03.241  sys	0m0.911s
00:28:03.241   17:11:55	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:28:03.241   17:11:55	-- common/autotest_common.sh@10 -- # set +x
00:28:03.241  ************************************
00:28:03.241  END TEST nvme_multi_secondary
00:28:03.241  ************************************
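Each multi-secondary round starts one spdk_nvme_perf as the DPDK primary and the others as secondaries, all joined through -i 0 (shared memory group ID) on disjoint core masks. A simplified two-process sketch with flags copied from the traced commands; the backgrounding and sleep are mine, while the harness sequences the real runs via its own pid bookkeeping:

    PERF=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf
    "$PERF" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 &   # first up becomes the primary
    sleep 1                                            # let EAL init finish
    "$PERF" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 &   # attaches as a secondary
    wait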
00:28:03.241   17:11:55	-- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT
00:28:03.241   17:11:55	-- nvme/nvme.sh@102 -- # kill_stub
00:28:03.241   17:11:55	-- common/autotest_common.sh@1075 -- # [[ -e /proc/148136 ]]
00:28:03.241   17:11:55	-- common/autotest_common.sh@1076 -- # kill 148136
00:28:03.241   17:11:55	-- common/autotest_common.sh@1077 -- # wait 148136
00:28:03.499  [2024-11-19 17:11:56.302719] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 148768) is not found. Dropping the request.
00:28:03.499  [2024-11-19 17:11:56.302934] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 148768) is not found. Dropping the request.
00:28:03.499  [2024-11-19 17:11:56.303030] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 148768) is not found. Dropping the request.
00:28:03.499  [2024-11-19 17:11:56.303108] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 148768) is not found. Dropping the request.
00:28:03.757   17:11:56	-- common/autotest_common.sh@1079 -- # rm -f /var/run/spdk_stub0
00:28:03.757   17:11:56	-- common/autotest_common.sh@1083 -- # echo 2
00:28:03.757   17:11:56	-- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh
00:28:03.757   17:11:56	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:28:03.757   17:11:56	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:28:03.757   17:11:56	-- common/autotest_common.sh@10 -- # set +x
00:28:03.757  ************************************
00:28:03.757  START TEST bdev_nvme_reset_stuck_adm_cmd
00:28:03.757  ************************************
00:28:03.757   17:11:56	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh
00:28:03.757  * Looking for test storage...
00:28:03.757  * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme
00:28:03.757    17:11:56	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:28:03.757     17:11:56	-- common/autotest_common.sh@1690 -- # lcov --version
00:28:03.757     17:11:56	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:28:04.016    17:11:56	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:28:04.016    17:11:56	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:28:04.016    17:11:56	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:28:04.016    17:11:56	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:28:04.016    17:11:56	-- scripts/common.sh@335 -- # IFS=.-:
00:28:04.016    17:11:56	-- scripts/common.sh@335 -- # read -ra ver1
00:28:04.016    17:11:56	-- scripts/common.sh@336 -- # IFS=.-:
00:28:04.016    17:11:56	-- scripts/common.sh@336 -- # read -ra ver2
00:28:04.016    17:11:56	-- scripts/common.sh@337 -- # local 'op=<'
00:28:04.016    17:11:56	-- scripts/common.sh@339 -- # ver1_l=2
00:28:04.016    17:11:56	-- scripts/common.sh@340 -- # ver2_l=1
00:28:04.016    17:11:56	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:28:04.016    17:11:56	-- scripts/common.sh@343 -- # case "$op" in
00:28:04.016    17:11:56	-- scripts/common.sh@344 -- # : 1
00:28:04.016    17:11:56	-- scripts/common.sh@363 -- # (( v = 0 ))
00:28:04.016    17:11:56	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:28:04.016     17:11:56	-- scripts/common.sh@364 -- # decimal 1
00:28:04.016     17:11:56	-- scripts/common.sh@352 -- # local d=1
00:28:04.016     17:11:56	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:28:04.016     17:11:56	-- scripts/common.sh@354 -- # echo 1
00:28:04.016    17:11:56	-- scripts/common.sh@364 -- # ver1[v]=1
00:28:04.016     17:11:56	-- scripts/common.sh@365 -- # decimal 2
00:28:04.016     17:11:56	-- scripts/common.sh@352 -- # local d=2
00:28:04.016     17:11:56	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:28:04.016     17:11:56	-- scripts/common.sh@354 -- # echo 2
00:28:04.016    17:11:56	-- scripts/common.sh@365 -- # ver2[v]=2
00:28:04.016    17:11:56	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:28:04.016    17:11:56	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:28:04.016    17:11:56	-- scripts/common.sh@367 -- # return 0
00:28:04.016    17:11:56	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:28:04.016    17:11:56	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:28:04.016  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:28:04.016  		--rc genhtml_branch_coverage=1
00:28:04.016  		--rc genhtml_function_coverage=1
00:28:04.016  		--rc genhtml_legend=1
00:28:04.016  		--rc geninfo_all_blocks=1
00:28:04.016  		--rc geninfo_unexecuted_blocks=1
00:28:04.016  		
00:28:04.016  		'
00:28:04.016    17:11:56	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:28:04.016  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:28:04.016  		--rc genhtml_branch_coverage=1
00:28:04.016  		--rc genhtml_function_coverage=1
00:28:04.016  		--rc genhtml_legend=1
00:28:04.016  		--rc geninfo_all_blocks=1
00:28:04.016  		--rc geninfo_unexecuted_blocks=1
00:28:04.016  		
00:28:04.016  		'
00:28:04.016    17:11:56	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:28:04.016  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:28:04.016  		--rc genhtml_branch_coverage=1
00:28:04.016  		--rc genhtml_function_coverage=1
00:28:04.016  		--rc genhtml_legend=1
00:28:04.016  		--rc geninfo_all_blocks=1
00:28:04.016  		--rc geninfo_unexecuted_blocks=1
00:28:04.016  		
00:28:04.016  		'
00:28:04.016    17:11:56	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:28:04.016  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:28:04.016  		--rc genhtml_branch_coverage=1
00:28:04.016  		--rc genhtml_function_coverage=1
00:28:04.016  		--rc genhtml_legend=1
00:28:04.016  		--rc geninfo_all_blocks=1
00:28:04.017  		--rc geninfo_unexecuted_blocks=1
00:28:04.017  		
00:28:04.017  		'
00:28:04.017   17:11:56	-- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0
00:28:04.017   17:11:56	-- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000
00:28:04.017   17:11:56	-- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5
00:28:04.017   17:11:56	-- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0
00:28:04.017   17:11:56	-- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1
00:28:04.017    17:11:56	-- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf
00:28:04.017    17:11:56	-- common/autotest_common.sh@1519 -- # bdfs=()
00:28:04.017    17:11:56	-- common/autotest_common.sh@1519 -- # local bdfs
00:28:04.017    17:11:56	-- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs))
00:28:04.017     17:11:56	-- common/autotest_common.sh@1520 -- # get_nvme_bdfs
00:28:04.017     17:11:56	-- common/autotest_common.sh@1508 -- # bdfs=()
00:28:04.017     17:11:56	-- common/autotest_common.sh@1508 -- # local bdfs
00:28:04.017     17:11:56	-- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:28:04.017      17:11:56	-- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
00:28:04.017      17:11:56	-- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr'
00:28:04.017     17:11:56	-- common/autotest_common.sh@1510 -- # (( 1 == 0 ))
00:28:04.017     17:11:56	-- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0
00:28:04.017    17:11:56	-- common/autotest_common.sh@1522 -- # echo 0000:00:06.0
00:28:04.017   17:11:56	-- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:06.0
00:28:04.017   17:11:56	-- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:06.0 ']'
00:28:04.017   17:11:56	-- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=149068
00:28:04.017   17:11:56	-- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF
00:28:04.017   17:11:56	-- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT
00:28:04.017   17:11:56	-- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 149068
00:28:04.017   17:11:56	-- common/autotest_common.sh@829 -- # '[' -z 149068 ']'
00:28:04.017   17:11:56	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:28:04.017   17:11:56	-- common/autotest_common.sh@834 -- # local max_retries=100
00:28:04.017  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:28:04.017   17:11:56	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:28:04.017   17:11:56	-- common/autotest_common.sh@838 -- # xtrace_disable
00:28:04.017   17:11:56	-- common/autotest_common.sh@10 -- # set +x
00:28:04.017  [2024-11-19 17:11:56.781442] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:28:04.017  [2024-11-19 17:11:56.781672] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid149068 ]
00:28:04.285  [2024-11-19 17:11:56.981085] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4
00:28:04.285  [2024-11-19 17:11:57.035248] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:28:04.285  [2024-11-19 17:11:57.035712] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:28:04.285  [2024-11-19 17:11:57.035897] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:28:04.285  [2024-11-19 17:11:57.036727] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:28:04.285  [2024-11-19 17:11:57.036726] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:28:04.866   17:11:57	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:28:04.866   17:11:57	-- common/autotest_common.sh@862 -- # return 0
00:28:04.866   17:11:57	-- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:06.0
00:28:04.866   17:11:57	-- common/autotest_common.sh@561 -- # xtrace_disable
00:28:04.866   17:11:57	-- common/autotest_common.sh@10 -- # set +x
00:28:05.124  nvme0n1
00:28:05.124   17:11:57	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:05.124    17:11:57	-- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt
00:28:05.125   17:11:57	-- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_1hmp0.txt
00:28:05.125   17:11:57	-- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit
00:28:05.125   17:11:57	-- common/autotest_common.sh@561 -- # xtrace_disable
00:28:05.125   17:11:57	-- common/autotest_common.sh@10 -- # set +x
00:28:05.125  true
00:28:05.125   17:11:57	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:05.125    17:11:57	-- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s
00:28:05.125   17:11:57	-- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1732036317
00:28:05.125   17:11:57	-- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=149096
00:28:05.125   17:11:57	-- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT
00:28:05.125   17:11:57	-- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2
00:28:05.125   17:11:57	-- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA==
00:28:07.030   17:11:59	-- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0
00:28:07.030   17:11:59	-- common/autotest_common.sh@561 -- # xtrace_disable
00:28:07.030   17:11:59	-- common/autotest_common.sh@10 -- # set +x
00:28:07.030  [2024-11-19 17:11:59.804720] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller
00:28:07.030  [2024-11-19 17:11:59.805077] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:28:07.030  [2024-11-19 17:11:59.805164] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0
00:28:07.030  [2024-11-19 17:11:59.805201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:07.030  [2024-11-19 17:11:59.806881] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:28:07.030   17:11:59	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:07.030  Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 149096
00:28:07.030   17:11:59	-- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 149096
00:28:07.030   17:11:59	-- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 149096
00:28:07.030    17:11:59	-- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s
00:28:07.030   17:11:59	-- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2
00:28:07.030   17:11:59	-- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:07.031   17:11:59	-- common/autotest_common.sh@561 -- # xtrace_disable
00:28:07.031   17:11:59	-- common/autotest_common.sh@10 -- # set +x
00:28:07.031   17:11:59	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:07.031   17:11:59	-- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT
00:28:07.031    17:11:59	-- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_1hmp0.txt
00:28:07.290   17:11:59	-- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA==
00:28:07.290    17:11:59	-- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255
00:28:07.290    17:11:59	-- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status
00:28:07.290    17:11:59	-- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"'))
00:28:07.290     17:11:59	-- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63
00:28:07.290      17:11:59	-- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA==
00:28:07.290     17:11:59	-- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"'
00:28:07.290    17:11:59	-- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2
00:28:07.290    17:11:59	-- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1
00:28:07.290   17:11:59	-- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1
00:28:07.290    17:11:59	-- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3
00:28:07.290    17:11:59	-- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status
00:28:07.290    17:11:59	-- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"'))
00:28:07.290     17:11:59	-- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"'
00:28:07.290      17:11:59	-- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA==
00:28:07.290     17:11:59	-- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63
00:28:07.290    17:11:59	-- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2
00:28:07.290    17:11:59	-- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0
00:28:07.290   17:11:59	-- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0
00:28:07.290   17:11:59	-- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_1hmp0.txt
00:28:07.290   17:11:59	-- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 149068
00:28:07.290   17:11:59	-- common/autotest_common.sh@936 -- # '[' -z 149068 ']'
00:28:07.290   17:11:59	-- common/autotest_common.sh@940 -- # kill -0 149068
00:28:07.290    17:11:59	-- common/autotest_common.sh@941 -- # uname
00:28:07.290   17:11:59	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:28:07.290    17:11:59	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 149068
00:28:07.290   17:11:59	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:28:07.290   17:11:59	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:28:07.290   17:11:59	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 149068'
00:28:07.290  killing process with pid 149068
00:28:07.290   17:11:59	-- common/autotest_common.sh@955 -- # kill 149068
00:28:07.290   17:11:59	-- common/autotest_common.sh@960 -- # wait 149068
00:28:07.549   17:12:00	-- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct ))
00:28:07.549   17:12:00	-- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout ))
00:28:07.549  
00:28:07.549  real	0m3.904s
00:28:07.549  user	0m13.512s
00:28:07.549  sys	0m0.694s
00:28:07.549   17:12:00	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:28:07.549   17:12:00	-- common/autotest_common.sh@10 -- # set +x
00:28:07.549  ************************************
00:28:07.549  END TEST bdev_nvme_reset_stuck_adm_cmd
00:28:07.549  ************************************
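The teardown decodes the captured completion with base64_decode_bits: the two traced calls slice bits 1-8 (Status Code) and bits 9-11 (Status Code Type) out of the 16-bit status word, here 0x0002, giving SC=0x1 and SCT=0x0, i.e. generic status Invalid Command Opcode, exactly what the error injection produced. The same arithmetic spelled out in plain shell (status word taken from the hexdump above; bit layout per the NVMe completion queue entry):

    # CQE status word: bit 0 = phase tag, bits 1-8 = SC, bits 9-11 = SCT
    status=0x0002
    printf 'SC=0x%x SCT=0x%x\n' $(( (status >> 1) & 0xff )) $(( (status >> 9) & 0x7 ))
    # -> SC=0x1 SCT=0x0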
00:28:07.549   17:12:00	-- nvme/nvme.sh@107 -- # [[ y == y ]]
00:28:07.549   17:12:00	-- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test
00:28:07.549   17:12:00	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:28:07.549   17:12:00	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:28:07.549   17:12:00	-- common/autotest_common.sh@10 -- # set +x
00:28:07.808  ************************************
00:28:07.808  START TEST nvme_fio
00:28:07.808  ************************************
00:28:07.808   17:12:00	-- common/autotest_common.sh@1114 -- # nvme_fio_test
00:28:07.808   17:12:00	-- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme
00:28:07.808   17:12:00	-- nvme/nvme.sh@32 -- # ran_fio=false
00:28:07.808    17:12:00	-- nvme/nvme.sh@33 -- # get_nvme_bdfs
00:28:07.808    17:12:00	-- common/autotest_common.sh@1508 -- # bdfs=()
00:28:07.808    17:12:00	-- common/autotest_common.sh@1508 -- # local bdfs
00:28:07.808    17:12:00	-- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:28:07.808     17:12:00	-- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
00:28:07.808     17:12:00	-- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr'
00:28:07.808    17:12:00	-- common/autotest_common.sh@1510 -- # (( 1 == 0 ))
00:28:07.808    17:12:00	-- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0
00:28:07.808   17:12:00	-- nvme/nvme.sh@33 -- # bdfs=('0000:00:06.0')
00:28:07.808   17:12:00	-- nvme/nvme.sh@33 -- # local bdfs bdf
00:28:07.808   17:12:00	-- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}"
00:28:07.808   17:12:00	-- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0'
00:28:07.808   17:12:00	-- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+'
00:28:08.068   17:12:00	-- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0'
00:28:08.068   17:12:00	-- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA'
00:28:08.327   17:12:00	-- nvme/nvme.sh@41 -- # bs=4096
00:28:08.327   17:12:00	-- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.06.0' --bs=4096
00:28:08.327   17:12:00	-- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.06.0' --bs=4096
00:28:08.327   17:12:00	-- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio
00:28:08.327   17:12:00	-- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:28:08.327   17:12:00	-- common/autotest_common.sh@1328 -- # local sanitizers
00:28:08.327   17:12:00	-- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
00:28:08.327   17:12:00	-- common/autotest_common.sh@1330 -- # shift
00:28:08.327   17:12:00	-- common/autotest_common.sh@1332 -- # local asan_lib=
00:28:08.327   17:12:00	-- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}"
00:28:08.327    17:12:00	-- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
00:28:08.327    17:12:00	-- common/autotest_common.sh@1334 -- # grep libasan
00:28:08.327    17:12:00	-- common/autotest_common.sh@1334 -- # awk '{print $3}'
00:28:08.327   17:12:00	-- common/autotest_common.sh@1334 -- # asan_lib=/lib/x86_64-linux-gnu/libasan.so.6
00:28:08.327   17:12:00	-- common/autotest_common.sh@1335 -- # [[ -n /lib/x86_64-linux-gnu/libasan.so.6 ]]
00:28:08.327   17:12:00	-- common/autotest_common.sh@1336 -- # break
00:28:08.327   17:12:00	-- common/autotest_common.sh@1341 -- # LD_PRELOAD='/lib/x86_64-linux-gnu/libasan.so.6 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme'
00:28:08.327   17:12:00	-- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.06.0' --bs=4096
00:28:08.327  test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128
00:28:08.327  fio-3.35
00:28:08.327  Starting 1 thread
00:28:11.614  
00:28:11.614  test: (groupid=0, jobs=1): err= 0: pid=149231: Tue Nov 19 17:12:04 2024
00:28:11.614    read: IOPS=20.1k, BW=78.5MiB/s (82.3MB/s)(157MiB/2001msec)
00:28:11.614      slat (usec): min=4, max=198, avg= 4.96, stdev= 1.95
00:28:11.614      clat (usec): min=218, max=12190, avg=3174.63, stdev=371.63
00:28:11.614       lat (usec): min=222, max=12285, avg=3179.59, stdev=372.09
00:28:11.614      clat percentiles (usec):
00:28:11.614       |  1.00th=[ 2868],  5.00th=[ 2933], 10.00th=[ 2966], 20.00th=[ 2999],
00:28:11.614       | 30.00th=[ 3032], 40.00th=[ 3064], 50.00th=[ 3097], 60.00th=[ 3130],
00:28:11.614       | 70.00th=[ 3163], 80.00th=[ 3195], 90.00th=[ 3589], 95.00th=[ 3884],
00:28:11.614       | 99.00th=[ 4080], 99.50th=[ 4146], 99.90th=[ 8586], 99.95th=[10290],
00:28:11.614       | 99.99th=[11994]
00:28:11.614     bw (  KiB/s): min=72824, max=83008, per=98.78%, avg=79362.67, stdev=5675.12, samples=3
00:28:11.614     iops        : min=18206, max=20752, avg=19840.67, stdev=1418.78, samples=3
00:28:11.614    write: IOPS=20.0k, BW=78.3MiB/s (82.1MB/s)(157MiB/2001msec); 0 zone resets
00:28:11.614      slat (usec): min=4, max=127, avg= 5.13, stdev= 1.72
00:28:11.614      clat (usec): min=226, max=12041, avg=3185.08, stdev=382.09
00:28:11.614       lat (usec): min=231, max=12062, avg=3190.21, stdev=382.53
00:28:11.614      clat percentiles (usec):
00:28:11.614       |  1.00th=[ 2868],  5.00th=[ 2933], 10.00th=[ 2966], 20.00th=[ 3032],
00:28:11.614       | 30.00th=[ 3064], 40.00th=[ 3064], 50.00th=[ 3097], 60.00th=[ 3130],
00:28:11.614       | 70.00th=[ 3163], 80.00th=[ 3228], 90.00th=[ 3687], 95.00th=[ 3916],
00:28:11.614       | 99.00th=[ 4080], 99.50th=[ 4146], 99.90th=[ 9110], 99.95th=[10552],
00:28:11.614       | 99.99th=[11731]
00:28:11.614     bw (  KiB/s): min=72920, max=82960, per=99.05%, avg=79381.33, stdev=5606.49, samples=3
00:28:11.614     iops        : min=18230, max=20740, avg=19845.33, stdev=1401.62, samples=3
00:28:11.614    lat (usec)   : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01%
00:28:11.614    lat (msec)   : 2=0.05%, 4=97.34%, 10=2.49%, 20=0.06%
00:28:11.614    cpu          : usr=99.80%, sys=0.00%, ctx=21, majf=0, minf=41
00:28:11.614    IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9%
00:28:11.614       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:28:11.614       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:28:11.614       issued rwts: total=40192,40093,0,0 short=0,0,0,0 dropped=0,0,0,0
00:28:11.614       latency   : target=0, window=0, percentile=100.00%, depth=128
00:28:11.614  
00:28:11.614  Run status group 0 (all jobs):
00:28:11.614     READ: bw=78.5MiB/s (82.3MB/s), 78.5MiB/s-78.5MiB/s (82.3MB/s-82.3MB/s), io=157MiB (165MB), run=2001-2001msec
00:28:11.614    WRITE: bw=78.3MiB/s (82.1MB/s), 78.3MiB/s-78.3MiB/s (82.1MB/s-82.1MB/s), io=157MiB (164MB), run=2001-2001msec
00:28:11.873  -----------------------------------------------------
00:28:11.873  Suppressions used:
00:28:11.873    count      bytes template
00:28:11.873        1         32 /usr/src/fio/parse.c
00:28:11.873  -----------------------------------------------------
00:28:11.873  
00:28:11.873   17:12:04	-- nvme/nvme.sh@44 -- # ran_fio=true
00:28:11.873   17:12:04	-- nvme/nvme.sh@46 -- # true
00:28:11.873  ************************************
00:28:11.873  END TEST nvme_fio
00:28:11.873  ************************************
00:28:11.873  
00:28:11.873  real	0m4.241s
00:28:11.873  user	0m3.515s
00:28:11.873  sys	0m0.416s
00:28:11.873   17:12:04	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:28:11.873   17:12:04	-- common/autotest_common.sh@10 -- # set +x
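The fio stage above drives fio through SPDK's external ioengine: the plugin .so is LD_PRELOADed (stacked with libasan in this tree) and the target is named by transport string, with dots standing in for the colons fio reserves in filenames. Stripped of the sanitizer plumbing, the traced invocation reduces to:

    LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme \
      /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio \
      '--filename=trtype=PCIe traddr=0000.00.06.0' --bs=4096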
00:28:11.873  ************************************
00:28:11.873  END TEST nvme
00:28:11.873  ************************************
00:28:11.873  
00:28:11.873  real	0m45.299s
00:28:11.873  user	1m57.551s
00:28:11.873  sys	0m9.742s
00:28:11.873   17:12:04	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:28:11.873   17:12:04	-- common/autotest_common.sh@10 -- # set +x
00:28:12.132   17:12:04	-- spdk/autotest.sh@210 -- # [[ 0 -eq 1 ]]
00:28:12.132   17:12:04	-- spdk/autotest.sh@214 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh
00:28:12.132   17:12:04	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:28:12.132   17:12:04	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:28:12.132   17:12:04	-- common/autotest_common.sh@10 -- # set +x
00:28:12.132  ************************************
00:28:12.132  START TEST nvme_scc
00:28:12.132  ************************************
00:28:12.132   17:12:04	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh
00:28:12.132  * Looking for test storage...
00:28:12.132  * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme
00:28:12.132     17:12:04	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:28:12.132      17:12:04	-- common/autotest_common.sh@1690 -- # lcov --version
00:28:12.132      17:12:04	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:28:12.132     17:12:04	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:28:12.132     17:12:04	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:28:12.132     17:12:04	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:28:12.132     17:12:04	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:28:12.132     17:12:04	-- scripts/common.sh@335 -- # IFS=.-:
00:28:12.132     17:12:04	-- scripts/common.sh@335 -- # read -ra ver1
00:28:12.132     17:12:04	-- scripts/common.sh@336 -- # IFS=.-:
00:28:12.132     17:12:04	-- scripts/common.sh@336 -- # read -ra ver2
00:28:12.132     17:12:04	-- scripts/common.sh@337 -- # local 'op=<'
00:28:12.132     17:12:04	-- scripts/common.sh@339 -- # ver1_l=2
00:28:12.132     17:12:04	-- scripts/common.sh@340 -- # ver2_l=1
00:28:12.132     17:12:04	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:28:12.132     17:12:04	-- scripts/common.sh@343 -- # case "$op" in
00:28:12.132     17:12:04	-- scripts/common.sh@344 -- # : 1
00:28:12.132     17:12:04	-- scripts/common.sh@363 -- # (( v = 0 ))
00:28:12.132     17:12:04	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:28:12.132      17:12:04	-- scripts/common.sh@364 -- # decimal 1
00:28:12.132      17:12:04	-- scripts/common.sh@352 -- # local d=1
00:28:12.132      17:12:04	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:28:12.132      17:12:04	-- scripts/common.sh@354 -- # echo 1
00:28:12.132     17:12:04	-- scripts/common.sh@364 -- # ver1[v]=1
00:28:12.132      17:12:04	-- scripts/common.sh@365 -- # decimal 2
00:28:12.132      17:12:04	-- scripts/common.sh@352 -- # local d=2
00:28:12.132      17:12:04	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:28:12.132      17:12:04	-- scripts/common.sh@354 -- # echo 2
00:28:12.132     17:12:04	-- scripts/common.sh@365 -- # ver2[v]=2
00:28:12.132     17:12:04	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:28:12.133     17:12:04	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:28:12.133     17:12:04	-- scripts/common.sh@367 -- # return 0
00:28:12.133     17:12:04	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:28:12.133     17:12:04	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:28:12.133  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:28:12.133  		--rc genhtml_branch_coverage=1
00:28:12.133  		--rc genhtml_function_coverage=1
00:28:12.133  		--rc genhtml_legend=1
00:28:12.133  		--rc geninfo_all_blocks=1
00:28:12.133  		--rc geninfo_unexecuted_blocks=1
00:28:12.133  		
00:28:12.133  		'
00:28:12.133     17:12:04	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:28:12.133  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:28:12.133  		--rc genhtml_branch_coverage=1
00:28:12.133  		--rc genhtml_function_coverage=1
00:28:12.133  		--rc genhtml_legend=1
00:28:12.133  		--rc geninfo_all_blocks=1
00:28:12.133  		--rc geninfo_unexecuted_blocks=1
00:28:12.133  		
00:28:12.133  		'
00:28:12.133     17:12:04	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:28:12.133  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:28:12.133  		--rc genhtml_branch_coverage=1
00:28:12.133  		--rc genhtml_function_coverage=1
00:28:12.133  		--rc genhtml_legend=1
00:28:12.133  		--rc geninfo_all_blocks=1
00:28:12.133  		--rc geninfo_unexecuted_blocks=1
00:28:12.133  		
00:28:12.133  		'
00:28:12.133     17:12:04	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:28:12.133  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:28:12.133  		--rc genhtml_branch_coverage=1
00:28:12.133  		--rc genhtml_function_coverage=1
00:28:12.133  		--rc genhtml_legend=1
00:28:12.133  		--rc geninfo_all_blocks=1
00:28:12.133  		--rc geninfo_unexecuted_blocks=1
00:28:12.133  		
00:28:12.133  		'
00:28:12.133    17:12:04	-- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh
00:28:12.133       17:12:04	-- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh
00:28:12.133      17:12:04	-- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../
00:28:12.133     17:12:04	-- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk
00:28:12.133     17:12:04	-- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:28:12.133      17:12:04	-- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]]
00:28:12.133      17:12:04	-- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:28:12.133      17:12:04	-- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:28:12.133       17:12:04	-- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:28:12.133       17:12:04	-- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:28:12.133       17:12:04	-- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:28:12.133       17:12:04	-- paths/export.sh@5 -- # export PATH
00:28:12.133       17:12:04	-- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:28:12.133     17:12:04	-- nvme/functions.sh@10 -- # ctrls=()
00:28:12.133     17:12:04	-- nvme/functions.sh@10 -- # declare -A ctrls
00:28:12.133     17:12:04	-- nvme/functions.sh@11 -- # nvmes=()
00:28:12.133     17:12:04	-- nvme/functions.sh@11 -- # declare -A nvmes
00:28:12.133     17:12:04	-- nvme/functions.sh@12 -- # bdfs=()
00:28:12.133     17:12:04	-- nvme/functions.sh@12 -- # declare -A bdfs
00:28:12.133     17:12:04	-- nvme/functions.sh@13 -- # ordered_ctrls=()
00:28:12.133     17:12:04	-- nvme/functions.sh@13 -- # declare -a ordered_ctrls
00:28:12.133     17:12:04	-- nvme/functions.sh@14 -- # nvme_name=
00:28:12.133    17:12:04	-- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:28:12.133    17:12:04	-- nvme/nvme_scc.sh@12 -- # uname
00:28:12.133   17:12:04	-- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]]
00:28:12.133   17:12:04	-- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]]
00:28:12.133   17:12:04	-- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:28:12.702  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev
00:28:12.702  Waiting for block devices as requested
00:28:12.702  0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme
00:28:12.702   17:12:05	-- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls
00:28:12.702   17:12:05	-- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci
00:28:12.702   17:12:05	-- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
00:28:12.702   17:12:05	-- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]]
00:28:12.702   17:12:05	-- nvme/functions.sh@49 -- # pci=0000:00:06.0
00:28:12.702   17:12:05	-- nvme/functions.sh@50 -- # pci_can_use 0000:00:06.0
00:28:12.702   17:12:05	-- scripts/common.sh@15 -- # local i
00:28:12.702   17:12:05	-- scripts/common.sh@18 -- # [[    =~  0000:00:06.0  ]]
00:28:12.702   17:12:05	-- scripts/common.sh@22 -- # [[ -z '' ]]
00:28:12.702   17:12:05	-- scripts/common.sh@24 -- # return 0
00:28:12.702   17:12:05	-- nvme/functions.sh@51 -- # ctrl_dev=nvme0
00:28:12.702   17:12:05	-- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0
00:28:12.702   17:12:05	-- nvme/functions.sh@17 -- # local ref=nvme0 reg val
00:28:12.702   17:12:05	-- nvme/functions.sh@18 -- # shift
00:28:12.702   17:12:05	-- nvme/functions.sh@20 -- # local -gA 'nvme0=()'
00:28:12.702   17:12:05	-- nvme/functions.sh@21 -- # IFS=:
00:28:12.702    17:12:05	-- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0
00:28:12.702   17:12:05	-- nvme/functions.sh@21 -- # read -r reg val
00:28:12.702   17:12:05	-- nvme/functions.sh@22 -- # [[ -n '' ]]
00:28:12.702   17:12:05	-- nvme/functions.sh@21 -- # IFS=:
00:28:12.702   17:12:05	-- nvme/functions.sh@21 -- # read -r reg val
00:28:12.702   17:12:05	-- nvme/functions.sh@22 -- # [[ -n  0x1b36 ]]
00:28:12.702   17:12:05	-- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"'
00:28:12.702    17:12:05	-- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36
00:28:12.702   17:12:05	-- nvme/functions.sh@21 -- # IFS=:
00:28:12.702   17:12:05	-- nvme/functions.sh@21 -- # read -r reg val
00:28:12.702   17:12:05	-- nvme/functions.sh@22 -- # [[ -n  0x1af4 ]]
00:28:12.702   17:12:05	-- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"'
00:28:12.702    17:12:05	-- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4
00:28:12.702   17:12:05	-- nvme/functions.sh@21 -- # IFS=:
00:28:12.702   17:12:05	-- nvme/functions.sh@21 -- # read -r reg val
00:28:12.702   17:12:05	-- nvme/functions.sh@22 -- # [[ -n  12340                ]]
00:28:12.702   17:12:05	-- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12340               "'
00:28:12.702    17:12:05	-- nvme/functions.sh@23 -- # nvme0[sn]='12340               '
00:28:12.702   17:12:05	-- nvme/functions.sh@21 -- # IFS=:
00:28:12.702   17:12:05	-- nvme/functions.sh@21 -- # read -r reg val
00:28:12.702   17:12:05	-- nvme/functions.sh@22 -- # [[ -n  QEMU NVMe Ctrl                           ]]
00:28:12.702   17:12:05	-- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl                          "'
00:28:12.702    17:12:05	-- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl                          '
00:28:12.702   17:12:05	-- nvme/functions.sh@21 -- # IFS=:
00:28:12.702   17:12:05	-- nvme/functions.sh@21 -- # read -r reg val
00:28:12.702   17:12:05	-- nvme/functions.sh@22 -- # [[ -n  8.0.0    ]]
00:28:12.702   17:12:05	-- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0   "'
00:28:12.702    17:12:05	-- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0   '
00:28:12.702   17:12:05	-- nvme/functions.sh@21 -- # IFS=:
00:28:12.702   17:12:05	-- nvme/functions.sh@21 -- # read -r reg val
00:28:12.702   17:12:05	-- nvme/functions.sh@22 -- # [[ -n  6 ]]
00:28:12.702   17:12:05	-- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"'
00:28:12.702    17:12:05	-- nvme/functions.sh@23 -- # nvme0[rab]=6
00:28:12.702   17:12:05	-- nvme/functions.sh@21 -- # IFS=:
00:28:12.702   17:12:05	-- nvme/functions.sh@21 -- # read -r reg val
00:28:12.702   17:12:05	-- nvme/functions.sh@22 -- # [[ -n  525400 ]]
00:28:12.702   17:12:05	-- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"'
00:28:12.702    17:12:05	-- nvme/functions.sh@23 -- # nvme0[ieee]=525400
00:28:12.702   17:12:05	-- nvme/functions.sh@21 -- # IFS=:
00:28:12.702   17:12:05	-- nvme/functions.sh@21 -- # read -r reg val
00:28:12.702   17:12:05	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:28:12.702   17:12:05	-- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"'
00:28:12.702    17:12:05	-- nvme/functions.sh@23 -- # nvme0[cmic]=0
00:28:12.702   17:12:05	-- nvme/functions.sh@21 -- # IFS=:
00:28:12.702   17:12:05	-- nvme/functions.sh@21 -- # read -r reg val
00:28:12.702   17:12:05	-- nvme/functions.sh@22 -- # [[ -n  7 ]]
00:28:12.702   17:12:05	-- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"'
00:28:12.702    17:12:05	-- nvme/functions.sh@23 -- # nvme0[mdts]=7
00:28:12.702   17:12:05	-- nvme/functions.sh@21 -- # IFS=:
00:28:12.702   17:12:05	-- nvme/functions.sh@21 -- # read -r reg val
00:28:12.702   17:12:05	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:28:12.702   17:12:05	-- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"'
00:28:12.702    17:12:05	-- nvme/functions.sh@23 -- # nvme0[cntlid]=0
00:28:12.702   17:12:05	-- nvme/functions.sh@21 -- # IFS=:
00:28:12.702   17:12:05	-- nvme/functions.sh@21 -- # read -r reg val
00:28:12.702   17:12:05	-- nvme/functions.sh@22 -- # [[ -n  0x10400 ]]
00:28:12.962   17:12:05	-- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"'
00:28:12.962    17:12:05	-- nvme/functions.sh@23 -- # nvme0[ver]=0x10400
00:28:12.962   17:12:05	-- nvme/functions.sh@21 -- # IFS=:
00:28:12.962   17:12:05	-- nvme/functions.sh@21 -- # read -r reg val
00:28:12.962   17:12:05	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:28:12.962   17:12:05	-- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"'
00:28:12.962    17:12:05	-- nvme/functions.sh@23 -- # nvme0[rtd3r]=0
00:28:12.962   17:12:05	-- nvme/functions.sh@21 -- # IFS=:
00:28:12.962   17:12:05	-- nvme/functions.sh@21 -- # read -r reg val
00:28:12.962   17:12:05	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:28:12.962   17:12:05	-- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"'
00:28:12.962    17:12:05	-- nvme/functions.sh@23 -- # nvme0[rtd3e]=0
00:28:12.962   17:12:05	-- nvme/functions.sh@21 -- # IFS=:
00:28:12.962   17:12:05	-- nvme/functions.sh@21 -- # read -r reg val
00:28:12.962   17:12:05	-- nvme/functions.sh@22 -- # [[ -n  0x100 ]]
00:28:12.962   17:12:05	-- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"'
00:28:12.962    17:12:05	-- nvme/functions.sh@23 -- # nvme0[oaes]=0x100
00:28:12.962   17:12:05	-- nvme/functions.sh@21 -- # IFS=:
00:28:12.962   17:12:05	-- nvme/functions.sh@21 -- # read -r reg val
00:28:12.962   17:12:05	-- nvme/functions.sh@22 -- # [[ -n  0x8000 ]]
00:28:12.962   17:12:05	-- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"'
00:28:12.962    17:12:05	-- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000
00:28:12.962   17:12:05	-- nvme/functions.sh@21 -- # IFS=:
00:28:12.962   17:12:05	-- nvme/functions.sh@21 -- # read -r reg val
00:28:12.962   17:12:05	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:28:12.962   17:12:05	-- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"'
00:28:12.962    17:12:05	-- nvme/functions.sh@23 -- # nvme0[rrls]=0
00:28:12.962   17:12:05	-- nvme/functions.sh@21 -- # IFS=:
00:28:12.962   17:12:05	-- nvme/functions.sh@21 -- # read -r reg val
00:28:12.962   17:12:05	-- nvme/functions.sh@22 -- # [[ -n  1 ]]
00:28:12.962   17:12:05	-- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"'
00:28:12.962    17:12:05	-- nvme/functions.sh@23 -- # nvme0[cntrltype]=1
00:28:12.962   17:12:05	-- nvme/functions.sh@21 -- # IFS=:
00:28:12.962   17:12:05	-- nvme/functions.sh@21 -- # read -r reg val
00:28:12.962   17:12:05	-- nvme/functions.sh@22 -- # [[ -n  00000000-0000-0000-0000-000000000000 ]]
00:28:12.962   17:12:05	-- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"'
00:28:12.962    17:12:05	-- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000
00:28:12.962   17:12:05	-- nvme/functions.sh@21 -- # IFS=:
00:28:12.962   17:12:05	-- nvme/functions.sh@21 -- # read -r reg val
00:28:12.962   17:12:05	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:28:12.962   17:12:05	-- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"'
00:28:12.962    17:12:05	-- nvme/functions.sh@23 -- # nvme0[crdt1]=0
00:28:12.962   17:12:05	-- nvme/functions.sh@21 -- # IFS=:
00:28:12.962   17:12:05	-- nvme/functions.sh@21 -- # read -r reg val
00:28:12.962   17:12:05	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:28:12.962   17:12:05	-- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"'
00:28:12.962    17:12:05	-- nvme/functions.sh@23 -- # nvme0[crdt2]=0
00:28:12.962   17:12:05	-- nvme/functions.sh@21 -- # IFS=:
00:28:12.962   17:12:05	-- nvme/functions.sh@21 -- # read -r reg val
00:28:12.962   17:12:05	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:28:12.962   17:12:05	-- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"'
00:28:12.962    17:12:05	-- nvme/functions.sh@23 -- # nvme0[crdt3]=0
00:28:12.962   17:12:05	-- nvme/functions.sh@21 -- # IFS=:
00:28:12.962   17:12:05	-- nvme/functions.sh@21 -- # read -r reg val
00:28:12.962   17:12:05	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:28:12.962   17:12:05	-- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"'
00:28:12.962    17:12:05	-- nvme/functions.sh@23 -- # nvme0[nvmsr]=0
00:28:12.962   17:12:05	-- nvme/functions.sh@21 -- # IFS=:
00:28:12.962   17:12:05	-- nvme/functions.sh@21 -- # read -r reg val
00:28:12.962   17:12:05	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:28:12.962   17:12:05	-- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"'
00:28:12.962    17:12:05	-- nvme/functions.sh@23 -- # nvme0[vwci]=0
00:28:12.962   17:12:05	-- nvme/functions.sh@21 -- # IFS=:
00:28:12.962   17:12:05	-- nvme/functions.sh@21 -- # read -r reg val
00:28:12.962   17:12:05	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:28:12.962   17:12:05	-- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"'
00:28:12.962    17:12:05	-- nvme/functions.sh@23 -- # nvme0[mec]=0
00:28:12.962   17:12:05	-- nvme/functions.sh@21 -- # IFS=:
00:28:12.962   17:12:05	-- nvme/functions.sh@21 -- # read -r reg val
00:28:12.962   17:12:05	-- nvme/functions.sh@22 -- # [[ -n  0x12a ]]
00:28:12.962   17:12:05	-- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"'
00:28:12.962    17:12:05	-- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a
00:28:12.962   17:12:05	-- nvme/functions.sh@21 -- # IFS=:
00:28:12.962   17:12:05	-- nvme/functions.sh@21 -- # read -r reg val
00:28:12.962   17:12:05	-- nvme/functions.sh@22 -- # [[ -n  3 ]]
00:28:12.962   17:12:05	-- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"'
00:28:12.962    17:12:05	-- nvme/functions.sh@23 -- # nvme0[acl]=3
00:28:12.962   17:12:05	-- nvme/functions.sh@21 -- # IFS=:
00:28:12.962   17:12:05	-- nvme/functions.sh@21 -- # read -r reg val
00:28:12.962   17:12:05	-- nvme/functions.sh@22 -- # [[ -n  3 ]]
00:28:12.962   17:12:05	-- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"'
00:28:12.962    17:12:05	-- nvme/functions.sh@23 -- # nvme0[aerl]=3
00:28:12.962   17:12:05	-- nvme/functions.sh@21 -- # IFS=:
00:28:12.962   17:12:05	-- nvme/functions.sh@21 -- # read -r reg val
00:28:12.962   17:12:05	-- nvme/functions.sh@22 -- # [[ -n  0x3 ]]
00:28:12.962   17:12:05	-- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"'
00:28:12.962    17:12:05	-- nvme/functions.sh@23 -- # nvme0[frmw]=0x3
00:28:12.962   17:12:05	-- nvme/functions.sh@21 -- # IFS=:
00:28:12.962   17:12:05	-- nvme/functions.sh@21 -- # read -r reg val
00:28:12.962   17:12:05	-- nvme/functions.sh@22 -- # [[ -n  0x7 ]]
00:28:12.962   17:12:05	-- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"'
00:28:12.962    17:12:05	-- nvme/functions.sh@23 -- # nvme0[lpa]=0x7
00:28:12.962   17:12:05	-- nvme/functions.sh@21 -- # IFS=:
00:28:12.962   17:12:05	-- nvme/functions.sh@21 -- # read -r reg val
00:28:12.962   17:12:05	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:28:12.962   17:12:05	-- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"'
00:28:12.962    17:12:05	-- nvme/functions.sh@23 -- # nvme0[elpe]=0
00:28:12.962   17:12:05	-- nvme/functions.sh@21 -- # IFS=:
00:28:12.962   17:12:05	-- nvme/functions.sh@21 -- # read -r reg val
00:28:12.962   17:12:05	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:28:12.962   17:12:05	-- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"'
00:28:12.962    17:12:05	-- nvme/functions.sh@23 -- # nvme0[npss]=0
00:28:12.962   17:12:05	-- nvme/functions.sh@21 -- # IFS=:
00:28:12.963   17:12:05	-- nvme/functions.sh@21 -- # read -r reg val
00:28:12.963   17:12:05	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:28:12.963   17:12:05	-- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"'
00:28:12.963    17:12:05	-- nvme/functions.sh@23 -- # nvme0[avscc]=0
00:28:12.963   17:12:05	-- nvme/functions.sh@21 -- # IFS=:
00:28:12.963   17:12:05	-- nvme/functions.sh@21 -- # read -r reg val
00:28:12.963   17:12:05	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:28:12.963   17:12:05	-- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"'
00:28:12.963    17:12:05	-- nvme/functions.sh@23 -- # nvme0[apsta]=0
00:28:12.963   17:12:05	-- nvme/functions.sh@21 -- # IFS=:
00:28:12.963   17:12:05	-- nvme/functions.sh@21 -- # read -r reg val
00:28:12.963   17:12:05	-- nvme/functions.sh@22 -- # [[ -n  343 ]]
00:28:12.963   17:12:05	-- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"'
00:28:12.963    17:12:05	-- nvme/functions.sh@23 -- # nvme0[wctemp]=343
00:28:12.963   17:12:05	-- nvme/functions.sh@21 -- # IFS=:
00:28:12.963   17:12:05	-- nvme/functions.sh@21 -- # read -r reg val
00:28:12.963   17:12:05	-- nvme/functions.sh@22 -- # [[ -n  373 ]]
00:28:12.963   17:12:05	-- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"'
00:28:12.963    17:12:05	-- nvme/functions.sh@23 -- # nvme0[cctemp]=373
00:28:12.963   17:12:05	-- nvme/functions.sh@21 -- # IFS=:
00:28:12.963   17:12:05	-- nvme/functions.sh@21 -- # read -r reg val
00:28:12.963   17:12:05	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:28:12.963   17:12:05	-- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"'
00:28:12.963    17:12:05	-- nvme/functions.sh@23 -- # nvme0[mtfa]=0
00:28:12.963   17:12:05	-- nvme/functions.sh@21 -- # IFS=:
00:28:12.963   17:12:05	-- nvme/functions.sh@21 -- # read -r reg val
00:28:12.963   17:12:05	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:28:12.963   17:12:05	-- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"'
00:28:12.963    17:12:05	-- nvme/functions.sh@23 -- # nvme0[hmpre]=0
00:28:12.963   17:12:05	-- nvme/functions.sh@21 -- # IFS=:
00:28:12.963   17:12:05	-- nvme/functions.sh@21 -- # read -r reg val
00:28:12.963   17:12:05	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:28:12.963   17:12:05	-- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"'
00:28:12.963    17:12:05	-- nvme/functions.sh@23 -- # nvme0[hmmin]=0
00:28:12.963   17:12:05	-- nvme/functions.sh@21 -- # IFS=:
00:28:12.963   17:12:05	-- nvme/functions.sh@21 -- # read -r reg val
00:28:12.963   17:12:05	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:28:12.963   17:12:05	-- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"'
00:28:12.963    17:12:05	-- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0
00:28:12.963   17:12:05	-- nvme/functions.sh@21 -- # IFS=:
00:28:12.963   17:12:05	-- nvme/functions.sh@21 -- # read -r reg val
00:28:12.963   17:12:05	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:28:12.963   17:12:05	-- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"'
00:28:12.963    17:12:05	-- nvme/functions.sh@23 -- # nvme0[unvmcap]=0
00:28:12.963   17:12:05	-- nvme/functions.sh@21 -- # IFS=:
00:28:12.963   17:12:05	-- nvme/functions.sh@21 -- # read -r reg val
00:28:12.963   17:12:05	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:28:12.963   17:12:05	-- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"'
00:28:12.963    17:12:05	-- nvme/functions.sh@23 -- # nvme0[rpmbs]=0
00:28:12.963   17:12:05	-- nvme/functions.sh@21 -- # IFS=:
00:28:12.963   17:12:05	-- nvme/functions.sh@21 -- # read -r reg val
00:28:12.963   17:12:05	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:28:12.963   17:12:05	-- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"'
00:28:12.963    17:12:05	-- nvme/functions.sh@23 -- # nvme0[edstt]=0
00:28:12.963   17:12:05	-- nvme/functions.sh@21 -- # IFS=:
00:28:12.963   17:12:05	-- nvme/functions.sh@21 -- # read -r reg val
00:28:12.963   17:12:05	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:28:12.963   17:12:05	-- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"'
00:28:12.963    17:12:05	-- nvme/functions.sh@23 -- # nvme0[dsto]=0
00:28:12.963   17:12:05	-- nvme/functions.sh@21 -- # IFS=:
00:28:12.963   17:12:05	-- nvme/functions.sh@21 -- # read -r reg val
00:28:12.963   17:12:05	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:28:12.963   17:12:05	-- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"'
00:28:12.963    17:12:05	-- nvme/functions.sh@23 -- # nvme0[fwug]=0
00:28:12.963   17:12:05	-- nvme/functions.sh@21 -- # IFS=:
00:28:12.963   17:12:05	-- nvme/functions.sh@21 -- # read -r reg val
00:28:12.963   17:12:05	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:28:12.963   17:12:05	-- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"'
00:28:12.963    17:12:05	-- nvme/functions.sh@23 -- # nvme0[kas]=0
00:28:12.963   17:12:05	-- nvme/functions.sh@21 -- # IFS=:
00:28:12.963   17:12:05	-- nvme/functions.sh@21 -- # read -r reg val
00:28:12.963   17:12:05	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:28:12.963   17:12:05	-- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"'
00:28:12.963    17:12:05	-- nvme/functions.sh@23 -- # nvme0[hctma]=0
00:28:12.963   17:12:05	-- nvme/functions.sh@21 -- # IFS=:
00:28:12.963   17:12:05	-- nvme/functions.sh@21 -- # read -r reg val
00:28:12.963   17:12:05	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:28:12.963   17:12:05	-- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"'
00:28:12.963    17:12:05	-- nvme/functions.sh@23 -- # nvme0[mntmt]=0
00:28:12.963   17:12:05	-- nvme/functions.sh@21 -- # IFS=:
00:28:12.963   17:12:05	-- nvme/functions.sh@21 -- # read -r reg val
00:28:12.963   17:12:05	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:28:12.963   17:12:05	-- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"'
00:28:12.963    17:12:05	-- nvme/functions.sh@23 -- # nvme0[mxtmt]=0
00:28:12.963   17:12:05	-- nvme/functions.sh@21 -- # IFS=:
00:28:12.963   17:12:05	-- nvme/functions.sh@21 -- # read -r reg val
00:28:12.963   17:12:05	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:28:12.963   17:12:05	-- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"'
00:28:12.963    17:12:05	-- nvme/functions.sh@23 -- # nvme0[sanicap]=0
00:28:12.963   17:12:05	-- nvme/functions.sh@21 -- # IFS=:
00:28:12.963   17:12:05	-- nvme/functions.sh@21 -- # read -r reg val
00:28:12.963   17:12:05	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:28:12.963   17:12:05	-- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"'
00:28:12.963    17:12:05	-- nvme/functions.sh@23 -- # nvme0[hmminds]=0
00:28:12.963   17:12:05	-- nvme/functions.sh@21 -- # IFS=:
00:28:12.963   17:12:05	-- nvme/functions.sh@21 -- # read -r reg val
00:28:12.963   17:12:05	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:28:12.963   17:12:05	-- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"'
00:28:12.963    17:12:05	-- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0
00:28:12.963   17:12:05	-- nvme/functions.sh@21 -- # IFS=:
00:28:12.963   17:12:05	-- nvme/functions.sh@21 -- # read -r reg val
00:28:12.963   17:12:05	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:28:12.963   17:12:05	-- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"'
00:28:12.963    17:12:05	-- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0
00:28:12.963   17:12:05	-- nvme/functions.sh@21 -- # IFS=:
00:28:12.963   17:12:05	-- nvme/functions.sh@21 -- # read -r reg val
00:28:12.963   17:12:05	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:28:12.963   17:12:05	-- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"'
00:28:12.963    17:12:05	-- nvme/functions.sh@23 -- # nvme0[endgidmax]=0
00:28:12.963   17:12:05	-- nvme/functions.sh@21 -- # IFS=:
00:28:12.963   17:12:05	-- nvme/functions.sh@21 -- # read -r reg val
00:28:12.963   17:12:05	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:28:12.963   17:12:05	-- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"'
00:28:12.963    17:12:05	-- nvme/functions.sh@23 -- # nvme0[anatt]=0
00:28:12.963   17:12:05	-- nvme/functions.sh@21 -- # IFS=:
00:28:12.963   17:12:05	-- nvme/functions.sh@21 -- # read -r reg val
00:28:12.963   17:12:05	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:28:12.963   17:12:05	-- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"'
00:28:12.963    17:12:05	-- nvme/functions.sh@23 -- # nvme0[anacap]=0
00:28:12.963   17:12:05	-- nvme/functions.sh@21 -- # IFS=:
00:28:12.963   17:12:05	-- nvme/functions.sh@21 -- # read -r reg val
00:28:12.963   17:12:05	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:28:12.963   17:12:05	-- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"'
00:28:12.963    17:12:05	-- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0
00:28:12.963   17:12:05	-- nvme/functions.sh@21 -- # IFS=:
00:28:12.963   17:12:05	-- nvme/functions.sh@21 -- # read -r reg val
00:28:12.963   17:12:05	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:28:12.963   17:12:05	-- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"'
00:28:12.963    17:12:05	-- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0
00:28:12.963   17:12:05	-- nvme/functions.sh@21 -- # IFS=:
00:28:12.963   17:12:05	-- nvme/functions.sh@21 -- # read -r reg val
00:28:12.963   17:12:05	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:28:12.963   17:12:05	-- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"'
00:28:12.963    17:12:05	-- nvme/functions.sh@23 -- # nvme0[pels]=0
00:28:12.963   17:12:05	-- nvme/functions.sh@21 -- # IFS=:
00:28:12.963   17:12:05	-- nvme/functions.sh@21 -- # read -r reg val
00:28:12.963   17:12:05	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:28:12.963   17:12:05	-- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"'
00:28:12.963    17:12:05	-- nvme/functions.sh@23 -- # nvme0[domainid]=0
00:28:12.963   17:12:05	-- nvme/functions.sh@21 -- # IFS=:
00:28:12.963   17:12:05	-- nvme/functions.sh@21 -- # read -r reg val
00:28:12.963   17:12:05	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:28:12.963   17:12:05	-- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"'
00:28:12.963    17:12:05	-- nvme/functions.sh@23 -- # nvme0[megcap]=0
00:28:12.963   17:12:05	-- nvme/functions.sh@21 -- # IFS=:
00:28:12.964   17:12:05	-- nvme/functions.sh@21 -- # read -r reg val
00:28:12.964   17:12:05	-- nvme/functions.sh@22 -- # [[ -n  0x66 ]]
00:28:12.964   17:12:05	-- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"'
00:28:12.964    17:12:05	-- nvme/functions.sh@23 -- # nvme0[sqes]=0x66
00:28:12.964   17:12:05	-- nvme/functions.sh@21 -- # IFS=:
00:28:12.964   17:12:05	-- nvme/functions.sh@21 -- # read -r reg val
00:28:12.964   17:12:05	-- nvme/functions.sh@22 -- # [[ -n  0x44 ]]
00:28:12.964   17:12:05	-- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"'
00:28:12.964    17:12:05	-- nvme/functions.sh@23 -- # nvme0[cqes]=0x44
00:28:12.964   17:12:05	-- nvme/functions.sh@21 -- # IFS=:
00:28:12.964   17:12:05	-- nvme/functions.sh@21 -- # read -r reg val
00:28:12.964   17:12:05	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:28:12.964   17:12:05	-- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"'
00:28:12.964    17:12:05	-- nvme/functions.sh@23 -- # nvme0[maxcmd]=0
00:28:12.964   17:12:05	-- nvme/functions.sh@21 -- # IFS=:
00:28:12.964   17:12:05	-- nvme/functions.sh@21 -- # read -r reg val
00:28:12.964   17:12:05	-- nvme/functions.sh@22 -- # [[ -n  256 ]]
00:28:12.964   17:12:05	-- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"'
00:28:12.964    17:12:05	-- nvme/functions.sh@23 -- # nvme0[nn]=256
00:28:12.964   17:12:05	-- nvme/functions.sh@21 -- # IFS=:
00:28:12.964   17:12:05	-- nvme/functions.sh@21 -- # read -r reg val
00:28:12.964   17:12:05	-- nvme/functions.sh@22 -- # [[ -n  0x15d ]]
00:28:12.964   17:12:05	-- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"'
00:28:12.964    17:12:05	-- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d
00:28:12.964   17:12:05	-- nvme/functions.sh@21 -- # IFS=:
00:28:12.964   17:12:05	-- nvme/functions.sh@21 -- # read -r reg val
00:28:12.964   17:12:05	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:28:12.964   17:12:05	-- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"'
00:28:12.964    17:12:05	-- nvme/functions.sh@23 -- # nvme0[fuses]=0
00:28:12.964   17:12:05	-- nvme/functions.sh@21 -- # IFS=:
00:28:12.964   17:12:05	-- nvme/functions.sh@21 -- # read -r reg val
00:28:12.964   17:12:05	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:28:12.964   17:12:05	-- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"'
00:28:12.964    17:12:05	-- nvme/functions.sh@23 -- # nvme0[fna]=0
00:28:12.964   17:12:05	-- nvme/functions.sh@21 -- # IFS=:
00:28:12.964   17:12:05	-- nvme/functions.sh@21 -- # read -r reg val
00:28:12.964   17:12:05	-- nvme/functions.sh@22 -- # [[ -n  0x7 ]]
00:28:12.964   17:12:05	-- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"'
00:28:12.964    17:12:05	-- nvme/functions.sh@23 -- # nvme0[vwc]=0x7
00:28:12.964   17:12:05	-- nvme/functions.sh@21 -- # IFS=:
00:28:12.964   17:12:05	-- nvme/functions.sh@21 -- # read -r reg val
00:28:12.964   17:12:05	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:28:12.964   17:12:05	-- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"'
00:28:12.964    17:12:05	-- nvme/functions.sh@23 -- # nvme0[awun]=0
00:28:12.964   17:12:05	-- nvme/functions.sh@21 -- # IFS=:
00:28:12.964   17:12:05	-- nvme/functions.sh@21 -- # read -r reg val
00:28:12.964   17:12:05	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:28:12.964   17:12:05	-- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"'
00:28:12.964    17:12:05	-- nvme/functions.sh@23 -- # nvme0[awupf]=0
00:28:12.964   17:12:05	-- nvme/functions.sh@21 -- # IFS=:
00:28:12.964   17:12:05	-- nvme/functions.sh@21 -- # read -r reg val
00:28:12.964   17:12:05	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:28:12.964   17:12:05	-- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"'
00:28:12.964    17:12:05	-- nvme/functions.sh@23 -- # nvme0[icsvscc]=0
00:28:12.964   17:12:05	-- nvme/functions.sh@21 -- # IFS=:
00:28:12.964   17:12:05	-- nvme/functions.sh@21 -- # read -r reg val
00:28:12.964   17:12:05	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:28:12.964   17:12:05	-- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"'
00:28:12.964    17:12:05	-- nvme/functions.sh@23 -- # nvme0[nwpc]=0
00:28:12.964   17:12:05	-- nvme/functions.sh@21 -- # IFS=:
00:28:12.964   17:12:05	-- nvme/functions.sh@21 -- # read -r reg val
00:28:12.964   17:12:05	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:28:12.964   17:12:05	-- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"'
00:28:12.964    17:12:05	-- nvme/functions.sh@23 -- # nvme0[acwu]=0
00:28:12.964   17:12:05	-- nvme/functions.sh@21 -- # IFS=:
00:28:12.964   17:12:05	-- nvme/functions.sh@21 -- # read -r reg val
00:28:12.964   17:12:05	-- nvme/functions.sh@22 -- # [[ -n  0x3 ]]
00:28:12.964   17:12:05	-- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"'
00:28:12.964    17:12:05	-- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3
00:28:12.964   17:12:05	-- nvme/functions.sh@21 -- # IFS=:
00:28:12.964   17:12:05	-- nvme/functions.sh@21 -- # read -r reg val
00:28:12.964   17:12:05	-- nvme/functions.sh@22 -- # [[ -n  0x1 ]]
00:28:12.964   17:12:05	-- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"'
00:28:12.964    17:12:05	-- nvme/functions.sh@23 -- # nvme0[sgls]=0x1
00:28:12.964   17:12:05	-- nvme/functions.sh@21 -- # IFS=:
00:28:12.964   17:12:05	-- nvme/functions.sh@21 -- # read -r reg val
00:28:12.964   17:12:05	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:28:12.964   17:12:05	-- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"'
00:28:12.964    17:12:05	-- nvme/functions.sh@23 -- # nvme0[mnan]=0
00:28:12.964   17:12:05	-- nvme/functions.sh@21 -- # IFS=:
00:28:12.964   17:12:05	-- nvme/functions.sh@21 -- # read -r reg val
00:28:12.964   17:12:05	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:28:12.964   17:12:05	-- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"'
00:28:12.964    17:12:05	-- nvme/functions.sh@23 -- # nvme0[maxdna]=0
00:28:12.964   17:12:05	-- nvme/functions.sh@21 -- # IFS=:
00:28:12.964   17:12:05	-- nvme/functions.sh@21 -- # read -r reg val
00:28:12.964   17:12:05	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:28:12.964   17:12:05	-- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"'
00:28:12.964    17:12:05	-- nvme/functions.sh@23 -- # nvme0[maxcna]=0
00:28:12.964   17:12:05	-- nvme/functions.sh@21 -- # IFS=:
00:28:12.964   17:12:05	-- nvme/functions.sh@21 -- # read -r reg val
00:28:12.964   17:12:05	-- nvme/functions.sh@22 -- # [[ -n  nqn.2019-08.org.qemu:12340 ]]
00:28:12.964   17:12:05	-- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12340"'
00:28:12.964    17:12:05	-- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12340
00:28:12.964   17:12:05	-- nvme/functions.sh@21 -- # IFS=:
00:28:12.964   17:12:05	-- nvme/functions.sh@21 -- # read -r reg val
00:28:12.964   17:12:05	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:28:12.964   17:12:05	-- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"'
00:28:12.964    17:12:05	-- nvme/functions.sh@23 -- # nvme0[ioccsz]=0
00:28:12.964   17:12:05	-- nvme/functions.sh@21 -- # IFS=:
00:28:12.964   17:12:05	-- nvme/functions.sh@21 -- # read -r reg val
00:28:12.964   17:12:05	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:28:12.964   17:12:05	-- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"'
00:28:12.964    17:12:05	-- nvme/functions.sh@23 -- # nvme0[iorcsz]=0
00:28:12.964   17:12:05	-- nvme/functions.sh@21 -- # IFS=:
00:28:12.964   17:12:05	-- nvme/functions.sh@21 -- # read -r reg val
00:28:12.964   17:12:05	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:28:12.964   17:12:05	-- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"'
00:28:12.964    17:12:05	-- nvme/functions.sh@23 -- # nvme0[icdoff]=0
00:28:12.964   17:12:05	-- nvme/functions.sh@21 -- # IFS=:
00:28:12.964   17:12:05	-- nvme/functions.sh@21 -- # read -r reg val
00:28:12.964   17:12:05	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:28:12.964   17:12:05	-- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"'
00:28:12.964    17:12:05	-- nvme/functions.sh@23 -- # nvme0[fcatt]=0
00:28:12.964   17:12:05	-- nvme/functions.sh@21 -- # IFS=:
00:28:12.964   17:12:05	-- nvme/functions.sh@21 -- # read -r reg val
00:28:12.964   17:12:05	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:28:12.964   17:12:05	-- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"'
00:28:12.964    17:12:05	-- nvme/functions.sh@23 -- # nvme0[msdbd]=0
00:28:12.964   17:12:05	-- nvme/functions.sh@21 -- # IFS=:
00:28:12.964   17:12:05	-- nvme/functions.sh@21 -- # read -r reg val
00:28:12.964   17:12:05	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:28:12.964   17:12:05	-- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"'
00:28:12.964    17:12:05	-- nvme/functions.sh@23 -- # nvme0[ofcs]=0
00:28:12.964   17:12:05	-- nvme/functions.sh@21 -- # IFS=:
00:28:12.964   17:12:05	-- nvme/functions.sh@21 -- # read -r reg val
00:28:12.964   17:12:05	-- nvme/functions.sh@22 -- # [[ -n  mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]]
00:28:12.964   17:12:05	-- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"'
00:28:12.964    17:12:05	-- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0'
00:28:12.964   17:12:05	-- nvme/functions.sh@21 -- # IFS=:
00:28:12.964   17:12:05	-- nvme/functions.sh@21 -- # read -r reg val
00:28:12.964   17:12:05	-- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]]
00:28:12.964   17:12:05	-- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"'
00:28:12.964    17:12:05	-- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-'
00:28:12.964   17:12:05	-- nvme/functions.sh@21 -- # IFS=:
00:28:12.964   17:12:05	-- nvme/functions.sh@21 -- # read -r reg val
00:28:12.964   17:12:05	-- nvme/functions.sh@22 -- # [[ -n - ]]
00:28:12.964   17:12:05	-- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"'
00:28:12.964    17:12:05	-- nvme/functions.sh@23 -- # nvme0[active_power_workload]=-
00:28:12.964   17:12:05	-- nvme/functions.sh@21 -- # IFS=:
00:28:12.964   17:12:05	-- nvme/functions.sh@21 -- # read -r reg val
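The long trace above is a single loop: nvme_get pipes `nvme id-ctrl /dev/nvme0` through `IFS=: read -r reg val` and evals each "field : value" pair into the nvme0 associative array. A self-contained reproduction of the pattern (a sketch, not the exact functions.sh body; whitespace handling is simplified):

  nvme_get() {
    local ref=$1 reg val
    local -gA "$ref=()"                      # global associative array named after the device
    while IFS=: read -r reg val; do
      reg=${reg//[[:space:]]/}               # keys arrive padded, e.g. 'vid       '
      [[ -n $reg && -n $val ]] || continue   # skip blank and headerless lines
      eval "${ref}[${reg}]=\"${val# }\""
    done < <(nvme id-ctrl "/dev/$ref")
  }
  nvme_get nvme0 && echo "${nvme0[mdts]}"    # prints 7 for this QEMU controller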
00:28:12.964   17:12:05	-- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns
00:28:12.964   17:12:05	-- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"*
00:28:12.964   17:12:05	-- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]]
00:28:12.964   17:12:05	-- nvme/functions.sh@56 -- # ns_dev=nvme0n1
00:28:12.964   17:12:05	-- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1
00:28:12.964   17:12:05	-- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val
00:28:12.964   17:12:05	-- nvme/functions.sh@18 -- # shift
00:28:12.964   17:12:05	-- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()'
00:28:12.965   17:12:05	-- nvme/functions.sh@21 -- # IFS=:
00:28:12.965   17:12:05	-- nvme/functions.sh@21 -- # read -r reg val
00:28:12.965    17:12:05	-- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1
00:28:12.965   17:12:05	-- nvme/functions.sh@22 -- # [[ -n '' ]]
00:28:12.965   17:12:05	-- nvme/functions.sh@21 -- # IFS=:
00:28:12.965   17:12:05	-- nvme/functions.sh@21 -- # read -r reg val
00:28:12.965   17:12:05	-- nvme/functions.sh@22 -- # [[ -n  0x140000 ]]
00:28:12.965   17:12:05	-- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"'
00:28:12.965    17:12:05	-- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000
00:28:12.965   17:12:05	-- nvme/functions.sh@21 -- # IFS=:
00:28:12.965   17:12:05	-- nvme/functions.sh@21 -- # read -r reg val
00:28:12.965   17:12:05	-- nvme/functions.sh@22 -- # [[ -n  0x140000 ]]
00:28:12.965   17:12:05	-- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"'
00:28:12.965    17:12:05	-- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000
00:28:12.965   17:12:05	-- nvme/functions.sh@21 -- # IFS=:
00:28:12.965   17:12:05	-- nvme/functions.sh@21 -- # read -r reg val
00:28:12.965   17:12:05	-- nvme/functions.sh@22 -- # [[ -n  0x140000 ]]
00:28:12.965   17:12:05	-- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"'
00:28:12.965    17:12:05	-- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000
00:28:12.965   17:12:05	-- nvme/functions.sh@21 -- # IFS=:
00:28:12.965   17:12:05	-- nvme/functions.sh@21 -- # read -r reg val
00:28:12.965   17:12:05	-- nvme/functions.sh@22 -- # [[ -n  0x14 ]]
00:28:12.965   17:12:05	-- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"'
00:28:12.965    17:12:05	-- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14
00:28:12.965   17:12:05	-- nvme/functions.sh@21 -- # IFS=:
00:28:12.965   17:12:05	-- nvme/functions.sh@21 -- # read -r reg val
00:28:12.965   17:12:05	-- nvme/functions.sh@22 -- # [[ -n  7 ]]
00:28:12.965   17:12:05	-- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"'
00:28:12.965    17:12:05	-- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7
00:28:12.965   17:12:05	-- nvme/functions.sh@21 -- # IFS=:
00:28:12.965   17:12:05	-- nvme/functions.sh@21 -- # read -r reg val
00:28:12.965   17:12:05	-- nvme/functions.sh@22 -- # [[ -n  0x4 ]]
00:28:12.965   17:12:05	-- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"'
00:28:12.965    17:12:05	-- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4
00:28:12.965   17:12:05	-- nvme/functions.sh@21 -- # IFS=:
00:28:12.965   17:12:05	-- nvme/functions.sh@21 -- # read -r reg val
00:28:12.965   17:12:05	-- nvme/functions.sh@22 -- # [[ -n  0x3 ]]
00:28:12.965   17:12:05	-- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"'
00:28:12.965    17:12:05	-- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3
00:28:12.965   17:12:05	-- nvme/functions.sh@21 -- # IFS=:
00:28:12.965   17:12:05	-- nvme/functions.sh@21 -- # read -r reg val
00:28:12.965   17:12:05	-- nvme/functions.sh@22 -- # [[ -n  0x1f ]]
00:28:12.965   17:12:05	-- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"'
00:28:12.965    17:12:05	-- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f
00:28:12.965   17:12:05	-- nvme/functions.sh@21 -- # IFS=:
00:28:12.965   17:12:05	-- nvme/functions.sh@21 -- # read -r reg val
00:28:12.965   17:12:05	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:28:12.965   17:12:05	-- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"'
00:28:12.965    17:12:05	-- nvme/functions.sh@23 -- # nvme0n1[dps]=0
00:28:12.965   17:12:05	-- nvme/functions.sh@21 -- # IFS=:
00:28:12.965   17:12:05	-- nvme/functions.sh@21 -- # read -r reg val
00:28:12.965   17:12:05	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:28:12.965   17:12:05	-- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"'
00:28:12.965    17:12:05	-- nvme/functions.sh@23 -- # nvme0n1[nmic]=0
00:28:12.965   17:12:05	-- nvme/functions.sh@21 -- # IFS=:
00:28:12.965   17:12:05	-- nvme/functions.sh@21 -- # read -r reg val
00:28:12.965   17:12:05	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:28:12.965   17:12:05	-- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"'
00:28:12.965    17:12:05	-- nvme/functions.sh@23 -- # nvme0n1[rescap]=0
00:28:12.965   17:12:05	-- nvme/functions.sh@21 -- # IFS=:
00:28:12.965   17:12:05	-- nvme/functions.sh@21 -- # read -r reg val
00:28:12.965   17:12:05	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:28:12.965   17:12:05	-- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"'
00:28:12.965    17:12:05	-- nvme/functions.sh@23 -- # nvme0n1[fpi]=0
00:28:12.965   17:12:05	-- nvme/functions.sh@21 -- # IFS=:
00:28:12.965   17:12:05	-- nvme/functions.sh@21 -- # read -r reg val
00:28:12.965   17:12:05	-- nvme/functions.sh@22 -- # [[ -n  1 ]]
00:28:12.965   17:12:05	-- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"'
00:28:12.965    17:12:05	-- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1
00:28:12.965   17:12:05	-- nvme/functions.sh@21 -- # IFS=:
00:28:12.965   17:12:05	-- nvme/functions.sh@21 -- # read -r reg val
00:28:12.965   17:12:05	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:28:12.965   17:12:05	-- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"'
00:28:12.965    17:12:05	-- nvme/functions.sh@23 -- # nvme0n1[nawun]=0
00:28:12.965   17:12:05	-- nvme/functions.sh@21 -- # IFS=:
00:28:12.965   17:12:05	-- nvme/functions.sh@21 -- # read -r reg val
00:28:12.965   17:12:05	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:28:12.965   17:12:05	-- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"'
00:28:12.965    17:12:05	-- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0
00:28:12.965   17:12:05	-- nvme/functions.sh@21 -- # IFS=:
00:28:12.965   17:12:05	-- nvme/functions.sh@21 -- # read -r reg val
00:28:12.965   17:12:05	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:28:12.965   17:12:05	-- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"'
00:28:12.965    17:12:05	-- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0
00:28:12.965   17:12:05	-- nvme/functions.sh@21 -- # IFS=:
00:28:12.965   17:12:05	-- nvme/functions.sh@21 -- # read -r reg val
00:28:12.965   17:12:05	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:28:12.965   17:12:05	-- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"'
00:28:12.965    17:12:05	-- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0
00:28:12.965   17:12:05	-- nvme/functions.sh@21 -- # IFS=:
00:28:12.965   17:12:05	-- nvme/functions.sh@21 -- # read -r reg val
00:28:12.965   17:12:05	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:28:12.965   17:12:05	-- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"'
00:28:12.965    17:12:05	-- nvme/functions.sh@23 -- # nvme0n1[nabo]=0
00:28:12.965   17:12:05	-- nvme/functions.sh@21 -- # IFS=:
00:28:12.965   17:12:05	-- nvme/functions.sh@21 -- # read -r reg val
00:28:12.965   17:12:05	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:28:12.965   17:12:05	-- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"'
00:28:12.965    17:12:05	-- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0
00:28:12.965   17:12:05	-- nvme/functions.sh@21 -- # IFS=:
00:28:12.965   17:12:05	-- nvme/functions.sh@21 -- # read -r reg val
00:28:12.965   17:12:05	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:28:12.965   17:12:05	-- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"'
00:28:12.965    17:12:05	-- nvme/functions.sh@23 -- # nvme0n1[noiob]=0
00:28:12.965   17:12:05	-- nvme/functions.sh@21 -- # IFS=:
00:28:12.965   17:12:05	-- nvme/functions.sh@21 -- # read -r reg val
00:28:12.965   17:12:05	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:28:12.965   17:12:05	-- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"'
00:28:12.965    17:12:05	-- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0
00:28:12.965   17:12:05	-- nvme/functions.sh@21 -- # IFS=:
00:28:12.965   17:12:05	-- nvme/functions.sh@21 -- # read -r reg val
00:28:12.965   17:12:05	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:28:12.965   17:12:05	-- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"'
00:28:12.965    17:12:05	-- nvme/functions.sh@23 -- # nvme0n1[npwg]=0
00:28:12.965   17:12:05	-- nvme/functions.sh@21 -- # IFS=:
00:28:12.965   17:12:05	-- nvme/functions.sh@21 -- # read -r reg val
00:28:12.965   17:12:05	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:28:12.965   17:12:05	-- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"'
00:28:12.965    17:12:05	-- nvme/functions.sh@23 -- # nvme0n1[npwa]=0
00:28:12.965   17:12:05	-- nvme/functions.sh@21 -- # IFS=:
00:28:12.965   17:12:05	-- nvme/functions.sh@21 -- # read -r reg val
00:28:12.965   17:12:05	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:28:12.965   17:12:05	-- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"'
00:28:12.965    17:12:05	-- nvme/functions.sh@23 -- # nvme0n1[npdg]=0
00:28:12.965   17:12:05	-- nvme/functions.sh@21 -- # IFS=:
00:28:12.965   17:12:05	-- nvme/functions.sh@21 -- # read -r reg val
00:28:12.965   17:12:05	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:28:12.965   17:12:05	-- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"'
00:28:12.965    17:12:05	-- nvme/functions.sh@23 -- # nvme0n1[npda]=0
00:28:12.965   17:12:05	-- nvme/functions.sh@21 -- # IFS=:
00:28:12.965   17:12:05	-- nvme/functions.sh@21 -- # read -r reg val
00:28:12.965   17:12:05	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:28:12.965   17:12:05	-- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"'
00:28:12.965    17:12:05	-- nvme/functions.sh@23 -- # nvme0n1[nows]=0
00:28:12.965   17:12:05	-- nvme/functions.sh@21 -- # IFS=:
00:28:12.965   17:12:05	-- nvme/functions.sh@21 -- # read -r reg val
00:28:12.965   17:12:05	-- nvme/functions.sh@22 -- # [[ -n  128 ]]
00:28:12.965   17:12:05	-- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"'
00:28:12.965    17:12:05	-- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128
00:28:12.965   17:12:05	-- nvme/functions.sh@21 -- # IFS=:
00:28:12.965   17:12:05	-- nvme/functions.sh@21 -- # read -r reg val
00:28:12.965   17:12:05	-- nvme/functions.sh@22 -- # [[ -n  128 ]]
00:28:12.965   17:12:05	-- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"'
00:28:12.965    17:12:05	-- nvme/functions.sh@23 -- # nvme0n1[mcl]=128
00:28:12.965   17:12:05	-- nvme/functions.sh@21 -- # IFS=:
00:28:12.965   17:12:05	-- nvme/functions.sh@21 -- # read -r reg val
00:28:12.965   17:12:05	-- nvme/functions.sh@22 -- # [[ -n  127 ]]
00:28:12.965   17:12:05	-- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"'
00:28:12.965    17:12:05	-- nvme/functions.sh@23 -- # nvme0n1[msrc]=127
00:28:12.965   17:12:05	-- nvme/functions.sh@21 -- # IFS=:
00:28:12.965   17:12:05	-- nvme/functions.sh@21 -- # read -r reg val
00:28:12.965   17:12:05	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:28:12.965   17:12:05	-- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"'
00:28:12.965    17:12:05	-- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0
00:28:12.965   17:12:05	-- nvme/functions.sh@21 -- # IFS=:
00:28:12.965   17:12:05	-- nvme/functions.sh@21 -- # read -r reg val
00:28:12.965   17:12:05	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:28:12.965   17:12:05	-- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"'
00:28:12.965    17:12:05	-- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0
00:28:12.965   17:12:05	-- nvme/functions.sh@21 -- # IFS=:
00:28:12.965   17:12:05	-- nvme/functions.sh@21 -- # read -r reg val
00:28:12.965   17:12:05	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:28:12.965   17:12:05	-- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"'
00:28:12.965    17:12:05	-- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0
00:28:12.965   17:12:05	-- nvme/functions.sh@21 -- # IFS=:
00:28:12.966   17:12:05	-- nvme/functions.sh@21 -- # read -r reg val
00:28:12.966   17:12:05	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:28:12.966   17:12:05	-- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"'
00:28:12.966    17:12:05	-- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0
00:28:12.966   17:12:05	-- nvme/functions.sh@21 -- # IFS=:
00:28:12.966   17:12:05	-- nvme/functions.sh@21 -- # read -r reg val
00:28:12.966   17:12:05	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:28:12.966   17:12:05	-- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"'
00:28:12.966    17:12:05	-- nvme/functions.sh@23 -- # nvme0n1[endgid]=0
00:28:12.966   17:12:05	-- nvme/functions.sh@21 -- # IFS=:
00:28:12.966   17:12:05	-- nvme/functions.sh@21 -- # read -r reg val
00:28:12.966   17:12:05	-- nvme/functions.sh@22 -- # [[ -n  00000000000000000000000000000000 ]]
00:28:12.966   17:12:05	-- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"'
00:28:12.966    17:12:05	-- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000
00:28:12.966   17:12:05	-- nvme/functions.sh@21 -- # IFS=:
00:28:12.966   17:12:05	-- nvme/functions.sh@21 -- # read -r reg val
00:28:12.966   17:12:05	-- nvme/functions.sh@22 -- # [[ -n  0000000000000000 ]]
00:28:12.966   17:12:05	-- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"'
00:28:12.966    17:12:05	-- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000
00:28:12.966   17:12:05	-- nvme/functions.sh@21 -- # IFS=:
00:28:12.966   17:12:05	-- nvme/functions.sh@21 -- # read -r reg val
00:28:12.966   17:12:05	-- nvme/functions.sh@22 -- # [[ -n  ms:0   lbads:9  rp:0  ]]
00:28:12.966   17:12:05	-- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0   lbads:9  rp:0 "'
00:28:12.966    17:12:05	-- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0   lbads:9  rp:0 '
00:28:12.966   17:12:05	-- nvme/functions.sh@21 -- # IFS=:
00:28:12.966   17:12:05	-- nvme/functions.sh@21 -- # read -r reg val
00:28:12.966   17:12:05	-- nvme/functions.sh@22 -- # [[ -n  ms:8   lbads:9  rp:0  ]]
00:28:12.966   17:12:05	-- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8   lbads:9  rp:0 "'
00:28:12.966    17:12:05	-- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8   lbads:9  rp:0 '
00:28:12.966   17:12:05	-- nvme/functions.sh@21 -- # IFS=:
00:28:12.966   17:12:05	-- nvme/functions.sh@21 -- # read -r reg val
00:28:12.966   17:12:05	-- nvme/functions.sh@22 -- # [[ -n  ms:16  lbads:9  rp:0  ]]
00:28:12.966   17:12:05	-- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16  lbads:9  rp:0 "'
00:28:12.966    17:12:05	-- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16  lbads:9  rp:0 '
00:28:12.966   17:12:05	-- nvme/functions.sh@21 -- # IFS=:
00:28:12.966   17:12:05	-- nvme/functions.sh@21 -- # read -r reg val
00:28:12.966   17:12:05	-- nvme/functions.sh@22 -- # [[ -n  ms:64  lbads:9  rp:0  ]]
00:28:12.966   17:12:05	-- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64  lbads:9  rp:0 "'
00:28:12.966    17:12:05	-- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64  lbads:9  rp:0 '
00:28:12.966   17:12:05	-- nvme/functions.sh@21 -- # IFS=:
00:28:12.966   17:12:05	-- nvme/functions.sh@21 -- # read -r reg val
00:28:12.966   17:12:05	-- nvme/functions.sh@22 -- # [[ -n  ms:0   lbads:12 rp:0 (in use) ]]
00:28:12.966   17:12:05	-- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0   lbads:12 rp:0 (in use)"'
00:28:12.966    17:12:05	-- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0   lbads:12 rp:0 (in use)'
00:28:12.966   17:12:05	-- nvme/functions.sh@21 -- # IFS=:
00:28:12.966   17:12:05	-- nvme/functions.sh@21 -- # read -r reg val
00:28:12.966   17:12:05	-- nvme/functions.sh@22 -- # [[ -n  ms:8   lbads:12 rp:0  ]]
00:28:12.966   17:12:05	-- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8   lbads:12 rp:0 "'
00:28:12.966    17:12:05	-- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8   lbads:12 rp:0 '
00:28:12.966   17:12:05	-- nvme/functions.sh@21 -- # IFS=:
00:28:12.966   17:12:05	-- nvme/functions.sh@21 -- # read -r reg val
00:28:12.966   17:12:05	-- nvme/functions.sh@22 -- # [[ -n  ms:16  lbads:12 rp:0  ]]
00:28:12.966   17:12:05	-- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16  lbads:12 rp:0 "'
00:28:12.966    17:12:05	-- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16  lbads:12 rp:0 '
00:28:12.966   17:12:05	-- nvme/functions.sh@21 -- # IFS=:
00:28:12.966   17:12:05	-- nvme/functions.sh@21 -- # read -r reg val
00:28:12.966   17:12:05	-- nvme/functions.sh@22 -- # [[ -n  ms:64  lbads:12 rp:0  ]]
00:28:12.966   17:12:05	-- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64  lbads:12 rp:0 "'
00:28:12.966    17:12:05	-- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64  lbads:12 rp:0 '
00:28:12.966   17:12:05	-- nvme/functions.sh@21 -- # IFS=:
00:28:12.966   17:12:05	-- nvme/functions.sh@21 -- # read -r reg val
00:28:12.966   17:12:05	-- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1
00:28:12.966   17:12:05	-- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0
00:28:12.966   17:12:05	-- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns
00:28:12.966   17:12:05	-- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:06.0
00:28:12.966   17:12:05	-- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0
00:28:12.966   17:12:05	-- nvme/functions.sh@65 -- # (( 1 > 0 ))
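A quick cross-check of the namespace numbers above: flbas=0x4 selects lbaf4 ('ms:0 lbads:12 ... (in use)'), i.e. 4096-byte blocks with no metadata, and nsze=0x140000 blocks at that size is exactly the 5GB the simple-copy test reports below. The arithmetic, as a sketch:

  nsze=$((0x140000))                               # 1310720 blocks, from id-ns above
  lbads=12                                         # lbaf4 in use: 2^12 = 4096-byte blocks
  echo "$(( nsze * (1 << lbads) / 1024**3 ))GiB"   # -> 5GiB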
00:28:12.966    17:12:05	-- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc
00:28:12.966    17:12:05	-- nvme/functions.sh@202 -- # local _ctrls feature=scc
00:28:12.966    17:12:05	-- nvme/functions.sh@204 -- # _ctrls=($(get_ctrls_with_feature "$feature"))
00:28:12.966     17:12:05	-- nvme/functions.sh@204 -- # get_ctrls_with_feature scc
00:28:12.966     17:12:05	-- nvme/functions.sh@190 -- # (( 1 == 0 ))
00:28:12.966     17:12:05	-- nvme/functions.sh@192 -- # local ctrl feature=scc
00:28:12.966      17:12:05	-- nvme/functions.sh@194 -- # type -t ctrl_has_scc
00:28:12.966     17:12:05	-- nvme/functions.sh@194 -- # [[ function == function ]]
00:28:12.966     17:12:05	-- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}"
00:28:12.966     17:12:05	-- nvme/functions.sh@197 -- # ctrl_has_scc nvme0
00:28:12.966     17:12:05	-- nvme/functions.sh@182 -- # local ctrl=nvme0 oncs
00:28:12.966      17:12:05	-- nvme/functions.sh@184 -- # get_oncs nvme0
00:28:12.966      17:12:05	-- nvme/functions.sh@169 -- # local ctrl=nvme0
00:28:12.966      17:12:05	-- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme0 oncs
00:28:12.966      17:12:05	-- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs
00:28:12.966      17:12:05	-- nvme/functions.sh@71 -- # [[ -n nvme0 ]]
00:28:12.966      17:12:05	-- nvme/functions.sh@73 -- # local -n _ctrl=nvme0
00:28:12.966      17:12:05	-- nvme/functions.sh@75 -- # [[ -n 0x15d ]]
00:28:12.966      17:12:05	-- nvme/functions.sh@76 -- # echo 0x15d
00:28:12.966     17:12:05	-- nvme/functions.sh@184 -- # oncs=0x15d
00:28:12.966     17:12:05	-- nvme/functions.sh@186 -- # (( oncs & 1 << 8 ))
00:28:12.966     17:12:05	-- nvme/functions.sh@197 -- # echo nvme0
00:28:12.966    17:12:05	-- nvme/functions.sh@205 -- # (( 1 > 0 ))
00:28:12.966    17:12:05	-- nvme/functions.sh@206 -- # echo nvme0
00:28:12.966    17:12:05	-- nvme/functions.sh@207 -- # return 0
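get_ctrl_with_feature resolves to nvme0 because ctrl_has_scc tests ONCS bit 8 (the Copy command) against the cached id-ctrl value 0x15d, and that bit is set. The test in isolation:

  oncs=$((0x15d))                                                 # from nvme0[oncs] above
  (( oncs & 1 << 8 )) && echo "nvme0 supports Simple Copy (SCC)"  # bit 8 = 0x100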
00:28:12.966   17:12:05	-- nvme/nvme_scc.sh@17 -- # ctrl=nvme0
00:28:12.966   17:12:05	-- nvme/nvme_scc.sh@17 -- # bdf=0000:00:06.0
00:28:12.966   17:12:05	-- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:28:13.226  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev
00:28:13.484  0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic
00:28:14.421   17:12:07	-- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:06.0'
00:28:14.421   17:12:07	-- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']'
00:28:14.421   17:12:07	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:28:14.421   17:12:07	-- common/autotest_common.sh@10 -- # set +x
00:28:14.421  ************************************
00:28:14.421  START TEST nvme_simple_copy
00:28:14.421  ************************************
00:28:14.421   17:12:07	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:06.0'
00:28:14.680  Initializing NVMe Controllers
00:28:14.680  Attaching to 0000:00:06.0
00:28:14.680  Controller supports SCC. Attached to 0000:00:06.0
00:28:14.680    Namespace ID: 1 size: 5GB
00:28:14.680  Initialization complete.
00:28:14.680  
00:28:14.680  Controller QEMU NVMe Ctrl       (12340               )
00:28:14.680  Controller PCI vendor:6966 PCI subsystem vendor:6900
00:28:14.680  Namespace Block Size:4096
00:28:14.680  Writing LBAs 0 to 63 with Random Data
00:28:14.680  Copied LBAs from 0 - 63 to the Destination LBA 256
00:28:14.680  LBAs matching Written Data: 64
00:28:14.680  
00:28:14.680  real	0m0.289s
00:28:14.680  user	0m0.100s
00:28:14.680  sys	0m0.091s
00:28:14.680   17:12:07	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:28:14.680  ************************************
00:28:14.680  END TEST nvme_simple_copy
00:28:14.680  ************************************
00:28:14.680   17:12:07	-- common/autotest_common.sh@10 -- # set +x
00:28:14.680  
00:28:14.680  real	0m2.701s
00:28:14.680  user	0m0.807s
00:28:14.680  sys	0m1.791s
00:28:14.680   17:12:07	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:28:14.680   17:12:07	-- common/autotest_common.sh@10 -- # set +x
00:28:14.680  ************************************
00:28:14.680  END TEST nvme_scc
00:28:14.680  ************************************
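The simple_copy binary wrote random data to LBAs 0-63, issued a Copy with destination LBA 256, and read both ranges back; "LBAs matching Written Data: 64" means every block compared equal. A hedged replay of just the verification step with standard tools (illustrative only; /dev/nvme0n1 and the 4096-byte block size come from the id-ns data above):

  dev=/dev/nvme0n1 bs=4096
  dd if="$dev" bs=$bs skip=0   count=64 status=none > /tmp/src.bin
  dd if="$dev" bs=$bs skip=256 count=64 status=none > /tmp/dst.bin
  cmp /tmp/src.bin /tmp/dst.bin && echo "LBAs matching Written Data: 64"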
00:28:14.680   17:12:07	-- spdk/autotest.sh@216 -- # [[ 0 -eq 1 ]]
00:28:14.680   17:12:07	-- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]]
00:28:14.680   17:12:07	-- spdk/autotest.sh@222 -- # [[ '' -eq 1 ]]
00:28:14.680   17:12:07	-- spdk/autotest.sh@225 -- # [[ 0 -eq 1 ]]
00:28:14.680   17:12:07	-- spdk/autotest.sh@229 -- # [[ '' -eq 1 ]]
00:28:14.680   17:12:07	-- spdk/autotest.sh@233 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh
00:28:14.680   17:12:07	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:28:14.680   17:12:07	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:28:14.680   17:12:07	-- common/autotest_common.sh@10 -- # set +x
00:28:14.680  ************************************
00:28:14.680  START TEST nvme_rpc
00:28:14.680  ************************************
00:28:14.680   17:12:07	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh
00:28:14.940  * Looking for test storage...
00:28:14.940  * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme
00:28:14.940    17:12:07	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:28:14.940     17:12:07	-- common/autotest_common.sh@1690 -- # lcov --version
00:28:14.940     17:12:07	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:28:14.940    17:12:07	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:28:14.940    17:12:07	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:28:14.940    17:12:07	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:28:14.940    17:12:07	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:28:14.940    17:12:07	-- scripts/common.sh@335 -- # IFS=.-:
00:28:14.940    17:12:07	-- scripts/common.sh@335 -- # read -ra ver1
00:28:14.940    17:12:07	-- scripts/common.sh@336 -- # IFS=.-:
00:28:14.940    17:12:07	-- scripts/common.sh@336 -- # read -ra ver2
00:28:14.940    17:12:07	-- scripts/common.sh@337 -- # local 'op=<'
00:28:14.940    17:12:07	-- scripts/common.sh@339 -- # ver1_l=2
00:28:14.940    17:12:07	-- scripts/common.sh@340 -- # ver2_l=1
00:28:14.940    17:12:07	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:28:14.940    17:12:07	-- scripts/common.sh@343 -- # case "$op" in
00:28:14.940    17:12:07	-- scripts/common.sh@344 -- # : 1
00:28:14.940    17:12:07	-- scripts/common.sh@363 -- # (( v = 0 ))
00:28:14.940    17:12:07	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:28:14.940     17:12:07	-- scripts/common.sh@364 -- # decimal 1
00:28:14.940     17:12:07	-- scripts/common.sh@352 -- # local d=1
00:28:14.940     17:12:07	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:28:14.940     17:12:07	-- scripts/common.sh@354 -- # echo 1
00:28:14.940    17:12:07	-- scripts/common.sh@364 -- # ver1[v]=1
00:28:14.940     17:12:07	-- scripts/common.sh@365 -- # decimal 2
00:28:14.940     17:12:07	-- scripts/common.sh@352 -- # local d=2
00:28:14.940     17:12:07	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:28:14.940     17:12:07	-- scripts/common.sh@354 -- # echo 2
00:28:14.940    17:12:07	-- scripts/common.sh@365 -- # ver2[v]=2
00:28:14.940    17:12:07	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:28:14.940    17:12:07	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:28:14.940    17:12:07	-- scripts/common.sh@367 -- # return 0
00:28:14.940    17:12:07	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:28:14.940    17:12:07	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:28:14.940  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:28:14.940  		--rc genhtml_branch_coverage=1
00:28:14.940  		--rc genhtml_function_coverage=1
00:28:14.940  		--rc genhtml_legend=1
00:28:14.940  		--rc geninfo_all_blocks=1
00:28:14.940  		--rc geninfo_unexecuted_blocks=1
00:28:14.940  		
00:28:14.940  		'
00:28:14.940    17:12:07	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:28:14.940  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:28:14.940  		--rc genhtml_branch_coverage=1
00:28:14.940  		--rc genhtml_function_coverage=1
00:28:14.940  		--rc genhtml_legend=1
00:28:14.940  		--rc geninfo_all_blocks=1
00:28:14.940  		--rc geninfo_unexecuted_blocks=1
00:28:14.940  		
00:28:14.940  		'
00:28:14.940    17:12:07	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:28:14.940  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:28:14.940  		--rc genhtml_branch_coverage=1
00:28:14.940  		--rc genhtml_function_coverage=1
00:28:14.940  		--rc genhtml_legend=1
00:28:14.940  		--rc geninfo_all_blocks=1
00:28:14.940  		--rc geninfo_unexecuted_blocks=1
00:28:14.940  		
00:28:14.940  		'
00:28:14.940    17:12:07	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:28:14.940  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:28:14.940  		--rc genhtml_branch_coverage=1
00:28:14.940  		--rc genhtml_function_coverage=1
00:28:14.940  		--rc genhtml_legend=1
00:28:14.940  		--rc geninfo_all_blocks=1
00:28:14.940  		--rc geninfo_unexecuted_blocks=1
00:28:14.940  		
00:28:14.940  		'
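The block ending above is the coverage prologue every test script runs when it sources autotest_common.sh: it splits the installed lcov version on '.', '-' and ':' and compares it field by field against 2 before exporting the --rc lcov_* options. A minimal sketch of that split-and-compare idea, under a hypothetical helper name (the real logic is cmp_versions in scripts/common.sh):

# Sketch: return 0 when dotted version $1 < $2. Assumes purely numeric
# fields, which is all the traced lcov check needs.
version_lt() {
    local -a v1 v2
    IFS=.-: read -ra v1 <<< "$1"
    IFS=.-: read -ra v2 <<< "$2"
    local i max=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < max; i++ )); do
        local a=${v1[i]:-0} b=${v2[i]:-0}
        (( a < b )) && return 0
        (( a > b )) && return 1
    done
    return 1   # equal is not less-than
}

version_lt 1.15 2 && echo "lcov < 2: enable --rc lcov_* coverage options"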
00:28:14.940   17:12:07	-- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:28:14.940    17:12:07	-- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf
00:28:14.940    17:12:07	-- common/autotest_common.sh@1519 -- # bdfs=()
00:28:14.940    17:12:07	-- common/autotest_common.sh@1519 -- # local bdfs
00:28:14.940    17:12:07	-- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs))
00:28:14.940     17:12:07	-- common/autotest_common.sh@1520 -- # get_nvme_bdfs
00:28:14.940     17:12:07	-- common/autotest_common.sh@1508 -- # bdfs=()
00:28:14.940     17:12:07	-- common/autotest_common.sh@1508 -- # local bdfs
00:28:14.940     17:12:07	-- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:28:14.940      17:12:07	-- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
00:28:14.940      17:12:07	-- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr'
00:28:14.940     17:12:07	-- common/autotest_common.sh@1510 -- # (( 1 == 0 ))
00:28:14.940     17:12:07	-- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0
00:28:14.940    17:12:07	-- common/autotest_common.sh@1522 -- # echo 0000:00:06.0
00:28:14.940   17:12:07	-- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:06.0
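get_first_nvme_bdf resolves to 0000:00:06.0 above by asking gen_nvme.sh for an attach-ready config, extracting every traddr with jq, and echoing the first array element. The same extraction, condensed from the traced commands:

# Collect NVMe PCI addresses (BDFs) and keep the first for single-disk tests.
rootdir=/home/vagrant/spdk_repo/spdk
mapfile -t bdfs < <("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')
(( ${#bdfs[@]} > 0 )) || { echo "no NVMe devices found" >&2; exit 1; }
bdf=${bdfs[0]}
echo "first NVMe bdf: $bdf"   # 0000:00:06.0 in this run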
00:28:14.940   17:12:07	-- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=149732
00:28:14.940   17:12:07	-- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3
00:28:14.940   17:12:07	-- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT
00:28:14.940   17:12:07	-- nvme/nvme_rpc.sh@19 -- # waitforlisten 149732
00:28:14.940   17:12:07	-- common/autotest_common.sh@829 -- # '[' -z 149732 ']'
00:28:14.940   17:12:07	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:28:14.940   17:12:07	-- common/autotest_common.sh@834 -- # local max_retries=100
00:28:14.940  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:28:14.940   17:12:07	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:28:14.940   17:12:07	-- common/autotest_common.sh@838 -- # xtrace_disable
00:28:14.940   17:12:07	-- common/autotest_common.sh@10 -- # set +x
00:28:15.200  [2024-11-19 17:12:07.840838] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:28:15.200  [2024-11-19 17:12:07.841004] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid149732 ]
00:28:15.200  [2024-11-19 17:12:07.990915] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2
00:28:15.200  [2024-11-19 17:12:08.044660] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:28:15.200  [2024-11-19 17:12:08.045119] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:28:15.200  [2024-11-19 17:12:08.045123] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:28:16.135   17:12:08	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:28:16.135   17:12:08	-- common/autotest_common.sh@862 -- # return 0
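waitforlisten above spins until the freshly forked spdk_tgt owns /var/tmp/spdk.sock, which is why the attach call that follows can assume a live RPC server. A sketch of that polling loop, assuming (as the real helper does) that rpc.py rpc_get_methods fails until the socket is up:

rootdir=/home/vagrant/spdk_repo/spdk
# Poll until the target answers on its UNIX-domain RPC socket or dies.
wait_for_rpc_socket() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100 i
    for (( i = 0; i < max_retries; i++ )); do
        kill -0 "$pid" 2>/dev/null || return 1      # target exited early
        "$rootdir/scripts/rpc.py" -s "$rpc_addr" rpc_get_methods \
            >/dev/null 2>&1 && return 0             # socket is live
        sleep 0.5
    done
    return 1
}

wait_for_rpc_socket "$spdk_tgt_pid" && echo "spdk_tgt is listening"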
00:28:16.135   17:12:08	-- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:06.0
00:28:16.394  Nvme0n1
00:28:16.394   17:12:09	-- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']'
00:28:16.394   17:12:09	-- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1
00:28:16.703  request:
00:28:16.703  {
00:28:16.703    "filename": "non_existing_file",
00:28:16.703    "bdev_name": "Nvme0n1",
00:28:16.703    "method": "bdev_nvme_apply_firmware",
00:28:16.703    "req_id": 1
00:28:16.703  }
00:28:16.703  Got JSON-RPC error response
00:28:16.703  response:
00:28:16.703  {
00:28:16.703    "code": -32603,
00:28:16.703    "message": "open file failed."
00:28:16.703  }
00:28:16.703   17:12:09	-- nvme/nvme_rpc.sh@32 -- # rv=1
00:28:16.703   17:12:09	-- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']'
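The JSON-RPC exchange above is a deliberate failure: nvme_rpc.sh feeds bdev_nvme_apply_firmware a path that does not exist and requires the call to fail (rv=1, error -32603 "open file failed.") rather than succeed. The negative check, reusing the rpc.py invocation from the trace:

# Negative test: applying firmware from a missing file must fail.
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
if $rpc_py bdev_nvme_apply_firmware non_existing_file Nvme0n1; then
    echo "ERROR: apply_firmware unexpectedly succeeded" >&2
    exit 1
fi
echo "got the expected JSON-RPC failure (code -32603)"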
00:28:16.703   17:12:09	-- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0
00:28:16.703   17:12:09	-- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT
00:28:16.703   17:12:09	-- nvme/nvme_rpc.sh@40 -- # killprocess 149732
00:28:16.703   17:12:09	-- common/autotest_common.sh@936 -- # '[' -z 149732 ']'
00:28:16.703   17:12:09	-- common/autotest_common.sh@940 -- # kill -0 149732
00:28:16.703    17:12:09	-- common/autotest_common.sh@941 -- # uname
00:28:16.703   17:12:09	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:28:16.976    17:12:09	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 149732
00:28:16.976   17:12:09	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:28:16.976   17:12:09	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:28:16.976  killing process with pid 149732
00:28:16.976   17:12:09	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 149732'
00:28:16.976   17:12:09	-- common/autotest_common.sh@955 -- # kill 149732
00:28:16.976   17:12:09	-- common/autotest_common.sh@960 -- # wait 149732
00:28:17.234  
00:28:17.234  real	0m2.473s
00:28:17.234  user	0m4.848s
00:28:17.234  sys	0m0.579s
00:28:17.234   17:12:09	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:28:17.234   17:12:09	-- common/autotest_common.sh@10 -- # set +x
00:28:17.234  ************************************
00:28:17.234  END TEST nvme_rpc
00:28:17.234  ************************************
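The killprocess calls that close each test (here and again at the end of the timeouts and raid5f suites) follow one pattern visible in the trace: verify the pid still exists with kill -0, read its comm name with ps so the harness never signals something like sudo by mistake, then kill and wait. A compact sketch of that guard:

# Guarded teardown for a test-owned process, mirroring the traced checks.
killprocess() {
    local pid=$1
    [[ -n $pid ]] || return 1
    kill -0 "$pid" 2>/dev/null || return 0          # already gone
    local name
    name=$(ps --no-headers -o comm= "$pid")
    [[ $name != sudo ]] || return 1                 # refuse risky targets
    echo "killing process with pid $pid"
    kill "$pid" && wait "$pid" 2>/dev/null          # pid is our child here
}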
00:28:17.234   17:12:10	-- spdk/autotest.sh@234 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh
00:28:17.234   17:12:10	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:28:17.234   17:12:10	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:28:17.234   17:12:10	-- common/autotest_common.sh@10 -- # set +x
00:28:17.234  ************************************
00:28:17.234  START TEST nvme_rpc_timeouts
00:28:17.234  ************************************
00:28:17.234   17:12:10	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh
00:28:17.493  * Looking for test storage...
00:28:17.493  * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme
00:28:17.493    17:12:10	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:28:17.493     17:12:10	-- common/autotest_common.sh@1690 -- # lcov --version
00:28:17.493     17:12:10	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:28:17.493    17:12:10	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:28:17.493    17:12:10	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:28:17.493    17:12:10	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:28:17.493    17:12:10	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:28:17.493    17:12:10	-- scripts/common.sh@335 -- # IFS=.-:
00:28:17.493    17:12:10	-- scripts/common.sh@335 -- # read -ra ver1
00:28:17.493    17:12:10	-- scripts/common.sh@336 -- # IFS=.-:
00:28:17.493    17:12:10	-- scripts/common.sh@336 -- # read -ra ver2
00:28:17.493    17:12:10	-- scripts/common.sh@337 -- # local 'op=<'
00:28:17.493    17:12:10	-- scripts/common.sh@339 -- # ver1_l=2
00:28:17.493    17:12:10	-- scripts/common.sh@340 -- # ver2_l=1
00:28:17.493    17:12:10	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:28:17.493    17:12:10	-- scripts/common.sh@343 -- # case "$op" in
00:28:17.493    17:12:10	-- scripts/common.sh@344 -- # : 1
00:28:17.493    17:12:10	-- scripts/common.sh@363 -- # (( v = 0 ))
00:28:17.493    17:12:10	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:28:17.493     17:12:10	-- scripts/common.sh@364 -- # decimal 1
00:28:17.493     17:12:10	-- scripts/common.sh@352 -- # local d=1
00:28:17.493     17:12:10	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:28:17.493     17:12:10	-- scripts/common.sh@354 -- # echo 1
00:28:17.493    17:12:10	-- scripts/common.sh@364 -- # ver1[v]=1
00:28:17.493     17:12:10	-- scripts/common.sh@365 -- # decimal 2
00:28:17.493     17:12:10	-- scripts/common.sh@352 -- # local d=2
00:28:17.493     17:12:10	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:28:17.493     17:12:10	-- scripts/common.sh@354 -- # echo 2
00:28:17.493    17:12:10	-- scripts/common.sh@365 -- # ver2[v]=2
00:28:17.493    17:12:10	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:28:17.493    17:12:10	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:28:17.493    17:12:10	-- scripts/common.sh@367 -- # return 0
00:28:17.493    17:12:10	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:28:17.493    17:12:10	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:28:17.493  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:28:17.493  		--rc genhtml_branch_coverage=1
00:28:17.493  		--rc genhtml_function_coverage=1
00:28:17.493  		--rc genhtml_legend=1
00:28:17.493  		--rc geninfo_all_blocks=1
00:28:17.493  		--rc geninfo_unexecuted_blocks=1
00:28:17.493  		
00:28:17.493  		'
00:28:17.493    17:12:10	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:28:17.493  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:28:17.493  		--rc genhtml_branch_coverage=1
00:28:17.493  		--rc genhtml_function_coverage=1
00:28:17.493  		--rc genhtml_legend=1
00:28:17.493  		--rc geninfo_all_blocks=1
00:28:17.493  		--rc geninfo_unexecuted_blocks=1
00:28:17.493  		
00:28:17.493  		'
00:28:17.493    17:12:10	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:28:17.493  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:28:17.493  		--rc genhtml_branch_coverage=1
00:28:17.493  		--rc genhtml_function_coverage=1
00:28:17.493  		--rc genhtml_legend=1
00:28:17.493  		--rc geninfo_all_blocks=1
00:28:17.493  		--rc geninfo_unexecuted_blocks=1
00:28:17.493  		
00:28:17.493  		'
00:28:17.493    17:12:10	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:28:17.493  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:28:17.493  		--rc genhtml_branch_coverage=1
00:28:17.493  		--rc genhtml_function_coverage=1
00:28:17.493  		--rc genhtml_legend=1
00:28:17.493  		--rc geninfo_all_blocks=1
00:28:17.493  		--rc geninfo_unexecuted_blocks=1
00:28:17.493  		
00:28:17.493  		'
00:28:17.493   17:12:10	-- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:28:17.493   17:12:10	-- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_149794
00:28:17.493   17:12:10	-- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_149794
00:28:17.493   17:12:10	-- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=149833
00:28:17.494   17:12:10	-- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT
00:28:17.494   17:12:10	-- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 149833
00:28:17.494   17:12:10	-- common/autotest_common.sh@829 -- # '[' -z 149833 ']'
00:28:17.494   17:12:10	-- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3
00:28:17.494   17:12:10	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:28:17.494   17:12:10	-- common/autotest_common.sh@834 -- # local max_retries=100
00:28:17.494  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:28:17.494   17:12:10	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:28:17.494   17:12:10	-- common/autotest_common.sh@838 -- # xtrace_disable
00:28:17.494   17:12:10	-- common/autotest_common.sh@10 -- # set +x
00:28:17.494  [2024-11-19 17:12:10.326056] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:28:17.494  [2024-11-19 17:12:10.326222] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid149833 ]
00:28:17.752  [2024-11-19 17:12:10.469777] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2
00:28:17.752  [2024-11-19 17:12:10.512241] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:28:17.752  [2024-11-19 17:12:10.512627] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:28:17.752  [2024-11-19 17:12:10.512633] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:28:18.688   17:12:11	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:28:18.688   17:12:11	-- common/autotest_common.sh@862 -- # return 0
00:28:18.688  Checking default timeout settings:
00:28:18.688   17:12:11	-- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings:
00:28:18.688   17:12:11	-- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config
00:28:18.946  Making settings changes with rpc:
00:28:18.946   17:12:11	-- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc:
00:28:18.946   17:12:11	-- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort
00:28:18.946  Check default vs. modified settings:
00:28:18.946   17:12:11	-- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. modified settings:
00:28:18.946   17:12:11	-- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config
00:28:19.205   17:12:12	-- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us'
00:28:19.205   17:12:12	-- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check
00:28:19.205    17:12:12	-- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_149794
00:28:19.205    17:12:12	-- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}'
00:28:19.205    17:12:12	-- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g'
00:28:19.205   17:12:12	-- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none
00:28:19.205    17:12:12	-- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_149794
00:28:19.205    17:12:12	-- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}'
00:28:19.205    17:12:12	-- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g'
00:28:19.205   17:12:12	-- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort
00:28:19.205   17:12:12	-- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']'
00:28:19.205   17:12:12	-- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected.
00:28:19.205  Setting action_on_timeout is changed as expected.
00:28:19.205   17:12:12	-- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check
00:28:19.205    17:12:12	-- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_149794
00:28:19.205    17:12:12	-- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}'
00:28:19.205    17:12:12	-- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g'
00:28:19.464   17:12:12	-- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0
00:28:19.464    17:12:12	-- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_149794
00:28:19.464    17:12:12	-- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}'
00:28:19.464    17:12:12	-- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g'
00:28:19.464   17:12:12	-- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000
00:28:19.464   17:12:12	-- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']'
00:28:19.464   17:12:12	-- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected.
00:28:19.464  Setting timeout_us is changed as expected.
00:28:19.464   17:12:12	-- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check
00:28:19.464    17:12:12	-- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_149794
00:28:19.464    17:12:12	-- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}'
00:28:19.464    17:12:12	-- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g'
00:28:19.464   17:12:12	-- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0
00:28:19.464    17:12:12	-- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_149794
00:28:19.464    17:12:12	-- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}'
00:28:19.464    17:12:12	-- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g'
00:28:19.464   17:12:12	-- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000
00:28:19.464  Setting timeout_admin_us is changed as expected.
00:28:19.464   17:12:12	-- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']'
00:28:19.464   17:12:12	-- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected.
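The grep | awk | sed pipelines above are the heart of the timeouts test: save_config is dumped to /tmp/settings_default_149794 before bdev_nvme_set_options (--timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort) and to /tmp/settings_modified_149794 after, then each key is extracted from both dumps and the values compared. One such comparison, assuming as the pipeline does that the saved config keeps each setting on a single key/value line:

# Compare one setting between the default and modified config dumps.
extract_setting() {   # extract_setting FILE KEY
    grep "$2" "$1" | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g'
}

before=$(extract_setting /tmp/settings_default_149794 timeout_us)
after=$(extract_setting /tmp/settings_modified_149794 timeout_us)
if [[ $before == "$after" ]]; then
    echo "ERROR: timeout_us did not change ($before)" >&2
else
    echo "Setting timeout_us is changed as expected. ($before -> $after)"
fi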
00:28:19.464   17:12:12	-- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT
00:28:19.464   17:12:12	-- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_149794 /tmp/settings_modified_149794
00:28:19.464   17:12:12	-- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 149833
00:28:19.464   17:12:12	-- common/autotest_common.sh@936 -- # '[' -z 149833 ']'
00:28:19.464   17:12:12	-- common/autotest_common.sh@940 -- # kill -0 149833
00:28:19.464    17:12:12	-- common/autotest_common.sh@941 -- # uname
00:28:19.464   17:12:12	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:28:19.464    17:12:12	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 149833
00:28:19.464   17:12:12	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:28:19.464   17:12:12	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:28:19.464   17:12:12	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 149833'
00:28:19.464  killing process with pid 149833
00:28:19.464   17:12:12	-- common/autotest_common.sh@955 -- # kill 149833
00:28:19.464   17:12:12	-- common/autotest_common.sh@960 -- # wait 149833
00:28:19.724  RPC TIMEOUT SETTING TEST PASSED.
00:28:19.724   17:12:12	-- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED.
00:28:19.724  ************************************
00:28:19.724  END TEST nvme_rpc_timeouts
00:28:19.724  ************************************
00:28:19.724  
00:28:19.724  real	0m2.466s
00:28:19.724  user	0m4.760s
00:28:19.724  sys	0m0.666s
00:28:19.724   17:12:12	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:28:19.724   17:12:12	-- common/autotest_common.sh@10 -- # set +x
00:28:19.982   17:12:12	-- spdk/autotest.sh@238 -- # '[' 1 -eq 0 ']'
00:28:19.982   17:12:12	-- spdk/autotest.sh@242 -- # [[ 0 -eq 1 ]]
00:28:19.982   17:12:12	-- spdk/autotest.sh@251 -- # '[' 0 -eq 1 ']'
00:28:19.982   17:12:12	-- spdk/autotest.sh@255 -- # timing_exit lib
00:28:19.982   17:12:12	-- common/autotest_common.sh@728 -- # xtrace_disable
00:28:19.982   17:12:12	-- common/autotest_common.sh@10 -- # set +x
00:28:19.982   17:12:12	-- spdk/autotest.sh@257 -- # '[' 0 -eq 1 ']'
00:28:19.982   17:12:12	-- spdk/autotest.sh@265 -- # '[' 0 -eq 1 ']'
00:28:19.982   17:12:12	-- spdk/autotest.sh@274 -- # '[' 0 -eq 1 ']'
00:28:19.982   17:12:12	-- spdk/autotest.sh@298 -- # '[' 0 -eq 1 ']'
00:28:19.982   17:12:12	-- spdk/autotest.sh@302 -- # '[' 0 -eq 1 ']'
00:28:19.982   17:12:12	-- spdk/autotest.sh@306 -- # '[' 0 -eq 1 ']'
00:28:19.982   17:12:12	-- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']'
00:28:19.982   17:12:12	-- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']'
00:28:19.982   17:12:12	-- spdk/autotest.sh@325 -- # '[' 0 -eq 1 ']'
00:28:19.983   17:12:12	-- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']'
00:28:19.983   17:12:12	-- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']'
00:28:19.983   17:12:12	-- spdk/autotest.sh@337 -- # '[' 0 -eq 1 ']'
00:28:19.983   17:12:12	-- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']'
00:28:19.983   17:12:12	-- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']'
00:28:19.983   17:12:12	-- spdk/autotest.sh@353 -- # [[ 0 -eq 1 ]]
00:28:19.983   17:12:12	-- spdk/autotest.sh@357 -- # [[ 0 -eq 1 ]]
00:28:19.983   17:12:12	-- spdk/autotest.sh@361 -- # [[ 0 -eq 1 ]]
00:28:19.983   17:12:12	-- spdk/autotest.sh@365 -- # [[ 1 -eq 1 ]]
00:28:19.983   17:12:12	-- spdk/autotest.sh@366 -- # run_test blockdev_raid5f /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f
00:28:19.983   17:12:12	-- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']'
00:28:19.983   17:12:12	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:28:19.983   17:12:12	-- common/autotest_common.sh@10 -- # set +x
00:28:19.983  ************************************
00:28:19.983  START TEST blockdev_raid5f
00:28:19.983  ************************************
00:28:19.983   17:12:12	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f
00:28:19.983  * Looking for test storage...
00:28:19.983  * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev
00:28:19.983    17:12:12	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:28:19.983     17:12:12	-- common/autotest_common.sh@1690 -- # lcov --version
00:28:19.983     17:12:12	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:28:20.242    17:12:12	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:28:20.242    17:12:12	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:28:20.242    17:12:12	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:28:20.242    17:12:12	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:28:20.242    17:12:12	-- scripts/common.sh@335 -- # IFS=.-:
00:28:20.242    17:12:12	-- scripts/common.sh@335 -- # read -ra ver1
00:28:20.242    17:12:12	-- scripts/common.sh@336 -- # IFS=.-:
00:28:20.242    17:12:12	-- scripts/common.sh@336 -- # read -ra ver2
00:28:20.242    17:12:12	-- scripts/common.sh@337 -- # local 'op=<'
00:28:20.242    17:12:12	-- scripts/common.sh@339 -- # ver1_l=2
00:28:20.242    17:12:12	-- scripts/common.sh@340 -- # ver2_l=1
00:28:20.242    17:12:12	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:28:20.242    17:12:12	-- scripts/common.sh@343 -- # case "$op" in
00:28:20.242    17:12:12	-- scripts/common.sh@344 -- # : 1
00:28:20.242    17:12:12	-- scripts/common.sh@363 -- # (( v = 0 ))
00:28:20.242    17:12:12	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:28:20.242     17:12:12	-- scripts/common.sh@364 -- # decimal 1
00:28:20.242     17:12:12	-- scripts/common.sh@352 -- # local d=1
00:28:20.242     17:12:12	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:28:20.242     17:12:12	-- scripts/common.sh@354 -- # echo 1
00:28:20.242    17:12:12	-- scripts/common.sh@364 -- # ver1[v]=1
00:28:20.242     17:12:12	-- scripts/common.sh@365 -- # decimal 2
00:28:20.242     17:12:12	-- scripts/common.sh@352 -- # local d=2
00:28:20.242     17:12:12	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:28:20.242     17:12:12	-- scripts/common.sh@354 -- # echo 2
00:28:20.242    17:12:12	-- scripts/common.sh@365 -- # ver2[v]=2
00:28:20.242    17:12:12	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:28:20.242    17:12:12	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:28:20.242    17:12:12	-- scripts/common.sh@367 -- # return 0
00:28:20.242    17:12:12	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:28:20.242    17:12:12	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:28:20.242  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:28:20.242  		--rc genhtml_branch_coverage=1
00:28:20.242  		--rc genhtml_function_coverage=1
00:28:20.242  		--rc genhtml_legend=1
00:28:20.242  		--rc geninfo_all_blocks=1
00:28:20.242  		--rc geninfo_unexecuted_blocks=1
00:28:20.242  		
00:28:20.242  		'
00:28:20.242    17:12:12	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:28:20.242  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:28:20.242  		--rc genhtml_branch_coverage=1
00:28:20.242  		--rc genhtml_function_coverage=1
00:28:20.242  		--rc genhtml_legend=1
00:28:20.242  		--rc geninfo_all_blocks=1
00:28:20.242  		--rc geninfo_unexecuted_blocks=1
00:28:20.242  		
00:28:20.242  		'
00:28:20.242    17:12:12	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:28:20.242  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:28:20.242  		--rc genhtml_branch_coverage=1
00:28:20.242  		--rc genhtml_function_coverage=1
00:28:20.242  		--rc genhtml_legend=1
00:28:20.242  		--rc geninfo_all_blocks=1
00:28:20.242  		--rc geninfo_unexecuted_blocks=1
00:28:20.242  		
00:28:20.242  		'
00:28:20.242    17:12:12	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:28:20.242  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:28:20.242  		--rc genhtml_branch_coverage=1
00:28:20.242  		--rc genhtml_function_coverage=1
00:28:20.242  		--rc genhtml_legend=1
00:28:20.242  		--rc geninfo_all_blocks=1
00:28:20.242  		--rc geninfo_unexecuted_blocks=1
00:28:20.242  		
00:28:20.242  		'
00:28:20.242   17:12:12	-- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh
00:28:20.242    17:12:12	-- bdev/nbd_common.sh@6 -- # set -e
00:28:20.242   17:12:12	-- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd
00:28:20.242   17:12:12	-- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
00:28:20.242   17:12:12	-- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json
00:28:20.242   17:12:12	-- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json
00:28:20.242   17:12:12	-- bdev/blockdev.sh@18 -- # :
00:28:20.242   17:12:12	-- bdev/blockdev.sh@668 -- # QOS_DEV_1=Malloc_0
00:28:20.242   17:12:12	-- bdev/blockdev.sh@669 -- # QOS_DEV_2=Null_1
00:28:20.242   17:12:12	-- bdev/blockdev.sh@670 -- # QOS_RUN_TIME=5
00:28:20.242    17:12:12	-- bdev/blockdev.sh@672 -- # uname -s
00:28:20.242   17:12:12	-- bdev/blockdev.sh@672 -- # '[' Linux = Linux ']'
00:28:20.242   17:12:12	-- bdev/blockdev.sh@674 -- # PRE_RESERVED_MEM=0
00:28:20.242   17:12:12	-- bdev/blockdev.sh@680 -- # test_type=raid5f
00:28:20.242   17:12:12	-- bdev/blockdev.sh@681 -- # crypto_device=
00:28:20.242   17:12:12	-- bdev/blockdev.sh@682 -- # dek=
00:28:20.242   17:12:12	-- bdev/blockdev.sh@683 -- # env_ctx=
00:28:20.242   17:12:12	-- bdev/blockdev.sh@684 -- # wait_for_rpc=
00:28:20.242   17:12:12	-- bdev/blockdev.sh@685 -- # '[' -n '' ']'
00:28:20.242   17:12:12	-- bdev/blockdev.sh@688 -- # [[ raid5f == bdev ]]
00:28:20.242   17:12:12	-- bdev/blockdev.sh@688 -- # [[ raid5f == crypto_* ]]
00:28:20.242   17:12:12	-- bdev/blockdev.sh@691 -- # start_spdk_tgt
00:28:20.242   17:12:12	-- bdev/blockdev.sh@45 -- # spdk_tgt_pid=149972
00:28:20.242   17:12:12	-- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT
00:28:20.242   17:12:12	-- bdev/blockdev.sh@47 -- # waitforlisten 149972
00:28:20.242   17:12:12	-- common/autotest_common.sh@829 -- # '[' -z 149972 ']'
00:28:20.242   17:12:12	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:28:20.242   17:12:12	-- common/autotest_common.sh@834 -- # local max_retries=100
00:28:20.243   17:12:12	-- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' ''
00:28:20.243   17:12:12	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:28:20.243  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:28:20.243   17:12:12	-- common/autotest_common.sh@838 -- # xtrace_disable
00:28:20.243   17:12:12	-- common/autotest_common.sh@10 -- # set +x
00:28:20.243  [2024-11-19 17:12:12.968354] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:28:20.243  [2024-11-19 17:12:12.969277] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid149972 ]
00:28:20.502  [2024-11-19 17:12:13.123763] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:20.502  [2024-11-19 17:12:13.174053] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:28:20.502  [2024-11-19 17:12:13.174304] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:28:21.071   17:12:13	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:28:21.071   17:12:13	-- common/autotest_common.sh@862 -- # return 0
00:28:21.071   17:12:13	-- bdev/blockdev.sh@692 -- # case "$test_type" in
00:28:21.071   17:12:13	-- bdev/blockdev.sh@724 -- # setup_raid5f_conf
00:28:21.071   17:12:13	-- bdev/blockdev.sh@278 -- # rpc_cmd
00:28:21.071   17:12:13	-- common/autotest_common.sh@561 -- # xtrace_disable
00:28:21.071   17:12:13	-- common/autotest_common.sh@10 -- # set +x
00:28:21.331  Malloc0
00:28:21.331  Malloc1
00:28:21.331  Malloc2
00:28:21.331   17:12:13	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:21.331   17:12:13	-- bdev/blockdev.sh@735 -- # rpc_cmd bdev_wait_for_examine
00:28:21.331   17:12:13	-- common/autotest_common.sh@561 -- # xtrace_disable
00:28:21.331   17:12:13	-- common/autotest_common.sh@10 -- # set +x
00:28:21.331   17:12:13	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:21.331   17:12:13	-- bdev/blockdev.sh@738 -- # cat
00:28:21.331    17:12:13	-- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n accel
00:28:21.331    17:12:13	-- common/autotest_common.sh@561 -- # xtrace_disable
00:28:21.331    17:12:13	-- common/autotest_common.sh@10 -- # set +x
00:28:21.331    17:12:13	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:21.331    17:12:13	-- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n bdev
00:28:21.331    17:12:13	-- common/autotest_common.sh@561 -- # xtrace_disable
00:28:21.331    17:12:13	-- common/autotest_common.sh@10 -- # set +x
00:28:21.331    17:12:14	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:21.331    17:12:14	-- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n iobuf
00:28:21.331    17:12:14	-- common/autotest_common.sh@561 -- # xtrace_disable
00:28:21.331    17:12:14	-- common/autotest_common.sh@10 -- # set +x
00:28:21.331    17:12:14	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:21.331   17:12:14	-- bdev/blockdev.sh@746 -- # mapfile -t bdevs
00:28:21.331    17:12:14	-- bdev/blockdev.sh@746 -- # rpc_cmd bdev_get_bdevs
00:28:21.331    17:12:14	-- bdev/blockdev.sh@746 -- # jq -r '.[] | select(.claimed == false)'
00:28:21.331    17:12:14	-- common/autotest_common.sh@561 -- # xtrace_disable
00:28:21.331    17:12:14	-- common/autotest_common.sh@10 -- # set +x
00:28:21.331    17:12:14	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:21.331   17:12:14	-- bdev/blockdev.sh@747 -- # mapfile -t bdevs_name
00:28:21.331    17:12:14	-- bdev/blockdev.sh@747 -- # jq -r .name
00:28:21.331    17:12:14	-- bdev/blockdev.sh@747 -- # printf '%s\n' '{' '  "name": "raid5f",' '  "aliases": [' '    "a0145004-dc14-4ef8-8059-f635c20d7ee3"' '  ],' '  "product_name": "Raid Volume",' '  "block_size": 512,' '  "num_blocks": 131072,' '  "uuid": "a0145004-dc14-4ef8-8059-f635c20d7ee3",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": false,' '    "write_zeroes": true,' '    "flush": false,' '    "reset": true,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": false,' '    "nvme_admin": false,' '    "nvme_io": false' '  },' '  "driver_specific": {' '    "raid": {' '      "uuid": "a0145004-dc14-4ef8-8059-f635c20d7ee3",' '      "strip_size_kb": 2,' '      "state": "online",' '      "raid_level": "raid5f",' '      "superblock": false,' '      "num_base_bdevs": 3,' '      "num_base_bdevs_discovered": 3,' '      "num_base_bdevs_operational": 3,' '      "base_bdevs_list": [' '        {' '          "name": "Malloc0",' '          "uuid": "3833a6fb-4965-4bc4-96dc-65bff4b77a73",' '          "is_configured": true,' '          "data_offset": 0,' '          "data_size": 65536' '        },' '        {' '          "name": "Malloc1",' '          "uuid": "181e1ce7-4d75-4914-956b-63465473c446",' '          "is_configured": true,' '          "data_offset": 0,' '          "data_size": 65536' '        },' '        {' '          "name": "Malloc2",' '          "uuid": "3ad80bfe-da17-4a1f-ac0d-530cf47042b4",' '          "is_configured": true,' '          "data_offset": 0,' '          "data_size": 65536' '        }' '      ]' '    }' '  }' '}'
00:28:21.331   17:12:14	-- bdev/blockdev.sh@748 -- # bdev_list=("${bdevs_name[@]}")
00:28:21.331   17:12:14	-- bdev/blockdev.sh@750 -- # hello_world_bdev=raid5f
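Above, bdev_get_bdevs is filtered with jq for unclaimed bdevs, the raid5f volume's full JSON lands in bdevs, and a second jq pass reduces it to names so raid5f can become hello_world_bdev. The same two-step extraction against a running target (rpc.py standing in for the rpc_cmd wrapper used in the trace):

rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# Keep only bdevs nothing has claimed, one JSON object per line.
mapfile -t bdevs < <($rpc_py bdev_get_bdevs | jq -c '.[] | select(.claimed == false)')
# Reduce each object to its name.
mapfile -t bdev_names < <(printf '%s\n' "${bdevs[@]}" | jq -r .name)
printf 'unclaimed bdev: %s\n' "${bdev_names[@]}"   # raid5f in this run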
00:28:21.331   17:12:14	-- bdev/blockdev.sh@751 -- # trap - SIGINT SIGTERM EXIT
00:28:21.331   17:12:14	-- bdev/blockdev.sh@752 -- # killprocess 149972
00:28:21.331   17:12:14	-- common/autotest_common.sh@936 -- # '[' -z 149972 ']'
00:28:21.331   17:12:14	-- common/autotest_common.sh@940 -- # kill -0 149972
00:28:21.331    17:12:14	-- common/autotest_common.sh@941 -- # uname
00:28:21.331   17:12:14	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:28:21.331    17:12:14	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 149972
00:28:21.331   17:12:14	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:28:21.331   17:12:14	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:28:21.331   17:12:14	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 149972'
00:28:21.331  killing process with pid 149972
00:28:21.331   17:12:14	-- common/autotest_common.sh@955 -- # kill 149972
00:28:21.331   17:12:14	-- common/autotest_common.sh@960 -- # wait 149972
00:28:21.899   17:12:14	-- bdev/blockdev.sh@756 -- # trap cleanup SIGINT SIGTERM EXIT
00:28:21.899   17:12:14	-- bdev/blockdev.sh@758 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f ''
00:28:21.899   17:12:14	-- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']'
00:28:21.899   17:12:14	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:28:21.899   17:12:14	-- common/autotest_common.sh@10 -- # set +x
00:28:21.899  ************************************
00:28:21.899  START TEST bdev_hello_world
00:28:21.899  ************************************
00:28:21.899   17:12:14	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f ''
00:28:21.899  [2024-11-19 17:12:14.678016] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:28:21.899  [2024-11-19 17:12:14.678346] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid150026 ]
00:28:22.157  [2024-11-19 17:12:14.821971] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:22.157  [2024-11-19 17:12:14.868228] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:28:22.434  [2024-11-19 17:12:15.067643] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application
00:28:22.434  [2024-11-19 17:12:15.067732] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev raid5f
00:28:22.434  [2024-11-19 17:12:15.067767] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel
00:28:22.434  [2024-11-19 17:12:15.068150] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev
00:28:22.434  [2024-11-19 17:12:15.068335] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully
00:28:22.434  [2024-11-19 17:12:15.068369] hello_bdev.c:  84:hello_read: *NOTICE*: Reading io
00:28:22.434  [2024-11-19 17:12:15.068440] hello_bdev.c:  65:read_complete: *NOTICE*: Read string from bdev : Hello World!
00:28:22.434  
00:28:22.434  [2024-11-19 17:12:15.068482] hello_bdev.c:  74:read_complete: *NOTICE*: Stopping app
00:28:22.705  
00:28:22.705  real	0m0.725s
00:28:22.705  user	0m0.369s
00:28:22.705  sys	0m0.237s
00:28:22.705   17:12:15	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:28:22.705   17:12:15	-- common/autotest_common.sh@10 -- # set +x
00:28:22.705  ************************************
00:28:22.705  END TEST bdev_hello_world
00:28:22.705  ************************************
00:28:22.705   17:12:15	-- bdev/blockdev.sh@759 -- # run_test bdev_bounds bdev_bounds ''
00:28:22.705   17:12:15	-- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']'
00:28:22.705   17:12:15	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:28:22.705   17:12:15	-- common/autotest_common.sh@10 -- # set +x
00:28:22.705  ************************************
00:28:22.705  START TEST bdev_bounds
00:28:22.705  ************************************
00:28:22.705   17:12:15	-- common/autotest_common.sh@1114 -- # bdev_bounds ''
00:28:22.705   17:12:15	-- bdev/blockdev.sh@288 -- # bdevio_pid=150058
00:28:22.705   17:12:15	-- bdev/blockdev.sh@289 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT
00:28:22.705   17:12:15	-- bdev/blockdev.sh@290 -- # echo 'Process bdevio pid: 150058'
00:28:22.705  Process bdevio pid: 150058
00:28:22.705   17:12:15	-- bdev/blockdev.sh@291 -- # waitforlisten 150058
00:28:22.705   17:12:15	-- bdev/blockdev.sh@287 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json ''
00:28:22.705   17:12:15	-- common/autotest_common.sh@829 -- # '[' -z 150058 ']'
00:28:22.705   17:12:15	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:28:22.705   17:12:15	-- common/autotest_common.sh@834 -- # local max_retries=100
00:28:22.705  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:28:22.705   17:12:15	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:28:22.705   17:12:15	-- common/autotest_common.sh@838 -- # xtrace_disable
00:28:22.705   17:12:15	-- common/autotest_common.sh@10 -- # set +x
00:28:22.705  [2024-11-19 17:12:15.487785] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:28:22.705  [2024-11-19 17:12:15.488236] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid150058 ]
00:28:22.964  [2024-11-19 17:12:15.650205] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3
00:28:22.964  [2024-11-19 17:12:15.698591] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:28:22.964  [2024-11-19 17:12:15.698738] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:28:22.964  [2024-11-19 17:12:15.698736] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:28:23.528   17:12:16	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:28:23.528   17:12:16	-- common/autotest_common.sh@862 -- # return 0
00:28:23.529   17:12:16	-- bdev/blockdev.sh@292 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests
00:28:23.786  I/O targets:
00:28:23.786    raid5f: 131072 blocks of 512 bytes (64 MiB)
00:28:23.786  
00:28:23.786  
00:28:23.786       CUnit - A unit testing framework for C - Version 2.1-3
00:28:23.786       http://cunit.sourceforge.net/
00:28:23.787  
00:28:23.787  
00:28:23.787  Suite: bdevio tests on: raid5f
00:28:23.787    Test: blockdev write read block ...passed
00:28:23.787    Test: blockdev write zeroes read block ...passed
00:28:23.787    Test: blockdev write zeroes read no split ...passed
00:28:23.787    Test: blockdev write zeroes read split ...passed
00:28:23.787    Test: blockdev write zeroes read split partial ...passed
00:28:23.787    Test: blockdev reset ...passed
00:28:23.787    Test: blockdev write read 8 blocks ...passed
00:28:23.787    Test: blockdev write read size > 128k ...passed
00:28:23.787    Test: blockdev write read invalid size ...passed
00:28:23.787    Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:28:23.787    Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:28:23.787    Test: blockdev write read max offset ...passed
00:28:23.787    Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:28:23.787    Test: blockdev writev readv 8 blocks ...passed
00:28:23.787    Test: blockdev writev readv 30 x 1block ...passed
00:28:23.787    Test: blockdev writev readv block ...passed
00:28:23.787    Test: blockdev writev readv size > 128k ...passed
00:28:23.787    Test: blockdev writev readv size > 128k in two iovs ...passed
00:28:23.787    Test: blockdev comparev and writev ...passed
00:28:23.787    Test: blockdev nvme passthru rw ...passed
00:28:23.787    Test: blockdev nvme passthru vendor specific ...passed
00:28:23.787    Test: blockdev nvme admin passthru ...passed
00:28:23.787    Test: blockdev copy ...passed
00:28:23.787  
00:28:23.787  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:28:23.787                suites      1      1    n/a      0        0
00:28:23.787                 tests     23     23     23      0        0
00:28:23.787               asserts    130    130    130      0      n/a
00:28:23.787  
00:28:23.787  Elapsed time =    0.324 seconds
00:28:23.787  0
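All 23 bdevio tests pass against the 64 MiB raid5f volume in 0.324 seconds. The invocation pattern, condensed from the trace (the trailing empty env-context argument omitted): start bdevio as an RPC server, wait for its socket, then drive the suite with tests.py:

spdk_dir=/home/vagrant/spdk_repo/spdk
# -w: wait for RPC before running; -s 0: no reserved hugepage memory.
"$spdk_dir/test/bdev/bdevio/bdevio" -w -s 0 \
    --json "$spdk_dir/test/bdev/bdev.json" &
bdevio_pid=$!
# ... wait for /var/tmp/spdk.sock as in the earlier polling sketch, then:
"$spdk_dir/test/bdev/bdevio/tests.py" perform_tests
kill "$bdevio_pid"; wait "$bdevio_pid" 2>/dev/null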
00:28:23.787   17:12:16	-- bdev/blockdev.sh@293 -- # killprocess 150058
00:28:23.787   17:12:16	-- common/autotest_common.sh@936 -- # '[' -z 150058 ']'
00:28:23.787   17:12:16	-- common/autotest_common.sh@940 -- # kill -0 150058
00:28:23.787    17:12:16	-- common/autotest_common.sh@941 -- # uname
00:28:23.787   17:12:16	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:28:23.787    17:12:16	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 150058
00:28:24.046  killing process with pid 150058
00:28:24.046   17:12:16	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:28:24.046   17:12:16	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:28:24.046   17:12:16	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 150058'
00:28:24.046   17:12:16	-- common/autotest_common.sh@955 -- # kill 150058
00:28:24.046   17:12:16	-- common/autotest_common.sh@960 -- # wait 150058
00:28:24.305   17:12:16	-- bdev/blockdev.sh@294 -- # trap - SIGINT SIGTERM EXIT
00:28:24.305  
00:28:24.305  real	0m1.522s
00:28:24.305  user	0m3.688s
00:28:24.305  sys	0m0.336s
00:28:24.305   17:12:16	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:28:24.305   17:12:16	-- common/autotest_common.sh@10 -- # set +x
00:28:24.305  ************************************
00:28:24.305  END TEST bdev_bounds
00:28:24.305  ************************************
00:28:24.305   17:12:16	-- bdev/blockdev.sh@760 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f ''
00:28:24.305   17:12:16	-- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']'
00:28:24.305   17:12:16	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:28:24.305   17:12:16	-- common/autotest_common.sh@10 -- # set +x
00:28:24.305  ************************************
00:28:24.305  START TEST bdev_nbd
00:28:24.305  ************************************
00:28:24.305   17:12:17	-- common/autotest_common.sh@1114 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f ''
00:28:24.305    17:12:17	-- bdev/blockdev.sh@298 -- # uname -s
00:28:24.305   17:12:17	-- bdev/blockdev.sh@298 -- # [[ Linux == Linux ]]
00:28:24.305   17:12:17	-- bdev/blockdev.sh@300 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:28:24.305   17:12:17	-- bdev/blockdev.sh@301 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
00:28:24.305   17:12:17	-- bdev/blockdev.sh@302 -- # bdev_all=('raid5f')
00:28:24.305   17:12:17	-- bdev/blockdev.sh@302 -- # local bdev_all
00:28:24.305   17:12:17	-- bdev/blockdev.sh@303 -- # local bdev_num=1
00:28:24.305   17:12:17	-- bdev/blockdev.sh@307 -- # [[ -e /sys/module/nbd ]]
00:28:24.305   17:12:17	-- bdev/blockdev.sh@309 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9')
00:28:24.305   17:12:17	-- bdev/blockdev.sh@309 -- # local nbd_all
00:28:24.305   17:12:17	-- bdev/blockdev.sh@310 -- # bdev_num=1
00:28:24.305   17:12:17	-- bdev/blockdev.sh@312 -- # nbd_list=('/dev/nbd0')
00:28:24.305   17:12:17	-- bdev/blockdev.sh@312 -- # local nbd_list
00:28:24.305   17:12:17	-- bdev/blockdev.sh@313 -- # bdev_list=('raid5f')
00:28:24.305   17:12:17	-- bdev/blockdev.sh@313 -- # local bdev_list
00:28:24.305   17:12:17	-- bdev/blockdev.sh@316 -- # nbd_pid=150114
00:28:24.305   17:12:17	-- bdev/blockdev.sh@317 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT
00:28:24.305   17:12:17	-- bdev/blockdev.sh@318 -- # waitforlisten 150114 /var/tmp/spdk-nbd.sock
00:28:24.305   17:12:17	-- common/autotest_common.sh@829 -- # '[' -z 150114 ']'
00:28:24.305   17:12:17	-- bdev/blockdev.sh@315 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json ''
00:28:24.305   17:12:17	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:28:24.305   17:12:17	-- common/autotest_common.sh@834 -- # local max_retries=100
00:28:24.305  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:28:24.305   17:12:17	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:28:24.305   17:12:17	-- common/autotest_common.sh@838 -- # xtrace_disable
00:28:24.305   17:12:17	-- common/autotest_common.sh@10 -- # set +x
00:28:24.305  [2024-11-19 17:12:17.061234] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:28:24.305  [2024-11-19 17:12:17.061396] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:28:24.564  [2024-11-19 17:12:17.198857] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:24.564  [2024-11-19 17:12:17.244359] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:28:25.130   17:12:17	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:28:25.130   17:12:17	-- common/autotest_common.sh@862 -- # return 0
00:28:25.130   17:12:17	-- bdev/blockdev.sh@320 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock raid5f
00:28:25.130   17:12:17	-- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:28:25.130   17:12:17	-- bdev/nbd_common.sh@114 -- # bdev_list=('raid5f')
00:28:25.130   17:12:17	-- bdev/nbd_common.sh@114 -- # local bdev_list
00:28:25.130   17:12:17	-- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock raid5f
00:28:25.130   17:12:17	-- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:28:25.130   17:12:17	-- bdev/nbd_common.sh@23 -- # bdev_list=('raid5f')
00:28:25.130   17:12:17	-- bdev/nbd_common.sh@23 -- # local bdev_list
00:28:25.130   17:12:17	-- bdev/nbd_common.sh@24 -- # local i
00:28:25.130   17:12:17	-- bdev/nbd_common.sh@25 -- # local nbd_device
00:28:25.130   17:12:17	-- bdev/nbd_common.sh@27 -- # (( i = 0 ))
00:28:25.130   17:12:17	-- bdev/nbd_common.sh@27 -- # (( i < 1 ))
00:28:25.130    17:12:17	-- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f
00:28:25.389   17:12:18	-- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0
00:28:25.390    17:12:18	-- bdev/nbd_common.sh@30 -- # basename /dev/nbd0
00:28:25.390   17:12:18	-- bdev/nbd_common.sh@30 -- # waitfornbd nbd0
00:28:25.390   17:12:18	-- common/autotest_common.sh@866 -- # local nbd_name=nbd0
00:28:25.390   17:12:18	-- common/autotest_common.sh@867 -- # local i
00:28:25.390   17:12:18	-- common/autotest_common.sh@869 -- # (( i = 1 ))
00:28:25.390   17:12:18	-- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:28:25.390   17:12:18	-- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions
00:28:25.390   17:12:18	-- common/autotest_common.sh@871 -- # break
00:28:25.390   17:12:18	-- common/autotest_common.sh@882 -- # (( i = 1 ))
00:28:25.390   17:12:18	-- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:28:25.390   17:12:18	-- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:28:25.390  1+0 records in
00:28:25.390  1+0 records out
00:28:25.390  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000463711 s, 8.8 MB/s
00:28:25.390    17:12:18	-- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:28:25.390   17:12:18	-- common/autotest_common.sh@884 -- # size=4096
00:28:25.390   17:12:18	-- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:28:25.390   17:12:18	-- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:28:25.390   17:12:18	-- common/autotest_common.sh@887 -- # return 0
00:28:25.390   17:12:18	-- bdev/nbd_common.sh@27 -- # (( i++ ))
00:28:25.390   17:12:18	-- bdev/nbd_common.sh@27 -- # (( i < 1 ))
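waitfornbd above polls /proc/partitions until the kernel exposes nbd0, then proves the device is usable with a single 4 KiB O_DIRECT read (the 1+0 records and stat lines). The readiness check, condensed (the harness additionally stat-checks that the copied file is non-empty, elided here):

# Wait for an nbd device to appear, then verify one direct 4 KiB read.
waitfornbd() {
    local nbd_name=$1 i
    for (( i = 1; i <= 20; i++ )); do
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1
    done
    grep -q -w "$nbd_name" /proc/partitions || return 1
    dd if=/dev/"$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct \
        && rm -f /tmp/nbdtest
}

waitfornbd nbd0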
00:28:25.390    17:12:18	-- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:28:25.649   17:12:18	-- bdev/nbd_common.sh@118 -- # nbd_disks_json='[
00:28:25.649    {
00:28:25.649      "nbd_device": "/dev/nbd0",
00:28:25.649      "bdev_name": "raid5f"
00:28:25.649    }
00:28:25.649  ]'
00:28:25.649   17:12:18	-- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device'))
00:28:25.649    17:12:18	-- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device'
00:28:25.649    17:12:18	-- bdev/nbd_common.sh@119 -- # echo '[
00:28:25.649    {
00:28:25.649      "nbd_device": "/dev/nbd0",
00:28:25.649      "bdev_name": "raid5f"
00:28:25.649    }
00:28:25.649  ]'
00:28:25.649   17:12:18	-- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0
00:28:25.649   17:12:18	-- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:28:25.649   17:12:18	-- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:28:25.649   17:12:18	-- bdev/nbd_common.sh@50 -- # local nbd_list
00:28:25.649   17:12:18	-- bdev/nbd_common.sh@51 -- # local i
00:28:25.649   17:12:18	-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:28:25.908   17:12:18	-- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:28:25.908    17:12:18	-- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:28:25.908   17:12:18	-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:28:25.908   17:12:18	-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:28:25.908   17:12:18	-- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:28:25.908   17:12:18	-- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:28:25.908   17:12:18	-- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:28:25.908   17:12:18	-- bdev/nbd_common.sh@41 -- # break
00:28:25.908   17:12:18	-- bdev/nbd_common.sh@45 -- # return 0
00:28:25.908    17:12:18	-- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:28:25.908    17:12:18	-- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:28:25.908     17:12:18	-- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:28:26.167    17:12:18	-- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:28:26.167     17:12:18	-- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:28:26.167     17:12:18	-- bdev/nbd_common.sh@64 -- # echo '[]'
00:28:26.167    17:12:18	-- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:28:26.167     17:12:18	-- bdev/nbd_common.sh@65 -- # echo ''
00:28:26.167     17:12:18	-- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:28:26.167     17:12:18	-- bdev/nbd_common.sh@65 -- # true
00:28:26.167    17:12:18	-- bdev/nbd_common.sh@65 -- # count=0
00:28:26.167    17:12:18	-- bdev/nbd_common.sh@66 -- # echo 0
00:28:26.167   17:12:18	-- bdev/nbd_common.sh@122 -- # count=0
00:28:26.167   17:12:18	-- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']'
00:28:26.167   17:12:18	-- bdev/nbd_common.sh@127 -- # return 0
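nbd_get_count above turns the nbd_get_disks JSON into a device count with jq plus grep -c; the bare "true" in the trace is the guard for grep -c exiting non-zero when it counts zero matches. Condensed:

# Count nbd devices currently exported over the nbd RPC socket.
rpc_py="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
names=$($rpc_py nbd_get_disks | jq -r '.[] | .nbd_device')
# grep -c exits 1 on zero matches, hence the || true guard.
count=$(echo "$names" | grep -c /dev/nbd || true)
echo "connected nbd devices: $count"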
00:28:26.167   17:12:18	-- bdev/blockdev.sh@321 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock raid5f /dev/nbd0
00:28:26.167   17:12:18	-- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:28:26.167   17:12:18	-- bdev/nbd_common.sh@91 -- # bdev_list=('raid5f')
00:28:26.167   17:12:18	-- bdev/nbd_common.sh@91 -- # local bdev_list
00:28:26.167   17:12:19	-- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0')
00:28:26.167   17:12:19	-- bdev/nbd_common.sh@92 -- # local nbd_list
00:28:26.167   17:12:19	-- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock raid5f /dev/nbd0
00:28:26.167   17:12:19	-- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:28:26.167   17:12:19	-- bdev/nbd_common.sh@10 -- # bdev_list=('raid5f')
00:28:26.167   17:12:19	-- bdev/nbd_common.sh@10 -- # local bdev_list
00:28:26.167   17:12:19	-- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0')
00:28:26.167   17:12:19	-- bdev/nbd_common.sh@11 -- # local nbd_list
00:28:26.167   17:12:19	-- bdev/nbd_common.sh@12 -- # local i
00:28:26.167   17:12:19	-- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:28:26.167   17:12:19	-- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:28:26.167   17:12:19	-- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f /dev/nbd0
00:28:26.426  /dev/nbd0
00:28:26.426    17:12:19	-- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:28:26.426   17:12:19	-- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:28:26.426   17:12:19	-- common/autotest_common.sh@866 -- # local nbd_name=nbd0
00:28:26.426   17:12:19	-- common/autotest_common.sh@867 -- # local i
00:28:26.426   17:12:19	-- common/autotest_common.sh@869 -- # (( i = 1 ))
00:28:26.426   17:12:19	-- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:28:26.426   17:12:19	-- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions
00:28:26.426   17:12:19	-- common/autotest_common.sh@871 -- # break
00:28:26.426   17:12:19	-- common/autotest_common.sh@882 -- # (( i = 1 ))
00:28:26.426   17:12:19	-- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:28:26.426   17:12:19	-- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:28:26.426  1+0 records in
00:28:26.426  1+0 records out
00:28:26.426  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00069725 s, 5.9 MB/s
00:28:26.426    17:12:19	-- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:28:26.426   17:12:19	-- common/autotest_common.sh@884 -- # size=4096
00:28:26.426   17:12:19	-- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:28:26.426   17:12:19	-- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:28:26.426   17:12:19	-- common/autotest_common.sh@887 -- # return 0
00:28:26.426   17:12:19	-- bdev/nbd_common.sh@14 -- # (( i++ ))
00:28:26.426   17:12:19	-- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:28:26.426    17:12:19	-- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:28:26.426    17:12:19	-- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:28:26.426     17:12:19	-- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:28:26.685    17:12:19	-- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:28:26.685    {
00:28:26.685      "nbd_device": "/dev/nbd0",
00:28:26.685      "bdev_name": "raid5f"
00:28:26.685    }
00:28:26.685  ]'
00:28:26.685     17:12:19	-- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:28:26.685     17:12:19	-- bdev/nbd_common.sh@64 -- # echo '[
00:28:26.685    {
00:28:26.685      "nbd_device": "/dev/nbd0",
00:28:26.685      "bdev_name": "raid5f"
00:28:26.685    }
00:28:26.685  ]'
00:28:26.685    17:12:19	-- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0
00:28:26.685     17:12:19	-- bdev/nbd_common.sh@65 -- # echo /dev/nbd0
00:28:26.685     17:12:19	-- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:28:26.685    17:12:19	-- bdev/nbd_common.sh@65 -- # count=1
00:28:26.685    17:12:19	-- bdev/nbd_common.sh@66 -- # echo 1
00:28:26.685   17:12:19	-- bdev/nbd_common.sh@95 -- # count=1
00:28:26.685   17:12:19	-- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']'
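nbd_get_count, traced above, is how the suite cross-checks the kernel's view against the SPDK app's: ask the app for its exports over RPC, pull the device paths out of the JSON with jq, and count them. Roughly, with the socket and script paths as in the trace:

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
  names=$($rpc nbd_get_disks | jq -r '.[] | .nbd_device')   # one /dev/nbdX per line
  count=$(echo "$names" | grep -c /dev/nbd || true)         # grep -c exits 1 on zero matches
  [ "$count" -eq 1 ]                                        # expect exactly the one export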
00:28:26.685   17:12:19	-- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write
00:28:26.685   17:12:19	-- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0')
00:28:26.685   17:12:19	-- bdev/nbd_common.sh@70 -- # local nbd_list
00:28:26.685   17:12:19	-- bdev/nbd_common.sh@71 -- # local operation=write
00:28:26.685   17:12:19	-- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
00:28:26.685   17:12:19	-- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:28:26.685   17:12:19	-- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256
00:28:26.685  256+0 records in
00:28:26.685  256+0 records out
00:28:26.685  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00987419 s, 106 MB/s
00:28:26.685   17:12:19	-- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:28:26.685   17:12:19	-- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:28:26.685  256+0 records in
00:28:26.685  256+0 records out
00:28:26.685  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0282471 s, 37.1 MB/s
00:28:26.685   17:12:19	-- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify
00:28:26.685   17:12:19	-- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0')
00:28:26.685   17:12:19	-- bdev/nbd_common.sh@70 -- # local nbd_list
00:28:26.685   17:12:19	-- bdev/nbd_common.sh@71 -- # local operation=verify
00:28:26.685   17:12:19	-- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
00:28:26.685   17:12:19	-- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:28:26.685   17:12:19	-- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:28:26.685   17:12:19	-- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:28:26.685   17:12:19	-- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0
00:28:26.944   17:12:19	-- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
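That write/verify pair is the actual data-integrity check of the test: 1 MiB of random data goes through the NBD device with O_DIRECT and is then compared byte-for-byte against the source file, so any corruption in the raid5f write path surfaces as a cmp mismatch. Condensed:

  tmp=/tmp/nbdrandtest                                      # the suite keeps this under test/bdev
  dd if=/dev/urandom of="$tmp" bs=4096 count=256            # 1 MiB of random payload
  dd if="$tmp" of=/dev/nbd0 bs=4096 count=256 oflag=direct  # write it through the NBD export
  cmp -b -n 1M "$tmp" /dev/nbd0                             # byte-wise compare; non-zero exit on any diff
  rm "$tmp"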
00:28:26.944   17:12:19	-- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0
00:28:26.944   17:12:19	-- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:28:26.944   17:12:19	-- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:28:26.944   17:12:19	-- bdev/nbd_common.sh@50 -- # local nbd_list
00:28:26.944   17:12:19	-- bdev/nbd_common.sh@51 -- # local i
00:28:26.944   17:12:19	-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:28:26.944   17:12:19	-- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:28:27.206    17:12:19	-- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:28:27.206   17:12:19	-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:28:27.206   17:12:19	-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:28:27.206   17:12:19	-- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:28:27.206   17:12:19	-- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:28:27.206   17:12:19	-- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:28:27.206   17:12:19	-- bdev/nbd_common.sh@41 -- # break
00:28:27.206   17:12:19	-- bdev/nbd_common.sh@45 -- # return 0
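Tear-down mirrors start-up: after nbd_stop_disk the helper polls until the name drops out of /proc/partitions, so the next test never races a half-detached device. A sketch, with the sleep again assumed:

  waitfornbd_exit() {
      local nbd_name=$1 i
      for ((i = 1; i <= 20; i++)); do
          # Done as soon as the kernel no longer lists the device.
          grep -q -w "$nbd_name" /proc/partitions || break
          sleep 0.1
      done
  }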
00:28:27.206    17:12:19	-- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:28:27.206    17:12:19	-- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:28:27.206     17:12:19	-- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:28:27.466    17:12:20	-- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:28:27.466     17:12:20	-- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:28:27.466     17:12:20	-- bdev/nbd_common.sh@64 -- # echo '[]'
00:28:27.466    17:12:20	-- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:28:27.467     17:12:20	-- bdev/nbd_common.sh@65 -- # echo ''
00:28:27.467     17:12:20	-- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:28:27.467     17:12:20	-- bdev/nbd_common.sh@65 -- # true
00:28:27.467    17:12:20	-- bdev/nbd_common.sh@65 -- # count=0
00:28:27.467    17:12:20	-- bdev/nbd_common.sh@66 -- # echo 0
00:28:27.467   17:12:20	-- bdev/nbd_common.sh@104 -- # count=0
00:28:27.467   17:12:20	-- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:28:27.467   17:12:20	-- bdev/nbd_common.sh@109 -- # return 0
00:28:27.467   17:12:20	-- bdev/blockdev.sh@322 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0
00:28:27.467   17:12:20	-- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:28:27.467   17:12:20	-- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0')
00:28:27.467   17:12:20	-- bdev/nbd_common.sh@132 -- # local nbd_list
00:28:27.467   17:12:20	-- bdev/nbd_common.sh@133 -- # local mkfs_ret
00:28:27.467   17:12:20	-- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512
00:28:27.725  malloc_lvol_verify
00:28:27.725   17:12:20	-- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs
00:28:27.984  be7e6f85-3fdf-465e-857c-17381fdf91a5
00:28:27.984   17:12:20	-- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs
00:28:28.242  f2dd02c4-b473-498a-9346-da4e4bb1b850
00:28:28.242   17:12:20	-- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0
00:28:28.506  /dev/nbd0
00:28:28.506   17:12:21	-- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0
00:28:28.506  mke2fs 1.46.5 (30-Dec-2021)
00:28:28.506  
00:28:28.506  Discarding device blocks: done
00:28:28.506  Creating filesystem with 1024 4k blocks and 1024 inodes
00:28:28.506  Filesystem too small for a journal
00:28:28.506  Allocating group tables: done
00:28:28.506  Writing inode tables: done
00:28:28.506  Writing superblocks and filesystem accounting information: done
00:28:28.506  
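nbd_with_lvol_verify stacks a logical volume on a fresh malloc bdev and proves the whole chain handles real I/O by formatting it. The RPC sequence from the trace, condensed (with $rpc as in the count sketch above):

  $rpc bdev_malloc_create -b malloc_lvol_verify 16 512   # 16 MiB backing bdev, 512 B blocks
  $rpc bdev_lvol_create_lvstore malloc_lvol_verify lvs   # lvstore on top of it
  $rpc bdev_lvol_create lvol 4 -l lvs                    # 4 MiB lvol inside the store
  $rpc nbd_start_disk lvs/lvol /dev/nbd0                 # export the lvol as /dev/nbd0
  mkfs.ext4 /dev/nbd0                                    # mkfs success means the stack survives real writes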
00:28:28.506   17:12:21	-- bdev/nbd_common.sh@141 -- # mkfs_ret=0
00:28:28.506   17:12:21	-- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0
00:28:28.506   17:12:21	-- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:28:28.506   17:12:21	-- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:28:28.506   17:12:21	-- bdev/nbd_common.sh@50 -- # local nbd_list
00:28:28.506   17:12:21	-- bdev/nbd_common.sh@51 -- # local i
00:28:28.506   17:12:21	-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:28:28.506   17:12:21	-- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:28:28.763    17:12:21	-- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:28:28.763   17:12:21	-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:28:28.763   17:12:21	-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:28:28.763   17:12:21	-- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:28:28.763   17:12:21	-- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:28:28.763   17:12:21	-- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:28:28.763   17:12:21	-- bdev/nbd_common.sh@41 -- # break
00:28:28.763   17:12:21	-- bdev/nbd_common.sh@45 -- # return 0
00:28:28.764   17:12:21	-- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']'
00:28:28.764   17:12:21	-- bdev/nbd_common.sh@147 -- # return 0
00:28:28.764   17:12:21	-- bdev/blockdev.sh@324 -- # killprocess 150114
00:28:28.764   17:12:21	-- common/autotest_common.sh@936 -- # '[' -z 150114 ']'
00:28:28.764   17:12:21	-- common/autotest_common.sh@940 -- # kill -0 150114
00:28:28.764    17:12:21	-- common/autotest_common.sh@941 -- # uname
00:28:28.764   17:12:21	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:28:28.764    17:12:21	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 150114
00:28:28.764   17:12:21	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:28:28.764  killing process with pid 150114
00:28:28.764   17:12:21	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:28:28.764   17:12:21	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 150114'
00:28:28.764   17:12:21	-- common/autotest_common.sh@955 -- # kill 150114
00:28:28.764   17:12:21	-- common/autotest_common.sh@960 -- # wait 150114
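killprocess, used here to stop the NBD app, never signals blindly: it checks the pid is non-empty and alive (kill -0), reads the process name back via ps to guard against a recycled pid (and to sudo-kill when the target is a sudo wrapper), then kills and waits so the RPC socket is definitely gone before the next test. Roughly:

  killprocess() {
      local pid=$1
      [ -n "$pid" ] || return 1
      kill -0 "$pid" || return 1                      # still alive?
      local name
      name=$(ps --no-headers -o comm= "$pid")         # a recycled pid would show another comm
      echo "killing process with pid $pid"
      if [ "$name" = sudo ]; then sudo kill "$pid"; else kill "$pid"; fi
      wait "$pid"                                     # reap it; sockets and hugepages get released
  }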
00:28:29.022   17:12:21	-- bdev/blockdev.sh@325 -- # trap - SIGINT SIGTERM EXIT
00:28:29.022  
00:28:29.022  real	0m4.743s
00:28:29.022  user	0m7.102s
00:28:29.022  sys	0m1.261s
00:28:29.022   17:12:21	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:28:29.022   17:12:21	-- common/autotest_common.sh@10 -- # set +x
00:28:29.022  ************************************
00:28:29.022  END TEST bdev_nbd
00:28:29.022  ************************************
00:28:29.022   17:12:21	-- bdev/blockdev.sh@761 -- # [[ y == y ]]
00:28:29.022   17:12:21	-- bdev/blockdev.sh@762 -- # '[' raid5f = nvme ']'
00:28:29.022   17:12:21	-- bdev/blockdev.sh@762 -- # '[' raid5f = gpt ']'
00:28:29.022   17:12:21	-- bdev/blockdev.sh@766 -- # run_test bdev_fio fio_test_suite ''
00:28:29.022   17:12:21	-- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']'
00:28:29.022   17:12:21	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:28:29.022   17:12:21	-- common/autotest_common.sh@10 -- # set +x
00:28:29.022  ************************************
00:28:29.022  START TEST bdev_fio
00:28:29.022  ************************************
00:28:29.022   17:12:21	-- common/autotest_common.sh@1114 -- # fio_test_suite ''
00:28:29.022  /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk
00:28:29.022   17:12:21	-- bdev/blockdev.sh@329 -- # local env_context
00:28:29.022   17:12:21	-- bdev/blockdev.sh@333 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev
00:28:29.022   17:12:21	-- bdev/blockdev.sh@334 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT
00:28:29.022    17:12:21	-- bdev/blockdev.sh@337 -- # echo ''
00:28:29.022    17:12:21	-- bdev/blockdev.sh@337 -- # sed s/--env-context=//
00:28:29.022   17:12:21	-- bdev/blockdev.sh@337 -- # env_context=
00:28:29.022   17:12:21	-- bdev/blockdev.sh@338 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO ''
00:28:29.022   17:12:21	-- common/autotest_common.sh@1269 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio
00:28:29.022   17:12:21	-- common/autotest_common.sh@1270 -- # local workload=verify
00:28:29.022   17:12:21	-- common/autotest_common.sh@1271 -- # local bdev_type=AIO
00:28:29.022   17:12:21	-- common/autotest_common.sh@1272 -- # local env_context=
00:28:29.022   17:12:21	-- common/autotest_common.sh@1273 -- # local fio_dir=/usr/src/fio
00:28:29.022   17:12:21	-- common/autotest_common.sh@1275 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']'
00:28:29.022   17:12:21	-- common/autotest_common.sh@1280 -- # '[' -z verify ']'
00:28:29.022   17:12:21	-- common/autotest_common.sh@1284 -- # '[' -n '' ']'
00:28:29.022   17:12:21	-- common/autotest_common.sh@1288 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio
00:28:29.022   17:12:21	-- common/autotest_common.sh@1290 -- # cat
00:28:29.022   17:12:21	-- common/autotest_common.sh@1302 -- # '[' verify == verify ']'
00:28:29.022   17:12:21	-- common/autotest_common.sh@1303 -- # cat
00:28:29.022   17:12:21	-- common/autotest_common.sh@1312 -- # '[' AIO == AIO ']'
00:28:29.022    17:12:21	-- common/autotest_common.sh@1313 -- # /usr/src/fio/fio --version
00:28:29.281   17:12:21	-- common/autotest_common.sh@1313 -- # [[ fio-3.35 == *\f\i\o\-\3* ]]
00:28:29.281   17:12:21	-- common/autotest_common.sh@1314 -- # echo serialize_overlap=1
00:28:29.281   17:12:21	-- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}"
00:28:29.281   17:12:21	-- bdev/blockdev.sh@340 -- # echo '[job_raid5f]'
00:28:29.281   17:12:21	-- bdev/blockdev.sh@341 -- # echo filename=raid5f
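fio_config_gen builds bdev.fio on the fly: a canned [global] section for the requested workload (verify here), serialize_overlap=1 when the fio binary is a 3.x release, then one job section per bdev. A sketch reconstructed from the traced tail of the function; the canned [global] contents are elided because they never appear in the log:

  config=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio
  # fio 3.x understands serialize_overlap; the version gate from the trace:
  [[ $(/usr/src/fio/fio --version) == *fio-3* ]] && echo serialize_overlap=1 >> "$config"
  # one [job_*] section per bdev under test:
  for b in raid5f; do
      printf '[job_%s]\nfilename=%s\n' "$b" "$b" >> "$config"
  done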
00:28:29.281   17:12:21	-- bdev/blockdev.sh@345 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 			--verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json'
00:28:29.281   17:12:21	-- bdev/blockdev.sh@347 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output
00:28:29.281   17:12:21	-- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']'
00:28:29.281   17:12:21	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:28:29.281   17:12:21	-- common/autotest_common.sh@10 -- # set +x
00:28:29.281  ************************************
00:28:29.281  START TEST bdev_fio_rw_verify
00:28:29.281  ************************************
00:28:29.281   17:12:21	-- common/autotest_common.sh@1114 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output
00:28:29.281   17:12:21	-- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output
00:28:29.281   17:12:21	-- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio
00:28:29.281   17:12:21	-- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:28:29.281   17:12:21	-- common/autotest_common.sh@1328 -- # local sanitizers
00:28:29.281   17:12:21	-- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:28:29.281   17:12:21	-- common/autotest_common.sh@1330 -- # shift
00:28:29.281   17:12:21	-- common/autotest_common.sh@1332 -- # local asan_lib=
00:28:29.281   17:12:21	-- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}"
00:28:29.281    17:12:21	-- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:28:29.281    17:12:21	-- common/autotest_common.sh@1334 -- # grep libasan
00:28:29.281    17:12:21	-- common/autotest_common.sh@1334 -- # awk '{print $3}'
00:28:29.281   17:12:21	-- common/autotest_common.sh@1334 -- # asan_lib=/lib/x86_64-linux-gnu/libasan.so.6
00:28:29.281   17:12:21	-- common/autotest_common.sh@1335 -- # [[ -n /lib/x86_64-linux-gnu/libasan.so.6 ]]
00:28:29.281   17:12:21	-- common/autotest_common.sh@1336 -- # break
00:28:29.281   17:12:21	-- common/autotest_common.sh@1341 -- # LD_PRELOAD='/lib/x86_64-linux-gnu/libasan.so.6 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev'
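The fio plugin was built with AddressSanitizer while fio itself was not, so the ASan runtime has to be the first object loaded. The suite discovers the exact libasan the plugin links against by walking its ldd output, then preloads it ahead of the plugin:

  plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
  asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')  # /lib/x86_64-linux-gnu/libasan.so.6 here
  [ -n "$asan_lib" ] && export LD_PRELOAD="$asan_lib $plugin"  # ASan runtime first, then the plugin

fio then runs as the stock binary with SPDK's spdk_bdev engine resolved through the preload, which is exactly what the next invocation shows.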
00:28:29.281   17:12:21	-- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output
00:28:29.281  job_raid5f: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:28:29.281  fio-3.35
00:28:29.281  Starting 1 thread
00:28:41.474  
00:28:41.474  job_raid5f: (groupid=0, jobs=1): err= 0: pid=150333: Tue Nov 19 17:12:32 2024
00:28:41.474    read: IOPS=9658, BW=37.7MiB/s (39.6MB/s)(377MiB/10001msec)
00:28:41.474      slat (usec): min=17, max=131, avg=24.65, stdev= 3.10
00:28:41.474      clat (usec): min=11, max=373, avg=164.81, stdev=60.76
00:28:41.474       lat (usec): min=32, max=432, avg=189.46, stdev=61.64
00:28:41.474      clat percentiles (usec):
00:28:41.474       | 50.000th=[  163], 99.000th=[  281], 99.900th=[  318], 99.990th=[  359],
00:28:41.474       | 99.999th=[  375]
00:28:41.474    write: IOPS=10.1k, BW=39.5MiB/s (41.4MB/s)(390MiB/9881msec); 0 zone resets
00:28:41.474      slat (usec): min=7, max=139, avg=21.25, stdev= 5.20
00:28:41.474      clat (usec): min=78, max=2144, avg=377.76, stdev=115.10
00:28:41.474       lat (usec): min=100, max=2214, avg=399.00, stdev=118.88
00:28:41.474      clat percentiles (usec):
00:28:41.474       | 50.000th=[  375], 99.000th=[  515], 99.900th=[ 1860], 99.990th=[ 2024],
00:28:41.474       | 99.999th=[ 2147]
00:28:41.474     bw (  KiB/s): min=34576, max=46824, per=99.54%, avg=40213.05, stdev=3893.57, samples=19
00:28:41.474     iops        : min= 8644, max=11706, avg=10053.26, stdev=973.39, samples=19
00:28:41.474    lat (usec)   : 20=0.01%, 50=0.01%, 100=9.19%, 250=36.73%, 500=53.44%
00:28:41.474    lat (usec)   : 750=0.30%, 1000=0.03%
00:28:41.474    lat (msec)   : 2=0.31%, 4=0.01%
00:28:41.474    cpu          : usr=99.60%, sys=0.36%, ctx=194, majf=0, minf=9928
00:28:41.474    IO depths    : 1=7.6%, 2=20.0%, 4=55.0%, 8=17.4%, 16=0.0%, 32=0.0%, >=64=0.0%
00:28:41.474       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:28:41.474       complete  : 0=0.0%, 4=90.0%, 8=10.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:28:41.474       issued rwts: total=96598,99793,0,0 short=0,0,0,0 dropped=0,0,0,0
00:28:41.474       latency   : target=0, window=0, percentile=100.00%, depth=8
00:28:41.474  
00:28:41.474  Run status group 0 (all jobs):
00:28:41.474     READ: bw=37.7MiB/s (39.6MB/s), 37.7MiB/s-37.7MiB/s (39.6MB/s-39.6MB/s), io=377MiB (396MB), run=10001-10001msec
00:28:41.474    WRITE: bw=39.5MiB/s (41.4MB/s), 39.5MiB/s-39.5MiB/s (41.4MB/s-41.4MB/s), io=390MiB (409MB), run=9881-9881msec
00:28:41.474  -----------------------------------------------------
00:28:41.474  Suppressions used:
00:28:41.474    count      bytes template
00:28:41.474        1          7 /usr/src/fio/parse.c
00:28:41.474      192      18432 /usr/src/fio/iolog.c
00:28:41.474        1        904 libcrypto.so
00:28:41.474  -----------------------------------------------------
00:28:41.474  
00:28:41.474  
00:28:41.474  real	0m11.338s
00:28:41.474  user	0m12.314s
00:28:41.474  sys	0m0.656s
00:28:41.474   17:12:33	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:28:41.474   17:12:33	-- common/autotest_common.sh@10 -- # set +x
00:28:41.474  ************************************
00:28:41.474  END TEST bdev_fio_rw_verify
00:28:41.474  ************************************
00:28:41.474   17:12:33	-- bdev/blockdev.sh@348 -- # rm -f
00:28:41.474   17:12:33	-- bdev/blockdev.sh@349 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio
00:28:41.474   17:12:33	-- bdev/blockdev.sh@352 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' ''
00:28:41.474   17:12:33	-- common/autotest_common.sh@1269 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio
00:28:41.474   17:12:33	-- common/autotest_common.sh@1270 -- # local workload=trim
00:28:41.474   17:12:33	-- common/autotest_common.sh@1271 -- # local bdev_type=
00:28:41.474   17:12:33	-- common/autotest_common.sh@1272 -- # local env_context=
00:28:41.474   17:12:33	-- common/autotest_common.sh@1273 -- # local fio_dir=/usr/src/fio
00:28:41.474   17:12:33	-- common/autotest_common.sh@1275 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']'
00:28:41.475   17:12:33	-- common/autotest_common.sh@1280 -- # '[' -z trim ']'
00:28:41.475   17:12:33	-- common/autotest_common.sh@1284 -- # '[' -n '' ']'
00:28:41.475   17:12:33	-- common/autotest_common.sh@1288 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio
00:28:41.475   17:12:33	-- common/autotest_common.sh@1290 -- # cat
00:28:41.475   17:12:33	-- common/autotest_common.sh@1302 -- # '[' trim == verify ']'
00:28:41.475   17:12:33	-- common/autotest_common.sh@1317 -- # '[' trim == trim ']'
00:28:41.475   17:12:33	-- common/autotest_common.sh@1318 -- # echo rw=trimwrite
00:28:41.475    17:12:33	-- bdev/blockdev.sh@353 -- # jq -r 'select(.supported_io_types.unmap == true) | .name'
00:28:41.475    17:12:33	-- bdev/blockdev.sh@353 -- # printf '%s\n' '{' '  "name": "raid5f",' '  "aliases": [' '    "a0145004-dc14-4ef8-8059-f635c20d7ee3"' '  ],' '  "product_name": "Raid Volume",' '  "block_size": 512,' '  "num_blocks": 131072,' '  "uuid": "a0145004-dc14-4ef8-8059-f635c20d7ee3",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": false,' '    "write_zeroes": true,' '    "flush": false,' '    "reset": true,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": false,' '    "nvme_admin": false,' '    "nvme_io": false' '  },' '  "driver_specific": {' '    "raid": {' '      "uuid": "a0145004-dc14-4ef8-8059-f635c20d7ee3",' '      "strip_size_kb": 2,' '      "state": "online",' '      "raid_level": "raid5f",' '      "superblock": false,' '      "num_base_bdevs": 3,' '      "num_base_bdevs_discovered": 3,' '      "num_base_bdevs_operational": 3,' '      "base_bdevs_list": [' '        {' '          "name": "Malloc0",' '          "uuid": "3833a6fb-4965-4bc4-96dc-65bff4b77a73",' '          "is_configured": true,' '          "data_offset": 0,' '          "data_size": 65536' '        },' '        {' '          "name": "Malloc1",' '          "uuid": "181e1ce7-4d75-4914-956b-63465473c446",' '          "is_configured": true,' '          "data_offset": 0,' '          "data_size": 65536' '        },' '        {' '          "name": "Malloc2",' '          "uuid": "3ad80bfe-da17-4a1f-ac0d-530cf47042b4",' '          "is_configured": true,' '          "data_offset": 0,' '          "data_size": 65536' '        }' '      ]' '    }' '  }' '}'
00:28:41.475   17:12:33	-- bdev/blockdev.sh@353 -- # [[ -n '' ]]
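The trim pass only applies to bdevs that can service unmap, so the suite filters the bdev's JSON description with jq before writing trim jobs; raid5f reports "unmap": false, the filter prints nothing, and the [[ -n '' ]] check above falls through, skipping trim entirely. The filter on its own, with $bdev_json standing in for the stored bdev description from the trace:

  # Print the names of bdevs that advertise unmap (trim) support.
  echo "$bdev_json" | jq -r 'select(.supported_io_types.unmap == true) | .name'
  # raid5f produces no output here, so no trim job sections are generated.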
00:28:41.475   17:12:33	-- bdev/blockdev.sh@359 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio
00:28:41.475   17:12:33	-- bdev/blockdev.sh@360 -- # popd
00:28:41.475  /home/vagrant/spdk_repo/spdk
00:28:41.475   17:12:33	-- bdev/blockdev.sh@361 -- # trap - SIGINT SIGTERM EXIT
00:28:41.475   17:12:33	-- bdev/blockdev.sh@362 -- # return 0
00:28:41.475  
00:28:41.475  real	0m11.537s
00:28:41.475  user	0m12.420s
00:28:41.475  sys	0m0.751s
00:28:41.475   17:12:33	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:28:41.475   17:12:33	-- common/autotest_common.sh@10 -- # set +x
00:28:41.475  ************************************
00:28:41.475  END TEST bdev_fio
00:28:41.475  ************************************
00:28:41.475   17:12:33	-- bdev/blockdev.sh@773 -- # trap cleanup SIGINT SIGTERM EXIT
00:28:41.475   17:12:33	-- bdev/blockdev.sh@775 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''
00:28:41.475   17:12:33	-- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']'
00:28:41.475   17:12:33	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:28:41.475   17:12:33	-- common/autotest_common.sh@10 -- # set +x
00:28:41.475  ************************************
00:28:41.475  START TEST bdev_verify
00:28:41.475  ************************************
00:28:41.475   17:12:33	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''
00:28:41.475  [2024-11-19 17:12:33.475123] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:28:41.475  [2024-11-19 17:12:33.475544] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid150506 ]
00:28:41.475  [2024-11-19 17:12:33.635666] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2
00:28:41.475  [2024-11-19 17:12:33.720800] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:28:41.475  [2024-11-19 17:12:33.720810] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
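bdev_verify swaps NBD for SPDK's own bdevperf example app: queue depth 128, 4 KiB I/Os, -w verify (every write read back and checked), a 5 second run, and -C with core mask 0x3 so both reactor cores drive the same raid5f bdev, which is why the results below show one row per core. A condensed sketch of the invocation; the flag readings are my interpretation of bdevperf usage, not stated in the trace:

  # Queue depth 128, 4 KiB I/Os, verify workload for 5 s, cores 0-1:
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
      -q 128 -o 4096 -w verify -t 5 -C -m 0x3
  # -m 0x3 enables cores 0 and 1; -C lets every enabled core submit
  # to the same bdev (hence the two per-core result rows below).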
00:28:41.475  Running I/O for 5 seconds...
00:28:46.748  
00:28:46.748                                                                                                  Latency(us)
00:28:46.748  
[2024-11-19T17:12:39.612Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:28:46.748  
[2024-11-19T17:12:39.612Z]  Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:28:46.748  	 Verification LBA range: start 0x0 length 0x2000
00:28:46.748  	 raid5f              :       5.02    6782.00      26.49       0.00     0.00   29902.63     207.73   22219.82
00:28:46.748  
[2024-11-19T17:12:39.612Z]  Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:28:46.748  	 Verification LBA range: start 0x2000 length 0x2000
00:28:46.748  	 raid5f              :       5.02    7573.30      29.58       0.00     0.00   26782.21     425.20   20097.71
00:28:46.748  
[2024-11-19T17:12:39.612Z]  ===================================================================================================================
00:28:46.748  
[2024-11-19T17:12:39.612Z]  Total                       :              14355.30      56.08       0.00     0.00   28256.48     207.73   22219.82
00:28:46.748  
00:28:46.748  real	0m6.067s
00:28:46.748  user	0m11.179s
00:28:46.748  sys	0m0.329s
00:28:46.748   17:12:39	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:28:46.748   17:12:39	-- common/autotest_common.sh@10 -- # set +x
00:28:46.748  ************************************
00:28:46.748  END TEST bdev_verify
00:28:46.748  ************************************
00:28:46.748   17:12:39	-- bdev/blockdev.sh@776 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:28:46.748   17:12:39	-- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']'
00:28:46.748   17:12:39	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:28:46.748   17:12:39	-- common/autotest_common.sh@10 -- # set +x
00:28:46.748  ************************************
00:28:46.748  START TEST bdev_verify_big_io
00:28:46.748  ************************************
00:28:46.748   17:12:39	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:28:47.006  [2024-11-19 17:12:39.617874] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:28:47.006  [2024-11-19 17:12:39.618303] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid150600 ]
00:28:47.006  [2024-11-19 17:12:39.788234] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2
00:28:47.264  [2024-11-19 17:12:39.876133] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:28:47.264  [2024-11-19 17:12:39.876147] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:28:47.522  Running I/O for 5 seconds...
00:28:52.794  
00:28:52.794                                                                                                  Latency(us)
00:28:52.794  
[2024-11-19T17:12:45.658Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:28:52.794  
[2024-11-19T17:12:45.658Z]  Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:28:52.794  	 Verification LBA range: start 0x0 length 0x200
00:28:52.794  	 raid5f              :       5.11     643.75      40.23       0.00     0.00 5172011.06     146.29  205720.62
00:28:52.794  
[2024-11-19T17:12:45.658Z]  Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:28:52.794  	 Verification LBA range: start 0x200 length 0x200
00:28:52.794  	 raid5f              :       5.12     621.85      38.87       0.00     0.00 5341111.28     140.43  213709.78
00:28:52.794  
[2024-11-19T17:12:45.658Z]  ===================================================================================================================
00:28:52.794  
[2024-11-19T17:12:45.658Z]  Total                       :               1265.60      79.10       0.00     0.00 5255163.97     140.43  213709.78
00:28:53.052  
00:28:53.052  real	0m6.193s
00:28:53.052  user	0m11.366s
00:28:53.052  sys	0m0.356s
00:28:53.052   17:12:45	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:28:53.052   17:12:45	-- common/autotest_common.sh@10 -- # set +x
00:28:53.052  ************************************
00:28:53.052  END TEST bdev_verify_big_io
00:28:53.052  ************************************
00:28:53.052   17:12:45	-- bdev/blockdev.sh@777 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:28:53.052   17:12:45	-- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']'
00:28:53.052   17:12:45	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:28:53.052   17:12:45	-- common/autotest_common.sh@10 -- # set +x
00:28:53.052  ************************************
00:28:53.052  START TEST bdev_write_zeroes
00:28:53.052  ************************************
00:28:53.052   17:12:45	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:28:53.052  [2024-11-19 17:12:45.848381] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:28:53.052  [2024-11-19 17:12:45.848552] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid150696 ]
00:28:53.310  [2024-11-19 17:12:45.992004] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:53.310  [2024-11-19 17:12:46.076106] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:28:53.568  Running I/O for 1 seconds...
00:28:54.501  
00:28:54.501                                                                                                  Latency(us)
00:28:54.501  
[2024-11-19T17:12:47.365Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:28:54.501  
[2024-11-19T17:12:47.365Z]  Job: raid5f (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:28:54.501  	 raid5f              :       1.00   27279.85     106.56       0.00     0.00    4677.91    1380.94    6553.60
00:28:54.501  
[2024-11-19T17:12:47.365Z]  ===================================================================================================================
00:28:54.501  
[2024-11-19T17:12:47.365Z]  Total                       :              27279.85     106.56       0.00     0.00    4677.91    1380.94    6553.60
00:28:55.067  
00:28:55.067  real	0m2.000s
00:28:55.067  user	0m1.600s
00:28:55.067  sys	0m0.286s
00:28:55.067   17:12:47	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:28:55.067   17:12:47	-- common/autotest_common.sh@10 -- # set +x
00:28:55.067  ************************************
00:28:55.067  END TEST bdev_write_zeroes
00:28:55.067  ************************************
00:28:55.067   17:12:47	-- bdev/blockdev.sh@780 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:28:55.067   17:12:47	-- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']'
00:28:55.067   17:12:47	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:28:55.067   17:12:47	-- common/autotest_common.sh@10 -- # set +x
00:28:55.067  ************************************
00:28:55.067  START TEST bdev_json_nonenclosed
00:28:55.067  ************************************
00:28:55.067   17:12:47	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:28:55.325  [2024-11-19 17:12:47.922366] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:28:55.325  [2024-11-19 17:12:47.922560] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid150740 ]
00:28:55.325  [2024-11-19 17:12:48.064381] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:55.325  [2024-11-19 17:12:48.133645] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:28:55.325  [2024-11-19 17:12:48.133911] json_config.c: 595:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: not enclosed in {}.
00:28:55.325  [2024-11-19 17:12:48.133955] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
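bdev_json_nonenclosed is deliberately a failure test: the config file holds valid JSON fragments that are not enclosed in a top-level {}, and the test passes only if initialization rejects them, which is what the *ERROR* line and the non-zero spdk_app_stop above show. A sketch of the intent, not the suite's exact run_test wrapper:

  # Expect rejection: a clean run here would itself be the bug.
  # ($SPDK stands in for the repo root; not a name from the trace.)
  if $SPDK/build/examples/bdevperf --json nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1; then
      echo 'malformed config was accepted' >&2
      exit 1
  fi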
00:28:55.583  
00:28:55.583  real	0m0.456s
00:28:55.583  user	0m0.232s
00:28:55.583  sys	0m0.124s
00:28:55.583   17:12:48	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:28:55.583   17:12:48	-- common/autotest_common.sh@10 -- # set +x
00:28:55.583  ************************************
00:28:55.583  END TEST bdev_json_nonenclosed
00:28:55.583  ************************************
00:28:55.583   17:12:48	-- bdev/blockdev.sh@783 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:28:55.583   17:12:48	-- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']'
00:28:55.583   17:12:48	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:28:55.583   17:12:48	-- common/autotest_common.sh@10 -- # set +x
00:28:55.583  ************************************
00:28:55.583  START TEST bdev_json_nonarray
00:28:55.583  ************************************
00:28:55.583   17:12:48	-- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:28:55.841  [2024-11-19 17:12:48.458597] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:28:55.841  [2024-11-19 17:12:48.458844] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid150779 ]
00:28:55.841  [2024-11-19 17:12:48.618461] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:56.099  [2024-11-19 17:12:48.695536] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:28:56.099  [2024-11-19 17:12:48.695871] json_config.c: 601:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array.
00:28:56.099  [2024-11-19 17:12:48.695939] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:28:56.099  
00:28:56.099  real	0m0.519s
00:28:56.099  user	0m0.234s
00:28:56.099  sys	0m0.185s
00:28:56.099   17:12:48	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:28:56.099   17:12:48	-- common/autotest_common.sh@10 -- # set +x
00:28:56.099  ************************************
00:28:56.099  END TEST bdev_json_nonarray
00:28:56.099  ************************************
00:28:56.357   17:12:48	-- bdev/blockdev.sh@785 -- # [[ raid5f == bdev ]]
00:28:56.357   17:12:48	-- bdev/blockdev.sh@792 -- # [[ raid5f == gpt ]]
00:28:56.357   17:12:48	-- bdev/blockdev.sh@796 -- # [[ raid5f == crypto_sw ]]
00:28:56.357   17:12:48	-- bdev/blockdev.sh@808 -- # trap - SIGINT SIGTERM EXIT
00:28:56.357   17:12:48	-- bdev/blockdev.sh@809 -- # cleanup
00:28:56.357   17:12:48	-- bdev/blockdev.sh@21 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile
00:28:56.357   17:12:48	-- bdev/blockdev.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
00:28:56.357   17:12:48	-- bdev/blockdev.sh@24 -- # [[ raid5f == rbd ]]
00:28:56.357   17:12:48	-- bdev/blockdev.sh@28 -- # [[ raid5f == daos ]]
00:28:56.357   17:12:48	-- bdev/blockdev.sh@32 -- # [[ raid5f = \g\p\t ]]
00:28:56.357   17:12:48	-- bdev/blockdev.sh@38 -- # [[ raid5f == xnvme ]]
00:28:56.357  
00:28:56.357  real	0m36.327s
00:28:56.357  user	0m50.495s
00:28:56.357  sys	0m4.703s
00:28:56.357   17:12:48	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:28:56.357   17:12:48	-- common/autotest_common.sh@10 -- # set +x
00:28:56.357  ************************************
00:28:56.357  END TEST blockdev_raid5f
00:28:56.357  ************************************
00:28:56.357   17:12:49	-- spdk/autotest.sh@370 -- # trap - SIGINT SIGTERM EXIT
00:28:56.357   17:12:49	-- spdk/autotest.sh@372 -- # timing_enter post_cleanup
00:28:56.357   17:12:49	-- common/autotest_common.sh@722 -- # xtrace_disable
00:28:56.357   17:12:49	-- common/autotest_common.sh@10 -- # set +x
00:28:56.357   17:12:49	-- spdk/autotest.sh@373 -- # autotest_cleanup
00:28:56.357   17:12:49	-- common/autotest_common.sh@1381 -- # local autotest_es=0
00:28:56.357   17:12:49	-- common/autotest_common.sh@1382 -- # xtrace_disable
00:28:56.357   17:12:49	-- common/autotest_common.sh@10 -- # set +x
00:28:58.254  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev
00:28:58.254  Waiting for block devices as requested
00:28:58.511  0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme
00:28:59.076  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev
00:28:59.076  Cleaning
00:28:59.076  Removing:    /var/run/dpdk/spdk0/config
00:28:59.076  Removing:    /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0
00:28:59.076  Removing:    /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1
00:28:59.076  Removing:    /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2
00:28:59.076  Removing:    /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3
00:28:59.076  Removing:    /var/run/dpdk/spdk0/fbarray_memzone
00:28:59.076  Removing:    /var/run/dpdk/spdk0/hugepage_info
00:28:59.076  Removing:    /dev/shm/spdk_tgt_trace.pid114813
00:28:59.076  Removing:    /var/run/dpdk/spdk0
00:28:59.076  Removing:    /var/run/dpdk/spdk_pid114625
00:28:59.076  Removing:    /var/run/dpdk/spdk_pid114813
00:28:59.076  Removing:    /var/run/dpdk/spdk_pid115112
00:28:59.076  Removing:    /var/run/dpdk/spdk_pid115357
00:28:59.076  Removing:    /var/run/dpdk/spdk_pid115530
00:28:59.076  Removing:    /var/run/dpdk/spdk_pid115617
00:28:59.076  Removing:    /var/run/dpdk/spdk_pid115714
00:28:59.076  Removing:    /var/run/dpdk/spdk_pid115806
00:28:59.076  Removing:    /var/run/dpdk/spdk_pid115902
00:28:59.076  Removing:    /var/run/dpdk/spdk_pid115950
00:28:59.076  Removing:    /var/run/dpdk/spdk_pid115995
00:28:59.076  Removing:    /var/run/dpdk/spdk_pid116074
00:28:59.076  Removing:    /var/run/dpdk/spdk_pid116190
00:28:59.076  Removing:    /var/run/dpdk/spdk_pid116708
00:28:59.076  Removing:    /var/run/dpdk/spdk_pid116762
00:28:59.076  Removing:    /var/run/dpdk/spdk_pid116817
00:28:59.076  Removing:    /var/run/dpdk/spdk_pid116838
00:28:59.076  Removing:    /var/run/dpdk/spdk_pid116914
00:28:59.076  Removing:    /var/run/dpdk/spdk_pid116935
00:28:59.076  Removing:    /var/run/dpdk/spdk_pid117009
00:28:59.076  Removing:    /var/run/dpdk/spdk_pid117030
00:28:59.076  Removing:    /var/run/dpdk/spdk_pid117075
00:28:59.076  Removing:    /var/run/dpdk/spdk_pid117098
00:28:59.076  Removing:    /var/run/dpdk/spdk_pid117152
00:28:59.076  Removing:    /var/run/dpdk/spdk_pid117170
00:28:59.076  Removing:    /var/run/dpdk/spdk_pid117327
00:28:59.076  Removing:    /var/run/dpdk/spdk_pid117372
00:28:59.076  Removing:    /var/run/dpdk/spdk_pid117408
00:28:59.076  Removing:    /var/run/dpdk/spdk_pid117501
00:28:59.076  Removing:    /var/run/dpdk/spdk_pid117566
00:28:59.076  Removing:    /var/run/dpdk/spdk_pid117596
00:28:59.076  Removing:    /var/run/dpdk/spdk_pid117670
00:28:59.076  Removing:    /var/run/dpdk/spdk_pid117705
00:28:59.076  Removing:    /var/run/dpdk/spdk_pid117738
00:28:59.076  Removing:    /var/run/dpdk/spdk_pid117773
00:28:59.076  Removing:    /var/run/dpdk/spdk_pid117806
00:28:59.076  Removing:    /var/run/dpdk/spdk_pid117842
00:28:59.076  Removing:    /var/run/dpdk/spdk_pid117883
00:28:59.076  Removing:    /var/run/dpdk/spdk_pid117911
00:28:59.334  Removing:    /var/run/dpdk/spdk_pid117951
00:28:59.334  Removing:    /var/run/dpdk/spdk_pid117975
00:28:59.334  Removing:    /var/run/dpdk/spdk_pid118019
00:28:59.334  Removing:    /var/run/dpdk/spdk_pid118044
00:28:59.334  Removing:    /var/run/dpdk/spdk_pid118089
00:28:59.334  Removing:    /var/run/dpdk/spdk_pid118112
00:28:59.334  Removing:    /var/run/dpdk/spdk_pid118157
00:28:59.334  Removing:    /var/run/dpdk/spdk_pid118187
00:28:59.334  Removing:    /var/run/dpdk/spdk_pid118225
00:28:59.334  Removing:    /var/run/dpdk/spdk_pid118255
00:28:59.334  Removing:    /var/run/dpdk/spdk_pid118302
00:28:59.334  Removing:    /var/run/dpdk/spdk_pid118324
00:28:59.334  Removing:    /var/run/dpdk/spdk_pid118368
00:28:59.334  Removing:    /var/run/dpdk/spdk_pid118402
00:28:59.334  Removing:    /var/run/dpdk/spdk_pid118436
00:28:59.334  Removing:    /var/run/dpdk/spdk_pid118466
00:28:59.334  Removing:    /var/run/dpdk/spdk_pid118506
00:28:59.334  Removing:    /var/run/dpdk/spdk_pid118536
00:28:59.334  Removing:    /var/run/dpdk/spdk_pid118575
00:28:59.334  Removing:    /var/run/dpdk/spdk_pid118604
00:28:59.334  Removing:    /var/run/dpdk/spdk_pid118637
00:28:59.334  Removing:    /var/run/dpdk/spdk_pid118672
00:28:59.334  Removing:    /var/run/dpdk/spdk_pid118705
00:28:59.334  Removing:    /var/run/dpdk/spdk_pid118742
00:28:59.334  Removing:    /var/run/dpdk/spdk_pid118782
00:28:59.334  Removing:    /var/run/dpdk/spdk_pid118813
00:28:59.334  Removing:    /var/run/dpdk/spdk_pid118861
00:28:59.334  Removing:    /var/run/dpdk/spdk_pid118894
00:28:59.334  Removing:    /var/run/dpdk/spdk_pid118944
00:28:59.334  Removing:    /var/run/dpdk/spdk_pid118967
00:28:59.334  Removing:    /var/run/dpdk/spdk_pid119012
00:28:59.334  Removing:    /var/run/dpdk/spdk_pid119047
00:28:59.334  Removing:    /var/run/dpdk/spdk_pid119082
00:28:59.334  Removing:    /var/run/dpdk/spdk_pid119176
00:28:59.334  Removing:    /var/run/dpdk/spdk_pid119285
00:28:59.334  Removing:    /var/run/dpdk/spdk_pid119468
00:28:59.335  Removing:    /var/run/dpdk/spdk_pid119521
00:28:59.335  Removing:    /var/run/dpdk/spdk_pid119566
00:28:59.335  Removing:    /var/run/dpdk/spdk_pid120764
00:28:59.335  Removing:    /var/run/dpdk/spdk_pid120960
00:28:59.335  Removing:    /var/run/dpdk/spdk_pid121155
00:28:59.335  Removing:    /var/run/dpdk/spdk_pid121265
00:28:59.335  Removing:    /var/run/dpdk/spdk_pid121386
00:28:59.335  Removing:    /var/run/dpdk/spdk_pid121436
00:28:59.335  Removing:    /var/run/dpdk/spdk_pid121474
00:28:59.335  Removing:    /var/run/dpdk/spdk_pid121500
00:28:59.335  Removing:    /var/run/dpdk/spdk_pid121967
00:28:59.335  Removing:    /var/run/dpdk/spdk_pid122042
00:28:59.335  Removing:    /var/run/dpdk/spdk_pid122152
00:28:59.335  Removing:    /var/run/dpdk/spdk_pid122203
00:28:59.335  Removing:    /var/run/dpdk/spdk_pid123342
00:28:59.335  Removing:    /var/run/dpdk/spdk_pid124190
00:28:59.335  Removing:    /var/run/dpdk/spdk_pid125035
00:28:59.335  Removing:    /var/run/dpdk/spdk_pid126106
00:28:59.335  Removing:    /var/run/dpdk/spdk_pid127129
00:28:59.335  Removing:    /var/run/dpdk/spdk_pid128165
00:28:59.335  Removing:    /var/run/dpdk/spdk_pid129624
00:28:59.335  Removing:    /var/run/dpdk/spdk_pid130800
00:28:59.335  Removing:    /var/run/dpdk/spdk_pid132006
00:28:59.335  Removing:    /var/run/dpdk/spdk_pid132673
00:28:59.335  Removing:    /var/run/dpdk/spdk_pid133214
00:28:59.335  Removing:    /var/run/dpdk/spdk_pid133836
00:28:59.335  Removing:    /var/run/dpdk/spdk_pid134339
00:28:59.335  Removing:    /var/run/dpdk/spdk_pid134887
00:28:59.335  Removing:    /var/run/dpdk/spdk_pid135474
00:28:59.335  Removing:    /var/run/dpdk/spdk_pid136156
00:28:59.335  Removing:    /var/run/dpdk/spdk_pid136668
00:28:59.335  Removing:    /var/run/dpdk/spdk_pid138028
00:28:59.335  Removing:    /var/run/dpdk/spdk_pid138626
00:28:59.335  Removing:    /var/run/dpdk/spdk_pid139152
00:28:59.593  Removing:    /var/run/dpdk/spdk_pid140662
00:28:59.593  Removing:    /var/run/dpdk/spdk_pid141333
00:28:59.593  Removing:    /var/run/dpdk/spdk_pid141936
00:28:59.593  Removing:    /var/run/dpdk/spdk_pid142710
00:28:59.593  Removing:    /var/run/dpdk/spdk_pid142752
00:28:59.593  Removing:    /var/run/dpdk/spdk_pid142791
00:28:59.593  Removing:    /var/run/dpdk/spdk_pid142832
00:28:59.593  Removing:    /var/run/dpdk/spdk_pid142976
00:28:59.593  Removing:    /var/run/dpdk/spdk_pid143121
00:28:59.593  Removing:    /var/run/dpdk/spdk_pid143348
00:28:59.593  Removing:    /var/run/dpdk/spdk_pid143653
00:28:59.593  Removing:    /var/run/dpdk/spdk_pid143679
00:28:59.593  Removing:    /var/run/dpdk/spdk_pid143718
00:28:59.593  Removing:    /var/run/dpdk/spdk_pid143736
00:28:59.593  Removing:    /var/run/dpdk/spdk_pid143750
00:28:59.593  Removing:    /var/run/dpdk/spdk_pid143777
00:28:59.593  Removing:    /var/run/dpdk/spdk_pid143785
00:28:59.593  Removing:    /var/run/dpdk/spdk_pid143806
00:28:59.593  Removing:    /var/run/dpdk/spdk_pid143826
00:28:59.593  Removing:    /var/run/dpdk/spdk_pid143841
00:28:59.593  Removing:    /var/run/dpdk/spdk_pid143855
00:28:59.593  Removing:    /var/run/dpdk/spdk_pid143882
00:28:59.593  Removing:    /var/run/dpdk/spdk_pid143894
00:28:59.593  Removing:    /var/run/dpdk/spdk_pid143913
00:28:59.593  Removing:    /var/run/dpdk/spdk_pid143933
00:28:59.593  Removing:    /var/run/dpdk/spdk_pid143952
00:28:59.593  Removing:    /var/run/dpdk/spdk_pid143962
00:28:59.593  Removing:    /var/run/dpdk/spdk_pid143989
00:28:59.593  Removing:    /var/run/dpdk/spdk_pid144002
00:28:59.593  Removing:    /var/run/dpdk/spdk_pid144018
00:28:59.593  Removing:    /var/run/dpdk/spdk_pid144058
00:28:59.593  Removing:    /var/run/dpdk/spdk_pid144066
00:28:59.593  Removing:    /var/run/dpdk/spdk_pid144105
00:28:59.593  Removing:    /var/run/dpdk/spdk_pid144184
00:28:59.593  Removing:    /var/run/dpdk/spdk_pid144218
00:28:59.593  Removing:    /var/run/dpdk/spdk_pid144230
00:28:59.593  Removing:    /var/run/dpdk/spdk_pid144267
00:28:59.593  Removing:    /var/run/dpdk/spdk_pid144279
00:28:59.593  Removing:    /var/run/dpdk/spdk_pid144291
00:28:59.593  Removing:    /var/run/dpdk/spdk_pid144347
00:28:59.593  Removing:    /var/run/dpdk/spdk_pid144355
00:28:59.593  Removing:    /var/run/dpdk/spdk_pid144391
00:28:59.593  Removing:    /var/run/dpdk/spdk_pid144405
00:28:59.593  Removing:    /var/run/dpdk/spdk_pid144417
00:28:59.593  Removing:    /var/run/dpdk/spdk_pid144429
00:28:59.593  Removing:    /var/run/dpdk/spdk_pid144439
00:28:59.593  Removing:    /var/run/dpdk/spdk_pid144451
00:28:59.593  Removing:    /var/run/dpdk/spdk_pid144468
00:28:59.593  Removing:    /var/run/dpdk/spdk_pid144473
00:28:59.593  Removing:    /var/run/dpdk/spdk_pid144511
00:28:59.593  Removing:    /var/run/dpdk/spdk_pid144546
00:28:59.593  Removing:    /var/run/dpdk/spdk_pid144560
00:28:59.593  Removing:    /var/run/dpdk/spdk_pid144595
00:28:59.593  Removing:    /var/run/dpdk/spdk_pid144607
00:28:59.593  Removing:    /var/run/dpdk/spdk_pid144619
00:28:59.593  Removing:    /var/run/dpdk/spdk_pid144676
00:28:59.593  Removing:    /var/run/dpdk/spdk_pid144683
00:28:59.593  Removing:    /var/run/dpdk/spdk_pid144719
00:28:59.593  Removing:    /var/run/dpdk/spdk_pid144724
00:28:59.593  Removing:    /var/run/dpdk/spdk_pid144745
00:28:59.593  Removing:    /var/run/dpdk/spdk_pid144757
00:28:59.593  Removing:    /var/run/dpdk/spdk_pid144767
00:28:59.593  Removing:    /var/run/dpdk/spdk_pid144779
00:28:59.593  Removing:    /var/run/dpdk/spdk_pid144789
00:28:59.593  Removing:    /var/run/dpdk/spdk_pid144801
00:28:59.593  Removing:    /var/run/dpdk/spdk_pid144894
00:28:59.593  Removing:    /var/run/dpdk/spdk_pid144944
00:28:59.593  Removing:    /var/run/dpdk/spdk_pid145064
00:28:59.593  Removing:    /var/run/dpdk/spdk_pid145080
00:28:59.593  Removing:    /var/run/dpdk/spdk_pid145118
00:28:59.593  Removing:    /var/run/dpdk/spdk_pid145167
00:28:59.852  Removing:    /var/run/dpdk/spdk_pid145193
00:28:59.852  Removing:    /var/run/dpdk/spdk_pid145215
00:28:59.852  Removing:    /var/run/dpdk/spdk_pid145237
00:28:59.852  Removing:    /var/run/dpdk/spdk_pid145273
00:28:59.852  Removing:    /var/run/dpdk/spdk_pid145291
00:28:59.852  Removing:    /var/run/dpdk/spdk_pid145373
00:28:59.852  Removing:    /var/run/dpdk/spdk_pid145426
00:28:59.852  Removing:    /var/run/dpdk/spdk_pid145455
00:28:59.852  Removing:    /var/run/dpdk/spdk_pid145732
00:28:59.852  Removing:    /var/run/dpdk/spdk_pid145843
00:28:59.852  Removing:    /var/run/dpdk/spdk_pid145881
00:28:59.852  Removing:    /var/run/dpdk/spdk_pid145972
00:28:59.852  Removing:    /var/run/dpdk/spdk_pid146044
00:28:59.852  Removing:    /var/run/dpdk/spdk_pid146075
00:28:59.852  Removing:    /var/run/dpdk/spdk_pid146315
00:28:59.852  Removing:    /var/run/dpdk/spdk_pid146442
00:28:59.852  Removing:    /var/run/dpdk/spdk_pid146538
00:28:59.852  Removing:    /var/run/dpdk/spdk_pid146588
00:28:59.852  Removing:    /var/run/dpdk/spdk_pid146610
00:28:59.852  Removing:    /var/run/dpdk/spdk_pid146702
00:28:59.852  Removing:    /var/run/dpdk/spdk_pid147115
00:28:59.852  Removing:    /var/run/dpdk/spdk_pid147140
00:28:59.852  Removing:    /var/run/dpdk/spdk_pid147434
00:28:59.852  Removing:    /var/run/dpdk/spdk_pid147537
00:28:59.852  Removing:    /var/run/dpdk/spdk_pid147633
00:28:59.852  Removing:    /var/run/dpdk/spdk_pid147678
00:28:59.852  Removing:    /var/run/dpdk/spdk_pid147708
00:28:59.852  Removing:    /var/run/dpdk/spdk_pid147731
00:28:59.852  Removing:    /var/run/dpdk/spdk_pid149068
00:28:59.852  Removing:    /var/run/dpdk/spdk_pid149190
00:28:59.852  Removing:    /var/run/dpdk/spdk_pid149199
00:28:59.852  Removing:    /var/run/dpdk/spdk_pid149218
00:28:59.852  Removing:    /var/run/dpdk/spdk_pid149732
00:28:59.852  Removing:    /var/run/dpdk/spdk_pid149833
00:28:59.852  Removing:    /var/run/dpdk/spdk_pid149972
00:28:59.852  Removing:    /var/run/dpdk/spdk_pid150026
00:28:59.852  Removing:    /var/run/dpdk/spdk_pid150058
00:28:59.852  Removing:    /var/run/dpdk/spdk_pid150328
00:28:59.852  Removing:    /var/run/dpdk/spdk_pid150506
00:28:59.852  Removing:    /var/run/dpdk/spdk_pid150600
00:28:59.852  Removing:    /var/run/dpdk/spdk_pid150696
00:28:59.852  Removing:    /var/run/dpdk/spdk_pid150740
00:28:59.852  Removing:    /var/run/dpdk/spdk_pid150779
00:28:59.852  Clean
00:29:00.111  killing process with pid 104008
00:29:00.111  killing process with pid 104015
00:29:00.111   17:12:52	-- common/autotest_common.sh@1446 -- # return 0
00:29:00.111   17:12:52	-- spdk/autotest.sh@374 -- # timing_exit post_cleanup
00:29:00.111   17:12:52	-- common/autotest_common.sh@728 -- # xtrace_disable
00:29:00.111   17:12:52	-- common/autotest_common.sh@10 -- # set +x
00:29:00.111   17:12:52	-- spdk/autotest.sh@376 -- # timing_exit autotest
00:29:00.111   17:12:52	-- common/autotest_common.sh@728 -- # xtrace_disable
00:29:00.111   17:12:52	-- common/autotest_common.sh@10 -- # set +x
00:29:00.111   17:12:52	-- spdk/autotest.sh@377 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:29:00.111   17:12:52	-- spdk/autotest.sh@379 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]]
00:29:00.111   17:12:52	-- spdk/autotest.sh@379 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log
00:29:00.111   17:12:52	-- spdk/autotest.sh@381 -- # [[ y == y ]]
00:29:00.111    17:12:52	-- spdk/autotest.sh@383 -- # hostname
00:29:00.111   17:12:52	-- spdk/autotest.sh@383 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t ubuntu2204-cloud-1711172311-2200 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info
00:29:00.370  geninfo: WARNING: invalid characters removed from testname!
00:29:47.039   17:13:32	-- spdk/autotest.sh@384 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:29:47.039   17:13:38	-- spdk/autotest.sh@385 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:29:48.942   17:13:41	-- spdk/autotest.sh@389 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:29:51.476   17:13:44	-- spdk/autotest.sh@390 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:29:54.761   17:13:47	-- spdk/autotest.sh@391 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:29:57.390   17:13:49	-- spdk/autotest.sh@392 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
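Coverage post-processing is a merge-then-filter chain: combine the pre-test baseline capture with the test-run capture, then strip everything that should not count against SPDK, namely DPDK sources, system headers under /usr, and two example apps. Condensed, with the long --rc option block from the trace elided:

  out=/home/vagrant/spdk_repo/output                     # the trace's spdk/../output
  lcov -q -a "$out/cov_base.info" -a "$out/cov_test.info" -o "$out/cov_total.info"
  for pat in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
      # (the trace adds --ignore-errors unused on the '/usr/*' pass)
      lcov -q -r "$out/cov_total.info" "$pat" -o "$out/cov_total.info"
  done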
00:29:59.922   17:13:52	-- spdk/autotest.sh@393 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:29:59.922     17:13:52	-- common/autotest_common.sh@1689 -- $ [[ y == y ]]
00:29:59.922      17:13:52	-- common/autotest_common.sh@1690 -- $ lcov --version
00:29:59.922      17:13:52	-- common/autotest_common.sh@1690 -- $ awk '{print $NF}'
00:29:59.922     17:13:52	-- common/autotest_common.sh@1690 -- $ lt 1.15 2
00:29:59.922     17:13:52	-- scripts/common.sh@372 -- $ cmp_versions 1.15 '<' 2
00:29:59.922     17:13:52	-- scripts/common.sh@332 -- $ local ver1 ver1_l
00:29:59.922     17:13:52	-- scripts/common.sh@333 -- $ local ver2 ver2_l
00:29:59.922     17:13:52	-- scripts/common.sh@335 -- $ IFS=.-:
00:29:59.922     17:13:52	-- scripts/common.sh@335 -- $ read -ra ver1
00:29:59.922     17:13:52	-- scripts/common.sh@336 -- $ IFS=.-:
00:29:59.922     17:13:52	-- scripts/common.sh@336 -- $ read -ra ver2
00:29:59.922     17:13:52	-- scripts/common.sh@337 -- $ local 'op=<'
00:29:59.922     17:13:52	-- scripts/common.sh@339 -- $ ver1_l=2
00:29:59.922     17:13:52	-- scripts/common.sh@340 -- $ ver2_l=1
00:29:59.922     17:13:52	-- scripts/common.sh@342 -- $ local lt=0 gt=0 eq=0 v
00:29:59.922     17:13:52	-- scripts/common.sh@343 -- $ case "$op" in
00:29:59.922     17:13:52	-- scripts/common.sh@344 -- $ : 1
00:29:59.922     17:13:52	-- scripts/common.sh@363 -- $ (( v = 0 ))
00:29:59.922     17:13:52	-- scripts/common.sh@363 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:29:59.922      17:13:52	-- scripts/common.sh@364 -- $ decimal 1
00:29:59.922      17:13:52	-- scripts/common.sh@352 -- $ local d=1
00:29:59.922      17:13:52	-- scripts/common.sh@353 -- $ [[ 1 =~ ^[0-9]+$ ]]
00:29:59.922      17:13:52	-- scripts/common.sh@354 -- $ echo 1
00:29:59.922     17:13:52	-- scripts/common.sh@364 -- $ ver1[v]=1
00:29:59.922      17:13:52	-- scripts/common.sh@365 -- $ decimal 2
00:29:59.922      17:13:52	-- scripts/common.sh@352 -- $ local d=2
00:29:59.922      17:13:52	-- scripts/common.sh@353 -- $ [[ 2 =~ ^[0-9]+$ ]]
00:29:59.922      17:13:52	-- scripts/common.sh@354 -- $ echo 2
00:29:59.922     17:13:52	-- scripts/common.sh@365 -- $ ver2[v]=2
00:29:59.922     17:13:52	-- scripts/common.sh@366 -- $ (( ver1[v] > ver2[v] ))
00:29:59.922     17:13:52	-- scripts/common.sh@367 -- $ (( ver1[v] < ver2[v] ))
00:29:59.922     17:13:52	-- scripts/common.sh@367 -- $ return 0
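The block above is scripts/common.sh evaluating 'lt 1.15 2': cmp_versions splits both strings on the IFS set '.-:' and compares the components numerically, left to right. 1 < 2 at the first position, so it returns 0 (true), lcov is deemed older than 2.x, and the extra branch-coverage --rc options are kept. A stripped-down, hypothetical re-implementation of that comparison (assumes numeric components):

    version_lt() {                        # true (0) when $1 < $2
        local -a a b
        IFS='.-:' read -ra a <<< "$1"
        IFS='.-:' read -ra b <<< "$2"
        local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < n; i++ )); do
            local x=${a[i]:-0} y=${b[i]:-0}   # missing parts count as 0
            (( x < y )) && return 0
            (( x > y )) && return 1
        done
        return 1                              # equal is not less-than
    }

    version_lt "$(lcov --version | awk '{print $NF}')" 2 && echo "pre-2.0 lcov"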
00:29:59.922     17:13:52	-- common/autotest_common.sh@1691 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:29:59.922     17:13:52	-- common/autotest_common.sh@1703 -- $ export 'LCOV_OPTS=
00:29:59.922  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:29:59.922  		--rc genhtml_branch_coverage=1
00:29:59.922  		--rc genhtml_function_coverage=1
00:29:59.922  		--rc genhtml_legend=1
00:29:59.922  		--rc geninfo_all_blocks=1
00:29:59.922  		--rc geninfo_unexecuted_blocks=1
00:29:59.922  		
00:29:59.922  		'
00:29:59.922     17:13:52	-- common/autotest_common.sh@1703 -- $ LCOV_OPTS='
00:29:59.922  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:29:59.922  		--rc genhtml_branch_coverage=1
00:29:59.922  		--rc genhtml_function_coverage=1
00:29:59.922  		--rc genhtml_legend=1
00:29:59.922  		--rc geninfo_all_blocks=1
00:29:59.922  		--rc geninfo_unexecuted_blocks=1
00:29:59.922  		
00:29:59.922  		'
00:29:59.922     17:13:52	-- common/autotest_common.sh@1704 -- $ export 'LCOV=lcov 
00:29:59.922  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:29:59.922  		--rc genhtml_branch_coverage=1
00:29:59.922  		--rc genhtml_function_coverage=1
00:29:59.922  		--rc genhtml_legend=1
00:29:59.922  		--rc geninfo_all_blocks=1
00:29:59.922  		--rc geninfo_unexecuted_blocks=1
00:29:59.922  		
00:29:59.922  		'
00:29:59.922     17:13:52	-- common/autotest_common.sh@1704 -- $ LCOV='lcov 
00:29:59.922  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:29:59.922  		--rc genhtml_branch_coverage=1
00:29:59.922  		--rc genhtml_function_coverage=1
00:29:59.922  		--rc genhtml_legend=1
00:29:59.922  		--rc geninfo_all_blocks=1
00:29:59.922  		--rc geninfo_unexecuted_blocks=1
00:29:59.922  		
00:29:59.922  		'
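With the version check satisfied, the common helpers export LCOV_OPTS and a pre-flagged LCOV command so later scripts can reuse the long --rc list instead of restating it the way the capture commands above had to. Illustrative consumption only, not a verbatim call site from this run:

    # Run the pre-configured command; $LCOV intentionally word-splits
    # into 'lcov' plus its --rc options.
    $LCOV -q -c --no-external -d "$SRC" -o "$OUT/cov_test.info"

    # Or splice only the options into an explicit invocation:
    lcov $LCOV_OPTS -q -a base.info -a test.info -o total.info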
00:29:59.922    17:13:52	-- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:29:59.922     17:13:52	-- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]]
00:29:59.922     17:13:52	-- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:29:59.922     17:13:52	-- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:29:59.922      17:13:52	-- paths/export.sh@2 -- $ PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:29:59.922      17:13:52	-- paths/export.sh@3 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:29:59.922      17:13:52	-- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:29:59.922      17:13:52	-- paths/export.sh@5 -- $ export PATH
00:29:59.922      17:13:52	-- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
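Each line of paths/export.sh prepends one toolchain directory to PATH unconditionally, so re-sourcing it stacks duplicates, as the growing PATH values above show; the same unconditional prepend against an empty variable is what leaves the leading ':' in LD_LIBRARY_PATH further down. For comparison, a duplicate-safe prepend (hypothetical; export.sh itself does not guard):

    path_prepend() {
        case ":$PATH:" in
            *":$1:"*) ;;                       # already present, skip
            *) PATH="$1${PATH:+:$PATH}" ;;     # no stray ':' when PATH is empty
        esac
    }
    path_prepend /opt/go/1.21.1/bin
    path_prepend /opt/protoc/21.7/bin
    export PATH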
00:29:59.922    17:13:52	-- common/autobuild_common.sh@439 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:29:59.922      17:13:52	-- common/autobuild_common.sh@440 -- $ date +%s
00:29:59.922     17:13:52	-- common/autobuild_common.sh@440 -- $ mktemp -dt spdk_1732036432.XXXXXX
00:29:59.922    17:13:52	-- common/autobuild_common.sh@440 -- $ SPDK_WORKSPACE=/tmp/spdk_1732036432.xBWBZn
00:29:59.922    17:13:52	-- common/autobuild_common.sh@442 -- $ [[ -n '' ]]
00:29:59.922    17:13:52	-- common/autobuild_common.sh@446 -- $ '[' -n v22.11.4 ']'
00:29:59.922     17:13:52	-- common/autobuild_common.sh@447 -- $ dirname /home/vagrant/spdk_repo/dpdk/build
00:29:59.922    17:13:52	-- common/autobuild_common.sh@447 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk'
00:29:59.922    17:13:52	-- common/autobuild_common.sh@453 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:29:59.922    17:13:52	-- common/autobuild_common.sh@455 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp  --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
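The scanbuild string assembled here wraps the build in clang's static analyzer with the DPDK and xnvme trees excluded; it goes unused on this run because the compiler check below ([[ '' == *clang* ]]) fails on a gcc job. When it does fire, the invocation has roughly this shape (a sketch assembled from the assignment above, with make as the wrapped command):

    scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp \
        --exclude /home/vagrant/spdk_repo/dpdk \
        --exclude /home/vagrant/spdk_repo/spdk/xnvme \
        --exclude /tmp \
        --status-bugs \
        make -j10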
00:29:59.922     17:13:52	-- common/autobuild_common.sh@456 -- $ get_config_params
00:29:59.922     17:13:52	-- common/autotest_common.sh@397 -- $ xtrace_disable
00:29:59.922     17:13:52	-- common/autotest_common.sh@10 -- $ set +x
00:29:59.922    17:13:52	-- common/autobuild_common.sh@456 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --enable-ubsan --enable-asan --enable-coverage --with-raid5f --with-dpdk=/home/vagrant/spdk_repo/dpdk/build'
00:29:59.922   17:13:52	-- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10
00:29:59.922   17:13:52	-- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk
00:29:59.922   17:13:52	-- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]]
00:29:59.922   17:13:52	-- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]]
00:29:59.922   17:13:52	-- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]]
00:29:59.922   17:13:52	-- spdk/autopackage.sh@23 -- $ timing_enter build_release
00:29:59.922   17:13:52	-- common/autotest_common.sh@722 -- $ xtrace_disable
00:29:59.922   17:13:52	-- common/autotest_common.sh@10 -- $ set +x
00:29:59.922   17:13:52	-- spdk/autopackage.sh@26 -- $ [[ '' == *clang* ]]
00:29:59.922   17:13:52	-- spdk/autopackage.sh@36 -- $ [[ -n v22.11.4 ]]
00:29:59.922   17:13:52	-- spdk/autopackage.sh@36 -- $ [[ -e /tmp/spdk-ld-path ]]
00:29:59.922   17:13:52	-- spdk/autopackage.sh@37 -- $ source /tmp/spdk-ld-path
00:29:59.922    17:13:52	-- tmp/spdk-ld-path@1 -- $ export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib
00:29:59.922    17:13:52	-- tmp/spdk-ld-path@1 -- $ LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib
00:29:59.922    17:13:52	-- tmp/spdk-ld-path@2 -- $ export PKG_CONFIG_PATH=
00:29:59.923    17:13:52	-- tmp/spdk-ld-path@2 -- $ PKG_CONFIG_PATH=
00:29:59.923    17:13:52	-- spdk/autopackage.sh@40 -- $ get_config_params
00:29:59.923    17:13:52	-- spdk/autopackage.sh@40 -- $ sed s/--enable-debug//g
00:29:59.923    17:13:52	-- common/autotest_common.sh@397 -- $ xtrace_disable
00:29:59.923    17:13:52	-- common/autotest_common.sh@10 -- $ set +x
00:29:59.923   17:13:52	-- spdk/autopackage.sh@40 -- $ config_params=' --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --enable-ubsan --enable-asan --enable-coverage --with-raid5f --with-dpdk=/home/vagrant/spdk_repo/dpdk/build'
00:29:59.923   17:13:52	-- spdk/autopackage.sh@41 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --enable-ubsan --enable-asan --enable-coverage --with-raid5f --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --enable-lto
00:30:00.181  Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs...
00:30:00.181  DPDK libraries: /home/vagrant/spdk_repo/dpdk/build/lib
00:30:00.181  DPDK includes: /home/vagrant/spdk_repo/dpdk/build/include
00:30:00.181  Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:30:00.439  Using 'verbs' RDMA provider
00:30:15.887  Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/isa-l/spdk-isal.log)...done.
00:30:28.133  Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/isa-l-crypto/spdk-isal-crypto.log)...done.
00:30:28.133  Creating mk/config.mk...done.
00:30:28.133  Creating mk/cc.flags.mk...done.
00:30:28.133  Type 'make' to build.
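autopackage.sh rebuilds the tree in release mode: sh@40 strips --enable-debug out of the original configure flags with sed, sh@41 re-runs configure with --enable-lto appended, and the configure output above is the result. Condensed, the step is:

    # get_config_params comes from autotest_common.sh (sourced earlier in this log)
    config_params=$(get_config_params | sed 's/--enable-debug//g')
    ./configure $config_params --enable-lto   # unquoted on purpose: flags must word-split
    make -j10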
00:30:28.133   17:14:20	-- spdk/autopackage.sh@43 -- $ make -j10
00:30:28.133  make[1]: Nothing to be done for 'all'.
00:30:28.133    CC lib/ut_mock/mock.o
00:30:28.133    CC lib/ut/ut.o
00:30:28.133    CC lib/log/log.o
00:30:28.133    CC lib/log/log_deprecated.o
00:30:28.133    CC lib/log/log_flags.o
00:30:28.133    LIB libspdk_ut_mock.a
00:30:28.133    LIB libspdk_log.a
00:30:28.133    LIB libspdk_ut.a
00:30:28.393    CC lib/ioat/ioat.o
00:30:28.393    CC lib/dma/dma.o
00:30:28.393    CXX lib/trace_parser/trace.o
00:30:28.393    CC lib/util/base64.o
00:30:28.393    CC lib/util/cpuset.o
00:30:28.393    CC lib/util/bit_array.o
00:30:28.393    CC lib/util/crc16.o
00:30:28.393    CC lib/util/crc32.o
00:30:28.393    CC lib/util/crc32c.o
00:30:28.393    CC lib/vfio_user/host/vfio_user_pci.o
00:30:28.393    CC lib/vfio_user/host/vfio_user.o
00:30:28.393    CC lib/util/crc32_ieee.o
00:30:28.393    CC lib/util/crc64.o
00:30:28.393    LIB libspdk_dma.a
00:30:28.393    CC lib/util/dif.o
00:30:28.651    CC lib/util/fd.o
00:30:28.651    CC lib/util/file.o
00:30:28.651    CC lib/util/hexlify.o
00:30:28.651    LIB libspdk_ioat.a
00:30:28.651    CC lib/util/iov.o
00:30:28.651    CC lib/util/math.o
00:30:28.651    CC lib/util/pipe.o
00:30:28.651    CC lib/util/strerror_tls.o
00:30:28.651    CC lib/util/string.o
00:30:28.651    LIB libspdk_vfio_user.a
00:30:28.651    CC lib/util/uuid.o
00:30:28.651    CC lib/util/fd_group.o
00:30:28.651    CC lib/util/xor.o
00:30:28.651    CC lib/util/zipf.o
00:30:28.910    LIB libspdk_util.a
00:30:28.910    LIB libspdk_trace_parser.a
00:30:29.169    CC lib/json/json_parse.o
00:30:29.169    CC lib/json/json_util.o
00:30:29.169    CC lib/json/json_write.o
00:30:29.169    CC lib/conf/conf.o
00:30:29.169    CC lib/env_dpdk/env.o
00:30:29.169    CC lib/vmd/led.o
00:30:29.169    CC lib/env_dpdk/memory.o
00:30:29.169    CC lib/vmd/vmd.o
00:30:29.169    CC lib/rdma/common.o
00:30:29.169    CC lib/idxd/idxd.o
00:30:29.169    CC lib/rdma/rdma_verbs.o
00:30:29.169    CC lib/env_dpdk/pci.o
00:30:29.169    CC lib/idxd/idxd_user.o
00:30:29.169    LIB libspdk_json.a
00:30:29.169    CC lib/env_dpdk/init.o
00:30:29.169    CC lib/env_dpdk/threads.o
00:30:29.169    CC lib/env_dpdk/pci_ioat.o
00:30:29.169    LIB libspdk_conf.a
00:30:29.169    CC lib/env_dpdk/pci_virtio.o
00:30:29.428    LIB libspdk_rdma.a
00:30:29.428    LIB libspdk_vmd.a
00:30:29.428    CC lib/env_dpdk/pci_vmd.o
00:30:29.428    CC lib/env_dpdk/pci_idxd.o
00:30:29.428    LIB libspdk_idxd.a
00:30:29.428    CC lib/env_dpdk/pci_event.o
00:30:29.428    CC lib/env_dpdk/sigbus_handler.o
00:30:29.428    CC lib/env_dpdk/pci_dpdk.o
00:30:29.428    CC lib/env_dpdk/pci_dpdk_2207.o
00:30:29.428    CC lib/env_dpdk/pci_dpdk_2211.o
00:30:29.428    CC lib/jsonrpc/jsonrpc_server.o
00:30:29.428    CC lib/jsonrpc/jsonrpc_server_tcp.o
00:30:29.428    CC lib/jsonrpc/jsonrpc_client.o
00:30:29.428    CC lib/jsonrpc/jsonrpc_client_tcp.o
00:30:29.687    LIB libspdk_jsonrpc.a
00:30:29.687    LIB libspdk_env_dpdk.a
00:30:29.687    CC lib/rpc/rpc.o
00:30:29.946    LIB libspdk_rpc.a
00:30:30.205    CC lib/notify/notify.o
00:30:30.205    CC lib/notify/notify_rpc.o
00:30:30.205    CC lib/trace/trace.o
00:30:30.205    CC lib/sock/sock.o
00:30:30.205    CC lib/sock/sock_rpc.o
00:30:30.205    CC lib/trace/trace_flags.o
00:30:30.205    CC lib/trace/trace_rpc.o
00:30:30.205    LIB libspdk_notify.a
00:30:30.205    LIB libspdk_trace.a
00:30:30.464    LIB libspdk_sock.a
00:30:30.464    CC lib/thread/thread.o
00:30:30.464    CC lib/thread/iobuf.o
00:30:30.464    CC lib/nvme/nvme_ctrlr_cmd.o
00:30:30.464    CC lib/nvme/nvme_fabric.o
00:30:30.464    CC lib/nvme/nvme_ctrlr.o
00:30:30.464    CC lib/nvme/nvme_pcie_common.o
00:30:30.464    CC lib/nvme/nvme_ns_cmd.o
00:30:30.464    CC lib/nvme/nvme_ns.o
00:30:30.464    CC lib/nvme/nvme_pcie.o
00:30:30.464    CC lib/nvme/nvme_qpair.o
00:30:30.464    CC lib/nvme/nvme.o
00:30:31.032    LIB libspdk_thread.a
00:30:31.032    CC lib/nvme/nvme_quirks.o
00:30:31.032    CC lib/nvme/nvme_transport.o
00:30:31.032    CC lib/nvme/nvme_discovery.o
00:30:31.032    CC lib/nvme/nvme_ctrlr_ocssd_cmd.o
00:30:31.032    CC lib/nvme/nvme_ns_ocssd_cmd.o
00:30:31.291    CC lib/nvme/nvme_tcp.o
00:30:31.291    CC lib/nvme/nvme_opal.o
00:30:31.291    CC lib/nvme/nvme_io_msg.o
00:30:31.291    CC lib/nvme/nvme_poll_group.o
00:30:31.291    CC lib/nvme/nvme_zns.o
00:30:31.291    CC lib/nvme/nvme_cuse.o
00:30:31.550    CC lib/nvme/nvme_vfio_user.o
00:30:31.550    CC lib/nvme/nvme_rdma.o
00:30:31.550    CC lib/accel/accel.o
00:30:31.550    CC lib/blob/blobstore.o
00:30:31.550    CC lib/init/json_config.o
00:30:31.808    CC lib/blob/request.o
00:30:31.808    CC lib/blob/zeroes.o
00:30:31.808    CC lib/init/subsystem.o
00:30:31.808    CC lib/blob/blob_bs_dev.o
00:30:31.808    CC lib/init/subsystem_rpc.o
00:30:31.808    CC lib/accel/accel_rpc.o
00:30:31.808    CC lib/accel/accel_sw.o
00:30:31.808    CC lib/init/rpc.o
00:30:31.808    CC lib/virtio/virtio.o
00:30:31.808    CC lib/virtio/virtio_vhost_user.o
00:30:32.067    CC lib/virtio/virtio_vfio_user.o
00:30:32.067    CC lib/virtio/virtio_pci.o
00:30:32.067    LIB libspdk_init.a
00:30:32.067    LIB libspdk_accel.a
00:30:32.067    CC lib/event/app.o
00:30:32.067    CC lib/event/reactor.o
00:30:32.067    CC lib/event/app_rpc.o
00:30:32.067    CC lib/event/log_rpc.o
00:30:32.067    CC lib/event/scheduler_static.o
00:30:32.067    LIB libspdk_nvme.a
00:30:32.067    LIB libspdk_virtio.a
00:30:32.067    CC lib/bdev/bdev.o
00:30:32.067    CC lib/bdev/bdev_rpc.o
00:30:32.067    CC lib/bdev/bdev_zone.o
00:30:32.067    CC lib/bdev/part.o
00:30:32.326    CC lib/bdev/scsi_nvme.o
00:30:32.326    LIB libspdk_event.a
00:30:32.585    LIB libspdk_blob.a
00:30:32.844    CC lib/blobfs/blobfs.o
00:30:32.844    CC lib/blobfs/tree.o
00:30:32.844    CC lib/lvol/lvol.o
00:30:33.103    LIB libspdk_bdev.a
00:30:33.103    LIB libspdk_lvol.a
00:30:33.103    LIB libspdk_blobfs.a
00:30:33.361    CC lib/scsi/dev.o
00:30:33.361    CC lib/scsi/lun.o
00:30:33.361    CC lib/scsi/port.o
00:30:33.361    CC lib/scsi/scsi_bdev.o
00:30:33.361    CC lib/scsi/scsi.o
00:30:33.361    CC lib/scsi/scsi_pr.o
00:30:33.361    CC lib/scsi/scsi_rpc.o
00:30:33.361    CC lib/nbd/nbd.o
00:30:33.361    CC lib/ftl/ftl_core.o
00:30:33.361    CC lib/nvmf/ctrlr.o
00:30:33.361    CC lib/nvmf/ctrlr_discovery.o
00:30:33.361    CC lib/nvmf/ctrlr_bdev.o
00:30:33.361    CC lib/nvmf/subsystem.o
00:30:33.361    CC lib/nvmf/nvmf.o
00:30:33.361    CC lib/nvmf/nvmf_rpc.o
00:30:33.620    CC lib/nvmf/transport.o
00:30:33.620    CC lib/ftl/ftl_init.o
00:30:33.620    CC lib/scsi/task.o
00:30:33.620    CC lib/nvmf/tcp.o
00:30:33.620    CC lib/nbd/nbd_rpc.o
00:30:33.620    CC lib/nvmf/rdma.o
00:30:33.620    CC lib/ftl/ftl_layout.o
00:30:33.620    LIB libspdk_scsi.a
00:30:33.620    CC lib/ftl/ftl_debug.o
00:30:33.620    LIB libspdk_nbd.a
00:30:33.620    CC lib/ftl/ftl_io.o
00:30:33.620    CC lib/ftl/ftl_sb.o
00:30:33.878    CC lib/ftl/ftl_l2p.o
00:30:33.878    CC lib/ftl/ftl_l2p_flat.o
00:30:33.878    CC lib/ftl/ftl_nv_cache.o
00:30:33.878    CC lib/ftl/ftl_band.o
00:30:33.878    CC lib/ftl/ftl_band_ops.o
00:30:33.878    CC lib/ftl/ftl_writer.o
00:30:33.878    CC lib/ftl/ftl_rq.o
00:30:33.878    CC lib/ftl/ftl_reloc.o
00:30:33.878    CC lib/vhost/vhost.o
00:30:33.879    CC lib/iscsi/conn.o
00:30:34.137    CC lib/ftl/ftl_l2p_cache.o
00:30:34.137    CC lib/vhost/vhost_rpc.o
00:30:34.137    CC lib/vhost/vhost_scsi.o
00:30:34.137    CC lib/vhost/vhost_blk.o
00:30:34.137    CC lib/vhost/rte_vhost_user.o
00:30:34.137    CC lib/ftl/ftl_p2l.o
00:30:34.137    LIB libspdk_nvmf.a
00:30:34.137    CC lib/iscsi/init_grp.o
00:30:34.137    CC lib/iscsi/iscsi.o
00:30:34.396    CC lib/iscsi/md5.o
00:30:34.396    CC lib/iscsi/param.o
00:30:34.396    CC lib/ftl/mngt/ftl_mngt.o
00:30:34.396    CC lib/ftl/mngt/ftl_mngt_bdev.o
00:30:34.396    CC lib/ftl/mngt/ftl_mngt_shutdown.o
00:30:34.396    CC lib/iscsi/portal_grp.o
00:30:34.396    CC lib/iscsi/tgt_node.o
00:30:34.396    CC lib/iscsi/iscsi_subsystem.o
00:30:34.654    CC lib/iscsi/iscsi_rpc.o
00:30:34.654    CC lib/iscsi/task.o
00:30:34.654    CC lib/ftl/mngt/ftl_mngt_startup.o
00:30:34.654    CC lib/ftl/mngt/ftl_mngt_md.o
00:30:34.654    CC lib/ftl/mngt/ftl_mngt_misc.o
00:30:34.654    CC lib/ftl/mngt/ftl_mngt_ioch.o
00:30:34.654    CC lib/ftl/mngt/ftl_mngt_l2p.o
00:30:34.654    CC lib/ftl/mngt/ftl_mngt_band.o
00:30:34.654    LIB libspdk_vhost.a
00:30:34.654    CC lib/ftl/mngt/ftl_mngt_self_test.o
00:30:34.654    CC lib/ftl/mngt/ftl_mngt_p2l.o
00:30:34.654    CC lib/ftl/mngt/ftl_mngt_recovery.o
00:30:34.654    CC lib/ftl/mngt/ftl_mngt_upgrade.o
00:30:34.654    CC lib/ftl/utils/ftl_conf.o
00:30:34.914    CC lib/ftl/utils/ftl_md.o
00:30:34.914    CC lib/ftl/utils/ftl_mempool.o
00:30:34.914    CC lib/ftl/utils/ftl_bitmap.o
00:30:34.914    CC lib/ftl/utils/ftl_property.o
00:30:34.914    LIB libspdk_iscsi.a
00:30:34.914    CC lib/ftl/utils/ftl_layout_tracker_bdev.o
00:30:34.914    CC lib/ftl/upgrade/ftl_layout_upgrade.o
00:30:34.914    CC lib/ftl/upgrade/ftl_sb_upgrade.o
00:30:34.914    CC lib/ftl/upgrade/ftl_p2l_upgrade.o
00:30:34.914    CC lib/ftl/upgrade/ftl_band_upgrade.o
00:30:34.914    CC lib/ftl/upgrade/ftl_chunk_upgrade.o
00:30:34.914    CC lib/ftl/upgrade/ftl_sb_v3.o
00:30:34.914    CC lib/ftl/upgrade/ftl_sb_v5.o
00:30:34.914    CC lib/ftl/nvc/ftl_nvc_dev.o
00:30:34.914    CC lib/ftl/nvc/ftl_nvc_bdev_vss.o
00:30:35.190    CC lib/ftl/base/ftl_base_dev.o
00:30:35.190    CC lib/ftl/base/ftl_base_bdev.o
00:30:35.190    LIB libspdk_ftl.a
00:30:35.462    CC module/env_dpdk/env_dpdk_rpc.o
00:30:35.462    CC module/blob/bdev/blob_bdev.o
00:30:35.462    CC module/sock/posix/posix.o
00:30:35.462    CC module/scheduler/dpdk_governor/dpdk_governor.o
00:30:35.462    CC module/scheduler/dynamic/scheduler_dynamic.o
00:30:35.462    CC module/accel/error/accel_error.o
00:30:35.462    CC module/accel/ioat/accel_ioat.o
00:30:35.462    CC module/scheduler/gscheduler/gscheduler.o
00:30:35.462    CC module/accel/iaa/accel_iaa.o
00:30:35.462    CC module/accel/dsa/accel_dsa.o
00:30:35.462    LIB libspdk_env_dpdk_rpc.a
00:30:35.462    CC module/accel/iaa/accel_iaa_rpc.o
00:30:35.462    LIB libspdk_scheduler_gscheduler.a
00:30:35.462    LIB libspdk_scheduler_dpdk_governor.a
00:30:35.721    LIB libspdk_scheduler_dynamic.a
00:30:35.721    CC module/accel/error/accel_error_rpc.o
00:30:35.721    CC module/accel/dsa/accel_dsa_rpc.o
00:30:35.721    CC module/accel/ioat/accel_ioat_rpc.o
00:30:35.721    LIB libspdk_blob_bdev.a
00:30:35.721    LIB libspdk_accel_iaa.a
00:30:35.721    LIB libspdk_accel_error.a
00:30:35.721    LIB libspdk_accel_dsa.a
00:30:35.721    LIB libspdk_accel_ioat.a
00:30:35.721    CC module/bdev/error/vbdev_error.o
00:30:35.721    CC module/blobfs/bdev/blobfs_bdev.o
00:30:35.721    CC module/bdev/delay/vbdev_delay.o
00:30:35.721    CC module/bdev/gpt/gpt.o
00:30:35.721    CC module/bdev/malloc/bdev_malloc.o
00:30:35.721    CC module/bdev/lvol/vbdev_lvol.o
00:30:35.721    CC module/bdev/nvme/bdev_nvme.o
00:30:35.721    CC module/bdev/passthru/vbdev_passthru.o
00:30:35.721    CC module/bdev/null/bdev_null.o
00:30:35.721    LIB libspdk_sock_posix.a
00:30:35.980    CC module/bdev/null/bdev_null_rpc.o
00:30:35.980    CC module/bdev/gpt/vbdev_gpt.o
00:30:35.980    CC module/blobfs/bdev/blobfs_bdev_rpc.o
00:30:35.980    CC module/bdev/error/vbdev_error_rpc.o
00:30:35.980    CC module/bdev/malloc/bdev_malloc_rpc.o
00:30:35.980    CC module/bdev/delay/vbdev_delay_rpc.o
00:30:35.980    CC module/bdev/nvme/bdev_nvme_rpc.o
00:30:35.980    LIB libspdk_bdev_null.a
00:30:35.980    CC module/bdev/passthru/vbdev_passthru_rpc.o
00:30:35.980    LIB libspdk_blobfs_bdev.a
00:30:35.980    CC module/bdev/lvol/vbdev_lvol_rpc.o
00:30:35.980    LIB libspdk_bdev_error.a
00:30:35.980    LIB libspdk_bdev_gpt.a
00:30:35.980    CC module/bdev/raid/bdev_raid.o
00:30:35.980    LIB libspdk_bdev_malloc.a
00:30:35.980    LIB libspdk_bdev_delay.a
00:30:36.239    LIB libspdk_bdev_passthru.a
00:30:36.239    CC module/bdev/zone_block/vbdev_zone_block.o
00:30:36.239    CC module/bdev/split/vbdev_split.o
00:30:36.239    CC module/bdev/aio/bdev_aio.o
00:30:36.239    CC module/bdev/aio/bdev_aio_rpc.o
00:30:36.239    CC module/bdev/iscsi/bdev_iscsi.o
00:30:36.239    CC module/bdev/ftl/bdev_ftl.o
00:30:36.239    LIB libspdk_bdev_lvol.a
00:30:36.239    CC module/bdev/ftl/bdev_ftl_rpc.o
00:30:36.239    CC module/bdev/zone_block/vbdev_zone_block_rpc.o
00:30:36.239    CC module/bdev/split/vbdev_split_rpc.o
00:30:36.239    LIB libspdk_bdev_aio.a
00:30:36.239    CC module/bdev/nvme/nvme_rpc.o
00:30:36.239    CC module/bdev/nvme/bdev_mdns_client.o
00:30:36.506    CC module/bdev/nvme/vbdev_opal.o
00:30:36.506    LIB libspdk_bdev_ftl.a
00:30:36.506    CC module/bdev/iscsi/bdev_iscsi_rpc.o
00:30:36.506    LIB libspdk_bdev_zone_block.a
00:30:36.506    CC module/bdev/virtio/bdev_virtio_scsi.o
00:30:36.506    CC module/bdev/virtio/bdev_virtio_blk.o
00:30:36.506    CC module/bdev/virtio/bdev_virtio_rpc.o
00:30:36.506    CC module/bdev/raid/bdev_raid_rpc.o
00:30:36.506    LIB libspdk_bdev_split.a
00:30:36.506    CC module/bdev/raid/bdev_raid_sb.o
00:30:36.506    CC module/bdev/raid/raid0.o
00:30:36.506    CC module/bdev/nvme/vbdev_opal_rpc.o
00:30:36.506    LIB libspdk_bdev_iscsi.a
00:30:36.506    CC module/bdev/nvme/bdev_nvme_cuse_rpc.o
00:30:36.506    CC module/bdev/raid/raid1.o
00:30:36.506    CC module/bdev/raid/concat.o
00:30:36.506    CC module/bdev/raid/raid5f.o
00:30:36.771    LIB libspdk_bdev_nvme.a
00:30:36.771    LIB libspdk_bdev_virtio.a
00:30:36.771    LIB libspdk_bdev_raid.a
00:30:37.338    CC module/event/subsystems/vhost_blk/vhost_blk.o
00:30:37.338    CC module/event/subsystems/vmd/vmd.o
00:30:37.338    CC module/event/subsystems/vmd/vmd_rpc.o
00:30:37.338    CC module/event/subsystems/iobuf/iobuf_rpc.o
00:30:37.338    CC module/event/subsystems/iobuf/iobuf.o
00:30:37.338    CC module/event/subsystems/sock/sock.o
00:30:37.338    CC module/event/subsystems/scheduler/scheduler.o
00:30:37.338    LIB libspdk_event_vhost_blk.a
00:30:37.338    LIB libspdk_event_sock.a
00:30:37.338    LIB libspdk_event_iobuf.a
00:30:37.338    LIB libspdk_event_vmd.a
00:30:37.338    LIB libspdk_event_scheduler.a
00:30:37.338    CC module/event/subsystems/accel/accel.o
00:30:37.597    LIB libspdk_event_accel.a
00:30:37.855    CC module/event/subsystems/bdev/bdev.o
00:30:38.114    LIB libspdk_event_bdev.a
00:30:38.114    CC module/event/subsystems/nvmf/nvmf_tgt.o
00:30:38.114    CC module/event/subsystems/nvmf/nvmf_rpc.o
00:30:38.114    CC module/event/subsystems/scsi/scsi.o
00:30:38.114    CC module/event/subsystems/nbd/nbd.o
00:30:38.373    LIB libspdk_event_nbd.a
00:30:38.373    LIB libspdk_event_scsi.a
00:30:38.373    LIB libspdk_event_nvmf.a
00:30:38.373    CC module/event/subsystems/vhost_scsi/vhost_scsi.o
00:30:38.373    CC module/event/subsystems/iscsi/iscsi.o
00:30:38.631    LIB libspdk_event_vhost_scsi.a
00:30:38.631    LIB libspdk_event_iscsi.a
00:30:38.889    CXX app/trace/trace.o
00:30:38.889    CC app/trace_record/trace_record.o
00:30:38.889    CC app/iscsi_tgt/iscsi_tgt.o
00:30:38.889    CC app/nvmf_tgt/nvmf_main.o
00:30:38.889    CC examples/ioat/perf/perf.o
00:30:38.889    CC examples/accel/perf/accel_perf.o
00:30:38.889    CC app/spdk_tgt/spdk_tgt.o
00:30:38.889    CC test/accel/dif/dif.o
00:30:38.890    CC examples/blob/hello_world/hello_blob.o
00:30:38.890    CC examples/bdev/hello_world/hello_bdev.o
00:30:39.148    LINK spdk_trace_record
00:30:39.148    LINK nvmf_tgt
00:30:39.148    LINK ioat_perf
00:30:39.148    LINK spdk_tgt
00:30:39.148    LINK iscsi_tgt
00:30:39.148    LINK spdk_trace
00:30:39.148    LINK hello_blob
00:30:39.148    LINK hello_bdev
00:30:39.148    LINK accel_perf
00:30:39.148    LINK dif
00:30:41.685    CC examples/bdev/bdevperf/bdevperf.o
00:30:42.253    LINK bdevperf
00:30:42.823    CC examples/ioat/verify/verify.o
00:30:43.393    LINK verify
00:30:43.963    CC examples/nvme/hello_world/hello_world.o
00:30:44.985    LINK hello_world
00:30:53.103    CC examples/nvme/reconnect/reconnect.o
00:30:54.484    LINK reconnect
00:30:58.678    CC test/app/bdev_svc/bdev_svc.o
00:30:59.246    LINK bdev_svc
00:31:21.186    CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o
00:31:22.123    LINK nvme_fuzz
00:31:34.332    CC examples/nvme/nvme_manage/nvme_manage.o
00:31:36.903    LINK nvme_manage
00:32:15.653    CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o
00:32:15.653    LINK iscsi_fuzz
00:32:22.222    CC examples/nvme/arbitration/arbitration.o
00:32:23.601    LINK arbitration
00:32:38.491    CC examples/nvme/hotplug/hotplug.o
00:32:38.491    LINK hotplug
00:32:38.491    CC app/spdk_lspci/spdk_lspci.o
00:32:39.428    LINK spdk_lspci
00:32:40.365    CC examples/blob/cli/blobcli.o
00:32:42.937    LINK blobcli
00:32:49.506    CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o
00:32:49.506    CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o
00:32:50.074    LINK vhost_fuzz
00:32:51.980    CC examples/nvme/cmb_copy/cmb_copy.o
00:32:52.556    LINK cmb_copy
00:32:53.492    CC examples/sock/hello_world/hello_sock.o
00:32:54.869    LINK hello_sock
00:32:54.869    CC app/spdk_nvme_perf/perf.o
00:32:56.768    CC test/app/histogram_perf/histogram_perf.o
00:32:57.026    LINK histogram_perf
00:32:57.026    LINK spdk_nvme_perf
00:33:01.214    CC test/bdev/bdevio/bdevio.o
00:33:02.590    LINK bdevio
00:33:03.525    CC test/blobfs/mkfs/mkfs.o
00:33:04.092    TEST_HEADER include/spdk/config.h
00:33:04.093    CXX test/cpp_headers/accel.o
00:33:04.093    LINK mkfs
00:33:04.352    CXX test/cpp_headers/accel_module.o
00:33:04.919    CC test/dma/test_dma/test_dma.o
00:33:04.919    CXX test/cpp_headers/assert.o
00:33:05.486    CXX test/cpp_headers/barrier.o
00:33:05.745    LINK test_dma
00:33:06.004    CXX test/cpp_headers/base64.o
00:33:06.570    CC test/app/jsoncat/jsoncat.o
00:33:06.570    CXX test/cpp_headers/bdev.o
00:33:06.828    LINK jsoncat
00:33:07.087    CXX test/cpp_headers/bdev_module.o
00:33:07.346    CC test/app/stub/stub.o
00:33:07.912    CXX test/cpp_headers/bdev_zone.o
00:33:07.912    LINK stub
00:33:08.171    CC test/env/mem_callbacks/mem_callbacks.o
00:33:08.429    CXX test/cpp_headers/bit_array.o
00:33:08.688    LINK mem_callbacks
00:33:08.947    CXX test/cpp_headers/bit_pool.o
00:33:09.513    CXX test/cpp_headers/blob.o
00:33:09.772    CXX test/cpp_headers/blob_bdev.o
00:33:10.339    CC examples/nvme/abort/abort.o
00:33:10.339    CXX test/cpp_headers/blobfs.o
00:33:10.907    CXX test/cpp_headers/blobfs_bdev.o
00:33:11.166    LINK abort
00:33:11.426    CC test/env/vtophys/vtophys.o
00:33:11.685    CXX test/cpp_headers/conf.o
00:33:11.944    CXX test/cpp_headers/config.o
00:33:11.944    LINK vtophys
00:33:12.202    CXX test/cpp_headers/cpuset.o
00:33:12.769    CXX test/cpp_headers/crc16.o
00:33:13.715    CXX test/cpp_headers/crc32.o
00:33:14.008    CXX test/cpp_headers/crc64.o
00:33:14.946    CXX test/cpp_headers/dif.o
00:33:15.514    CC app/spdk_nvme_identify/identify.o
00:33:15.773    CXX test/cpp_headers/dma.o
00:33:17.150    CXX test/cpp_headers/endian.o
00:33:18.086    LINK spdk_nvme_identify
00:33:18.086    CXX test/cpp_headers/env.o
00:33:19.022    CXX test/cpp_headers/env_dpdk.o
00:33:19.959    CXX test/cpp_headers/event.o
00:33:20.526    CXX test/cpp_headers/fd.o
00:33:21.903    CXX test/cpp_headers/fd_group.o
00:33:21.904    CC test/event/event_perf/event_perf.o
00:33:22.472    CXX test/cpp_headers/file.o
00:33:22.730    LINK event_perf
00:33:23.669    CXX test/cpp_headers/ftl.o
00:33:25.046    CXX test/cpp_headers/gpt_spec.o
00:33:25.046    CC test/env/env_dpdk_post_init/env_dpdk_post_init.o
00:33:25.983    CXX test/cpp_headers/hexlify.o
00:33:25.983    LINK env_dpdk_post_init
00:33:27.360    CXX test/cpp_headers/histogram_data.o
00:33:28.299    CXX test/cpp_headers/idxd.o
00:33:29.679    CXX test/cpp_headers/idxd_spec.o
00:33:31.058    CXX test/cpp_headers/init.o
00:33:32.453    CXX test/cpp_headers/ioat.o
00:33:33.391    CXX test/cpp_headers/ioat_spec.o
00:33:34.770    CXX test/cpp_headers/iscsi_spec.o
00:33:35.708    CXX test/cpp_headers/json.o
00:33:37.086    CXX test/cpp_headers/jsonrpc.o
00:33:38.025    CXX test/cpp_headers/likely.o
00:33:38.963    CXX test/cpp_headers/log.o
00:33:39.901    CXX test/cpp_headers/lvol.o
00:33:40.840    CXX test/cpp_headers/memory.o
00:33:41.777    CXX test/cpp_headers/mmio.o
00:33:41.777    CC test/event/reactor/reactor.o
00:33:42.037    CXX test/cpp_headers/nbd.o
00:33:42.605    CXX test/cpp_headers/notify.o
00:33:42.605    LINK reactor
00:33:43.174    CC test/lvol/esnap/esnap.o
00:33:43.433    CC examples/nvme/pmr_persistence/pmr_persistence.o
00:33:43.433    CXX test/cpp_headers/nvme.o
00:33:44.370    LINK pmr_persistence
00:33:44.631    CXX test/cpp_headers/nvme_intel.o
00:33:45.567    CXX test/cpp_headers/nvme_ocssd.o
00:33:46.949    CXX test/cpp_headers/nvme_ocssd_spec.o
00:33:47.517    CXX test/cpp_headers/nvme_spec.o
00:33:48.896    CXX test/cpp_headers/nvme_zns.o
00:33:49.832    CXX test/cpp_headers/nvmf.o
00:33:50.773    CXX test/cpp_headers/nvmf_cmd.o
00:33:51.710    CXX test/cpp_headers/nvmf_fc_spec.o
00:33:53.090    CXX test/cpp_headers/nvmf_spec.o
00:33:53.658    CXX test/cpp_headers/nvmf_transport.o
00:33:55.039    CXX test/cpp_headers/opal.o
00:33:55.039    LINK esnap
00:33:55.606    CC test/env/memory/memory_ut.o
00:33:55.865    CXX test/cpp_headers/opal_spec.o
00:33:56.803    CXX test/cpp_headers/pci_ids.o
00:33:56.803    LINK memory_ut
00:33:57.371    CXX test/cpp_headers/pipe.o
00:33:57.371    CC test/event/reactor_perf/reactor_perf.o
00:33:57.941    LINK reactor_perf
00:33:57.941    CXX test/cpp_headers/queue.o
00:33:58.201    CXX test/cpp_headers/reduce.o
00:33:58.769    CXX test/cpp_headers/rpc.o
00:33:59.706    CXX test/cpp_headers/scheduler.o
00:34:00.645    CXX test/cpp_headers/scsi.o
00:34:01.583    CXX test/cpp_headers/scsi_spec.o
00:34:01.842    CC app/spdk_nvme_discover/discovery_aer.o
00:34:02.410    CXX test/cpp_headers/sock.o
00:34:02.670    LINK spdk_nvme_discover
00:34:03.239    CXX test/cpp_headers/stdinc.o
00:34:04.178    CXX test/cpp_headers/string.o
00:34:05.114    CC test/env/pci/pci_ut.o
00:34:05.114    CXX test/cpp_headers/thread.o
00:34:05.372    CC test/event/app_repeat/app_repeat.o
00:34:05.631    CXX test/cpp_headers/trace.o
00:34:06.199    LINK app_repeat
00:34:06.199    LINK pci_ut
00:34:06.460    CXX test/cpp_headers/trace_parser.o
00:34:07.400    CXX test/cpp_headers/tree.o
00:34:07.400    CXX test/cpp_headers/ublk.o
00:34:08.352    CXX test/cpp_headers/util.o
00:34:09.324    CXX test/cpp_headers/uuid.o
00:34:10.260    CXX test/cpp_headers/version.o
00:34:10.260    CXX test/cpp_headers/vfio_user_pci.o
00:34:11.194    CXX test/cpp_headers/vfio_user_spec.o
00:34:11.761    CXX test/cpp_headers/vhost.o
00:34:12.698    CC test/event/scheduler/scheduler.o
00:34:12.698    CXX test/cpp_headers/vmd.o
00:34:13.633    CC examples/vmd/lsvmd/lsvmd.o
00:34:13.633    LINK scheduler
00:34:14.200    CXX test/cpp_headers/xor.o
00:34:14.458    LINK lsvmd
00:34:15.025    CXX test/cpp_headers/zipf.o
00:34:15.593    CC examples/vmd/led/led.o
00:34:16.160    LINK led
00:34:16.726    CC test/nvme/aer/aer.o
00:34:18.160    LINK aer
00:34:26.280    CC examples/nvmf/nvmf/nvmf.o
00:34:27.217    LINK nvmf
00:34:31.445    CC app/spdk_top/spdk_top.o
00:34:33.351    LINK spdk_top
00:34:33.351    CC examples/util/zipf/zipf.o
00:34:33.610    LINK zipf
00:34:35.517    CC examples/thread/thread/thread_ex.o
00:34:35.776    CC test/rpc_client/rpc_client_test.o
00:34:36.035    LINK rpc_client_test
00:34:36.035    LINK thread
00:34:36.294    CC test/thread/poller_perf/poller_perf.o
00:34:36.294    CC examples/idxd/perf/perf.o
00:34:36.294    CC test/thread/lock/spdk_lock.o
00:34:36.551    LINK poller_perf
00:34:36.551    CC app/vhost/vhost.o
00:34:36.810    LINK idxd_perf
00:34:37.069    LINK vhost
00:34:38.010    LINK spdk_lock
00:34:39.416    CC test/nvme/reset/reset.o
00:34:39.674    LINK reset
00:34:40.244    CC examples/interrupt_tgt/interrupt_tgt.o
00:34:40.812    LINK interrupt_tgt
00:34:46.088    CC app/spdk_dd/spdk_dd.o
00:34:47.025    CC test/unit/include/spdk/histogram_data.h/histogram_ut.o
00:34:47.285    LINK spdk_dd
00:34:47.544    LINK histogram_ut
00:34:48.113    CC app/fio/nvme/fio_plugin.o
00:34:50.020    LINK spdk_nvme
00:34:50.959    CC test/unit/lib/accel/accel.c/accel_ut.o
00:34:51.897    CC app/fio/bdev/fio_plugin.o
00:34:53.804    LINK spdk_bdev
00:34:57.095    LINK accel_ut
00:35:09.334    CC test/nvme/sgl/sgl.o
00:35:09.334    LINK sgl
00:35:19.303    CC test/unit/lib/bdev/bdev.c/bdev_ut.o
00:35:37.395    LINK bdev_ut
00:35:42.666    CC test/unit/lib/bdev/part.c/part_ut.o
00:35:47.932    CC test/nvme/e2edp/nvme_dp.o
00:35:49.307    LINK nvme_dp
00:35:49.873    CC test/unit/lib/bdev/scsi_nvme.c/scsi_nvme_ut.o
00:35:51.248    LINK part_ut
00:35:51.248    LINK scsi_nvme_ut
00:35:57.806    CC test/unit/lib/bdev/gpt/gpt.c/gpt_ut.o
00:35:58.374    CC test/unit/lib/bdev/vbdev_lvol.c/vbdev_lvol_ut.o
00:35:59.311    LINK gpt_ut
00:36:01.846    LINK vbdev_lvol_ut
00:36:06.039    CC test/unit/lib/blob/blob_bdev.c/blob_bdev_ut.o
00:36:07.947    LINK blob_bdev_ut
00:36:08.516    CC test/unit/lib/blobfs/tree.c/tree_ut.o
00:36:09.454    LINK tree_ut
00:36:11.989    CC test/unit/lib/blobfs/blobfs_async_ut/blobfs_async_ut.o
00:36:13.893    CC test/unit/lib/bdev/mt/bdev.c/bdev_ut.o
00:36:14.151    LINK blobfs_async_ut
00:36:15.528    CC test/unit/lib/blob/blob.c/blob_ut.o
00:36:16.465    CC test/unit/lib/bdev/raid/bdev_raid.c/bdev_raid_ut.o
00:36:16.465    CC test/nvme/overhead/overhead.o
00:36:17.401    LINK overhead
00:36:17.658    CC test/unit/lib/bdev/raid/bdev_raid_sb.c/bdev_raid_sb_ut.o
00:36:18.597    LINK bdev_raid_sb_ut
00:36:19.537    LINK bdev_ut
00:36:19.800    LINK bdev_raid_ut
00:36:23.087    CC test/unit/lib/bdev/raid/concat.c/concat_ut.o
00:36:24.023    CC test/unit/lib/dma/dma.c/dma_ut.o
00:36:24.589    LINK concat_ut
00:36:24.847    CC test/unit/lib/blobfs/blobfs_sync_ut/blobfs_sync_ut.o
00:36:25.104    LINK dma_ut
00:36:27.007    LINK blobfs_sync_ut
00:36:27.575    CC test/unit/lib/blobfs/blobfs_bdev.c/blobfs_bdev_ut.o
00:36:28.142    LINK blob_ut
00:36:28.142    LINK blobfs_bdev_ut
00:36:29.521    CC test/unit/lib/event/app.c/app_ut.o
00:36:29.780    CC test/unit/lib/ioat/ioat.c/ioat_ut.o
00:36:30.039    CC test/unit/lib/iscsi/conn.c/conn_ut.o
00:36:30.298    CC test/unit/lib/iscsi/init_grp.c/init_grp_ut.o
00:36:30.866    LINK app_ut
00:36:30.866    LINK ioat_ut
00:36:31.449    LINK init_grp_ut
00:36:31.709    CC test/unit/lib/bdev/raid/raid1.c/raid1_ut.o
00:36:31.968    LINK conn_ut
00:36:32.227    LINK raid1_ut
00:36:32.794    CC test/unit/lib/iscsi/iscsi.c/iscsi_ut.o
00:36:33.363    CC test/unit/lib/event/reactor.c/reactor_ut.o
00:36:34.300    LINK reactor_ut
00:36:34.559    CC test/unit/lib/bdev/raid/raid5f.c/raid5f_ut.o
00:36:34.559    CC test/unit/lib/iscsi/param.c/param_ut.o
00:36:34.817    LINK iscsi_ut
00:36:35.077    CC test/nvme/err_injection/err_injection.o
00:36:35.077    LINK param_ut
00:36:35.336    CC test/unit/lib/json/json_parse.c/json_parse_ut.o
00:36:35.336    LINK err_injection
00:36:35.594    LINK raid5f_ut
00:36:36.980    CC test/unit/lib/jsonrpc/jsonrpc_server.c/jsonrpc_server_ut.o
00:36:37.239    LINK json_parse_ut
00:36:37.239    LINK jsonrpc_server_ut
00:36:37.239    CC test/unit/lib/log/log.c/log_ut.o
00:36:37.497    CC test/unit/lib/json/json_util.c/json_util_ut.o
00:36:38.065    LINK log_ut
00:36:38.323    LINK json_util_ut
00:36:38.581    CC test/unit/lib/json/json_write.c/json_write_ut.o
00:36:39.517    CC test/unit/lib/bdev/bdev_zone.c/bdev_zone_ut.o
00:36:39.517    CC test/unit/lib/lvol/lvol.c/lvol_ut.o
00:36:39.776    LINK json_write_ut
00:36:39.776    LINK bdev_zone_ut
00:36:41.152    CC test/unit/lib/iscsi/portal_grp.c/portal_grp_ut.o
00:36:41.152    CC test/unit/lib/notify/notify.c/notify_ut.o
00:36:41.152    LINK lvol_ut
00:36:41.411    LINK notify_ut
00:36:41.670    LINK portal_grp_ut
00:36:41.670    CC test/unit/lib/iscsi/tgt_node.c/tgt_node_ut.o
00:36:41.929    CC test/unit/lib/bdev/vbdev_zone_block.c/vbdev_zone_block_ut.o
00:36:42.188    CC test/unit/lib/nvme/nvme.c/nvme_ut.o
00:36:42.754    LINK tgt_node_ut
00:36:43.346    LINK vbdev_zone_block_ut
00:36:43.604    CC test/unit/lib/nvme/nvme_ctrlr.c/nvme_ctrlr_ut.o
00:36:43.862    CC test/nvme/startup/startup.o
00:36:44.120    LINK nvme_ut
00:36:44.378    LINK startup
00:36:44.635    CC test/unit/lib/nvmf/tcp.c/tcp_ut.o
00:36:45.570    CC test/unit/lib/scsi/dev.c/dev_ut.o
00:36:45.570    CC test/unit/lib/nvme/nvme_ctrlr_cmd.c/nvme_ctrlr_cmd_ut.o
00:36:45.570    CC test/unit/lib/nvme/nvme_ctrlr_ocssd_cmd.c/nvme_ctrlr_ocssd_cmd_ut.o
00:36:45.828    LINK dev_ut
00:36:45.828    CC test/unit/lib/nvme/nvme_ns.c/nvme_ns_ut.o
00:36:45.828    LINK tcp_ut
00:36:45.828    LINK nvme_ctrlr_ut
00:36:46.394    LINK nvme_ctrlr_cmd_ut
00:36:46.394    CC test/unit/lib/bdev/nvme/bdev_nvme.c/bdev_nvme_ut.o
00:36:46.651    LINK nvme_ctrlr_ocssd_cmd_ut
00:36:46.651    LINK nvme_ns_ut
00:36:46.651    CC test/unit/lib/sock/sock.c/sock_ut.o
00:36:49.180    LINK sock_ut
00:36:49.439    CC test/unit/lib/sock/posix.c/posix_ut.o
00:36:50.373    LINK posix_ut
00:36:50.941    CC test/unit/lib/thread/thread.c/thread_ut.o
00:36:50.941    LINK bdev_nvme_ut
00:36:50.941    CC test/unit/lib/util/base64.c/base64_ut.o
00:36:51.200    CC test/unit/lib/nvme/nvme_ns_cmd.c/nvme_ns_cmd_ut.o
00:36:51.200    CC test/unit/lib/scsi/lun.c/lun_ut.o
00:36:51.459    LINK base64_ut
00:36:52.028    LINK lun_ut
00:36:52.028    LINK thread_ut
00:36:52.287    CC test/unit/lib/nvmf/ctrlr.c/ctrlr_ut.o
00:36:53.665    LINK nvme_ns_cmd_ut
00:36:53.924    CC test/unit/lib/nvme/nvme_ns_ocssd_cmd.c/nvme_ns_ocssd_cmd_ut.o
00:36:54.182    CC test/unit/lib/util/bit_array.c/bit_array_ut.o
00:36:54.748    LINK bit_array_ut
00:36:54.748    CC test/unit/lib/nvme/nvme_pcie.c/nvme_pcie_ut.o
00:36:54.748    LINK ctrlr_ut
00:36:54.748    CC test/nvme/reserve/reserve.o
00:36:55.005    LINK reserve
00:36:55.265    CC test/unit/lib/scsi/scsi.c/scsi_ut.o
00:36:55.524    LINK nvme_ns_ocssd_cmd_ut
00:36:55.524    LINK scsi_ut
00:36:55.783    CC test/unit/lib/thread/iobuf.c/iobuf_ut.o
00:36:56.352    LINK iobuf_ut
00:36:56.352    LINK nvme_pcie_ut
00:36:56.611    CC test/unit/lib/util/cpuset.c/cpuset_ut.o
00:36:56.869    LINK cpuset_ut
00:36:56.869    CC test/unit/lib/util/crc16.c/crc16_ut.o
00:36:56.869    CC test/unit/lib/scsi/scsi_bdev.c/scsi_bdev_ut.o
00:36:57.132    LINK crc16_ut
00:36:57.734    CC test/unit/lib/scsi/scsi_pr.c/scsi_pr_ut.o
00:36:58.303    LINK scsi_bdev_ut
00:36:58.303    LINK scsi_pr_ut
00:36:58.870    CC test/unit/lib/nvme/nvme_poll_group.c/nvme_poll_group_ut.o
00:36:58.870    CC test/nvme/simple_copy/simple_copy.o
00:36:58.870    CC test/unit/lib/util/crc32_ieee.c/crc32_ieee_ut.o
00:36:59.129    LINK crc32_ieee_ut
00:36:59.129    LINK simple_copy
00:37:00.063    LINK nvme_poll_group_ut
00:37:00.322    CC test/unit/lib/nvmf/subsystem.c/subsystem_ut.o
00:37:00.322    CC test/unit/lib/util/crc32c.c/crc32c_ut.o
00:37:00.580    CC test/unit/lib/env_dpdk/pci_event.c/pci_event_ut.o
00:37:00.580    LINK crc32c_ut
00:37:00.838    CC test/unit/lib/nvme/nvme_qpair.c/nvme_qpair_ut.o
00:37:00.838    CC test/unit/lib/nvmf/ctrlr_discovery.c/ctrlr_discovery_ut.o
00:37:01.097    LINK pci_event_ut
00:37:01.662    LINK subsystem_ut
00:37:01.919    CC test/unit/lib/util/crc64.c/crc64_ut.o
00:37:02.176    LINK nvme_qpair_ut
00:37:02.176    LINK crc64_ut
00:37:02.434    LINK ctrlr_discovery_ut
00:37:02.434    CC test/unit/lib/util/dif.c/dif_ut.o
00:37:02.434    CC test/unit/lib/util/iov.c/iov_ut.o
00:37:02.999    LINK iov_ut
00:37:03.257    CC test/unit/lib/util/math.c/math_ut.o
00:37:03.516    LINK dif_ut
00:37:03.516    LINK math_ut
00:37:03.773    CC test/nvme/connect_stress/connect_stress.o
00:37:04.340    LINK connect_stress
00:37:04.599    CC test/unit/lib/util/pipe.c/pipe_ut.o
00:37:05.533    LINK pipe_ut
00:37:05.533    CC test/unit/lib/nvmf/ctrlr_bdev.c/ctrlr_bdev_ut.o
00:37:05.533    CC test/unit/lib/nvmf/nvmf.c/nvmf_ut.o
00:37:06.099    LINK ctrlr_bdev_ut
00:37:06.357    CC test/unit/lib/nvme/nvme_quirks.c/nvme_quirks_ut.o
00:37:06.615    CC test/unit/lib/nvme/nvme_tcp.c/nvme_tcp_ut.o
00:37:06.615    LINK nvmf_ut
00:37:07.178    CC test/unit/lib/nvme/nvme_transport.c/nvme_transport_ut.o
00:37:07.436    LINK nvme_quirks_ut
00:37:08.370    CC test/nvme/boot_partition/boot_partition.o
00:37:08.370    LINK nvme_transport_ut
00:37:08.629    CC test/unit/lib/util/string.c/string_ut.o
00:37:08.629    LINK boot_partition
00:37:08.887    CC test/unit/lib/util/xor.c/xor_ut.o
00:37:08.887    LINK string_ut
00:37:08.887    CC test/unit/lib/nvme/nvme_io_msg.c/nvme_io_msg_ut.o
00:37:09.145    LINK nvme_tcp_ut
00:37:09.145    LINK xor_ut
00:37:10.079    CC test/unit/lib/nvmf/rdma.c/rdma_ut.o
00:37:10.079    CC test/unit/lib/nvme/nvme_pcie_common.c/nvme_pcie_common_ut.o
00:37:10.079    LINK nvme_io_msg_ut
00:37:10.079    CC test/unit/lib/nvme/nvme_fabric.c/nvme_fabric_ut.o
00:37:11.012    CC test/unit/lib/nvme/nvme_opal.c/nvme_opal_ut.o
00:37:11.584    CC test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut.o
00:37:11.584    LINK nvme_fabric_ut
00:37:11.857    LINK nvme_opal_ut
00:37:11.858    LINK nvme_pcie_common_ut
00:37:12.809    LINK rdma_ut
00:37:12.809    CC test/unit/lib/init/subsystem.c/subsystem_ut.o
00:37:13.375    CC test/unit/lib/rpc/rpc.c/rpc_ut.o
00:37:13.375    LINK nvme_rdma_ut
00:37:13.633    LINK subsystem_ut
00:37:13.633    CC test/unit/lib/nvmf/transport.c/transport_ut.o
00:37:13.633    CC test/unit/lib/idxd/idxd_user.c/idxd_user_ut.o
00:37:13.891    LINK rpc_ut
00:37:13.891    CC test/unit/lib/vhost/vhost.c/vhost_ut.o
00:37:14.458    LINK idxd_user_ut
00:37:16.357    LINK transport_ut
00:37:16.357    LINK vhost_ut
00:37:16.357    CC test/nvme/compliance/nvme_compliance.o
00:37:16.614    CC test/unit/lib/nvme/nvme_cuse.c/nvme_cuse_ut.o
00:37:16.872    CC test/unit/lib/idxd/idxd.c/idxd_ut.o
00:37:16.872    LINK nvme_compliance
00:37:17.439    CC test/nvme/fused_ordering/fused_ordering.o
00:37:17.439    LINK idxd_ut
00:37:17.698    LINK fused_ordering
00:37:17.956    CC test/unit/lib/rdma/common.c/common_ut.o
00:37:18.214    LINK nvme_cuse_ut
00:37:18.472    LINK common_ut
00:37:18.472    CC test/unit/lib/ftl/ftl_l2p/ftl_l2p_ut.o
00:37:19.039    LINK ftl_l2p_ut
00:37:19.039    CC test/unit/lib/ftl/ftl_band.c/ftl_band_ut.o
00:37:19.607    CC test/unit/lib/ftl/ftl_io.c/ftl_io_ut.o
00:37:20.540    LINK ftl_io_ut
00:37:20.798    CC test/unit/lib/ftl/ftl_bitmap.c/ftl_bitmap_ut.o
00:37:20.798    LINK ftl_band_ut
00:37:20.798    CC test/unit/lib/ftl/ftl_mempool.c/ftl_mempool_ut.o
00:37:21.055    LINK ftl_bitmap_ut
00:37:21.314    LINK ftl_mempool_ut
00:37:22.688    CC test/unit/lib/ftl/ftl_mngt/ftl_mngt_ut.o
00:37:22.688    CC test/nvme/doorbell_aers/doorbell_aers.o
00:37:22.946    LINK doorbell_aers
00:37:23.512    LINK ftl_mngt_ut
00:37:23.770    CC test/unit/lib/ftl/ftl_sb/ftl_sb_ut.o
00:37:23.770    CC test/nvme/fdp/fdp.o
00:37:23.770    CC test/nvme/cuse/cuse.o
00:37:24.337    LINK fdp
00:37:25.272    CC test/unit/lib/ftl/ftl_layout_upgrade/ftl_layout_upgrade_ut.o
00:37:25.272    LINK ftl_sb_ut
00:37:25.530    LINK cuse
00:37:26.479    LINK ftl_layout_upgrade_ut
00:37:58.574  json_parse_ut.c: In function ‘test_parse_nesting’:
00:37:58.574  json_parse_ut.c:616:1: note: variable tracking size limit exceeded with ‘-fvar-tracking-assignments’, retrying without
00:37:58.574    616 | test_parse_nesting(void)
00:37:58.574        | ^
00:37:58.574   17:21:49	-- spdk/autopackage.sh@44 -- $ make -j10 clean
00:37:58.574  make[1]: Nothing to be done for 'clean'.
00:38:01.860   17:21:54	-- spdk/autopackage.sh@46 -- $ timing_exit build_release
00:38:01.860   17:21:54	-- common/autotest_common.sh@728 -- $ xtrace_disable
00:38:01.860   17:21:54	-- common/autotest_common.sh@10 -- $ set +x
00:38:01.860   17:21:54	-- spdk/autopackage.sh@48 -- $ timing_finish
00:38:01.860   17:21:54	-- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:38:01.860   17:21:54	-- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']'
00:38:01.860   17:21:54	-- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
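timing_finish feeds timing.txt to flamegraph.pl, whose input is folded-stack lines of the form 'outer;inner seconds'; --nametype Step: and --countname seconds only change the SVG labels. A hypothetical pair of helpers that would emit that format (the real timing_enter/timing_exit live in autotest_common.sh and are more involved):

    declare -a _stack; declare -A _start
    timing_enter() { _stack+=("$1"); _start[$1]=$SECONDS; }
    timing_exit() {
        local path; path=$(IFS=';'; echo "${_stack[*]}")
        echo "$path $(( SECONDS - _start[$1] ))" >> timing.txt
        unset '_stack[-1]'       # pop the innermost step (bash 4.3+)
    }

    timing_enter build_release
    timing_enter configure; sleep 2; timing_exit configure
    timing_exit build_release
    # flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds timing.txt > timing.svg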
00:38:01.860  + [[ -n 2284 ]]
00:38:01.860  + sudo kill 2284
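The trailing '[[ -n 2284 ]]' / 'sudo kill 2284' pair stops a background helper whose PID was recorded when the job started. The usual shape of that start/stop pairing, sketched with an assumed PID variable name (this log does not show the start side):

    some_logger > "$OUT/monitor.log" &   # illustrative background process
    MON_PID=$!
    # ... test run ...
    [[ -n $MON_PID ]] && sudo kill "$MON_PID"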
00:38:01.869  [Pipeline] }
00:38:01.889  [Pipeline] // timeout
00:38:01.894  [Pipeline] }
00:38:01.908  [Pipeline] // stage
00:38:01.914  [Pipeline] }
00:38:01.928  [Pipeline] // catchError
00:38:01.938  [Pipeline] stage
00:38:01.941  [Pipeline] { (Stop VM)
00:38:01.955  [Pipeline] sh
00:38:02.237  + vagrant halt
00:38:05.526  ==> default: Halting domain...
00:38:15.518  [Pipeline] sh
00:38:15.801  + vagrant destroy -f
00:38:19.088  ==> default: Removing domain...
00:38:19.101  [Pipeline] sh
00:38:19.382  + mv output /var/jenkins/workspace/ubuntu22-vg-autotest/output
00:38:19.392  [Pipeline] }
00:38:19.406  [Pipeline] // stage
00:38:19.412  [Pipeline] }
00:38:19.426  [Pipeline] // dir
00:38:19.431  [Pipeline] }
00:38:19.445  [Pipeline] // wrap
00:38:19.451  [Pipeline] }
00:38:19.467  [Pipeline] // catchError
00:38:19.476  [Pipeline] stage
00:38:19.479  [Pipeline] { (Epilogue)
00:38:19.491  [Pipeline] sh
00:38:19.774  + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:38:34.703  [Pipeline] catchError
00:38:34.705  [Pipeline] {
00:38:34.717  [Pipeline] sh
00:38:34.999  + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:38:35.259  Artifacts sizes are good
00:38:35.268  [Pipeline] }
00:38:35.282  [Pipeline] // catchError
00:38:35.293  [Pipeline] archiveArtifacts
00:38:35.300  Archiving artifacts
00:38:35.568  [Pipeline] cleanWs
00:38:35.579  [WS-CLEANUP] Deleting project workspace...
00:38:35.579  [WS-CLEANUP] Deferred wipeout is used...
00:38:35.585  [WS-CLEANUP] done
00:38:35.587  [Pipeline] }
00:38:35.602  [Pipeline] // stage
00:38:35.606  [Pipeline] }
00:38:35.619  [Pipeline] // node
00:38:35.624  [Pipeline] End of Pipeline
00:38:35.658  Finished: SUCCESS